Human Nature and Cyber Security

This has been a particularly active year for large scale, public breaches in the news. Next year’s Data Breach Investigations Report from Verizon should provide some context on whether we are experiencing a “shark attack” phenomenon of continued media coverage of each new breach, or if this is really an exceptional year.

Regardless of whether we are trending above average or not, it’s pretty clear that a lot of companies are experiencing data breaches.

Information security is a series of trade-offs: investment vs. security, ease of use vs. security, operational costs vs. security and so on.  This isn’t a new or revolutionary concept.  Groups like SIRA focus on higher order efforts to quantify information risk to inform security strategy, justify investment in security programs and so on.

At a lower level, making intelligent decisions on the trade-offs involved in IT systems projects requires a well-informed assessment of the risks involved.  However, experiments in cognitive psychology and behavioral economics consistently demonstrate that humans have a raft of cognitive biases which impact decision making.  For instance, we are generally overconfident in our knowledge and abilities, and we tend to think about likelihood in the context of what we have personally experienced.  Uncertainty, inexperience or ignorance about exactly how IT system security can fail may lead to an improper assessment of risk.  If risks are not clearly understood, decisions made using those assessments will not be as accurate as expected.

Douglas Hubbard writes extensively on the topic of “expert calibration” in his book “How To Measure Anything”.  In this book, calibration involves training experts to more clearly understand and articulate their level of uncertainty when making assessments of likelihoods or impacts of events.  While it doesn’t eliminate error from subjective assessments, Mr. Hubbard claims that it demonstrably improves estimates provided by calibrated experts.  This calibration process likely makes these “experts” more aware of their cognitive biases.  Regardless of the exact mechanism, measurably improving estimates used in decision making is a good thing.
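To make the calibration idea concrete, here is a minimal sketch (my own illustration, not anything taken from Hubbard’s book) of how such an exercise might be scored: an expert answers estimation questions with 90% confidence intervals, and we simply check how often the true value lands inside them.

```python
# Minimal sketch of scoring a calibration exercise: an expert gives a 90%
# confidence interval (low, high) for each question; a well-calibrated expert
# should capture the true value roughly 90% of the time.

def calibration_hit_rate(answers):
    """answers: list of (low_estimate, high_estimate, true_value) tuples."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

# Hypothetical responses from one expert (invented data for illustration).
responses = [
    (1900, 1910, 1912),   # miss
    (5000, 6000, 6650),   # miss
    (30, 40, 37),         # hit
    (150, 250, 211),      # hit
    (10, 50, 88),         # miss
]

rate = calibration_hit_rate(responses)
print(f"Stated confidence: 90%, observed hit rate: {rate:.0%}")
```

A hit rate well below 90% is the signature of overconfidence; Hubbard’s claim is that training measurably closes that gap.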

Information security could benefit from a similar calibration concept.  Understanding the mechanisms through which IT systems can be breached underpins our ability to make reasonable assessments about the risks and likelihood of a breach in a given environment.

To pick on Target for a minute:

Would having a clear understanding of the ways the external vendor application could be compromised have changed the decision to have that server authenticate against the company’s Active Directory system?  An application to coordinate the activities of the myriad vendors a company the size of Target works with is almost certainly a necessity, but would a better understanding of the ways a vendor management server could be exploited have made the case for isolating the application from the rest of the Target network, with the tradeoff of higher operational costs?  Clearly, that question can only be answered by those present when the decision was made.

Daniel Kahneman, in his book “Thinking, Fast and Slow”, describes a cognitive bias he calls the availability heuristic. Essentially, this idea posits that people judge concepts and likelihoods based on their ability to recall something from memory; if it can’t be recalled, it is treated as unimportant. Similarly, Thomas Schelling, a Nobel Prize-winning economist, wrote:

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nate Silver’s book “The Signal and the Noise” has an excellent chapter on this concept (Chapter 13).

To become calibrated experts who can clearly assess security risks arising from systems, the IT industry seemingly would benefit from a broader understanding of the methods used to penetrate systems and networks.  Certainly this will not “solve” the problem of breaches, but it should help us make better-informed decisions regarding IT security tradeoffs.

Nor does this mean that organizations will or should always choose the least risky or most secure path.  Businesses have to deal with risk all the time and often have to accept risk in order to move forward.  The point here is that organizations are often seemingly not fully cognizant of risks they accept when making IT decisions, due to human biases, conflicts and ignorance.

A popular blog post by Wendy Nather recently pushed back on the offensive security effort, pointing out that things will not get better if we just keep pointing out what is wrong; rather, the way forward is to start fixing things.  My view is that both the offensive and defensive sides are important to the security ecosystem.  Certainly things will NOT get better until we start fixing them.  However, “we” is a limited population.  To tackle the fundamental problems with security, we need to engage the IT industry – not just those people with “security” in their titles.  And we need those who do have “security” in their titles to be more consistently aware of threats.  Focusing solely on defense, as that blog post urges, will yield some short-term improvements in some organizations.  However, building consistent awareness of IT security risks, particularly in the people responsible for assessing those risks, should help all organizations avoid being surprised when Brian Krebs calls them up with unfortunate news.

Pay Attention To Anti-Virus Logs

I’m often quite critical of anti-virus and its poor ability to actually detect most of the malware that a computer is likely to see in normal operation.  Anti-virus can detect what it can detect, and that means that generally, if the AV engine detects malware, the malware was probably blocked from getting a foothold on the computer.  In my experience, that has led to apathy toward anti-virus logs: like firewall logs full of blocked connections, AV logs show you what was successfully blocked.  As I’ve mentioned on my cyber security podcast a number of times, there are a few important reasons to pay attention to those AV logs.

First, AV logs that show detected malware on servers, particularly where the server is not a file server, should prompt some investigation.  Frequently, some of the tools an attacker will try to push to a target server will be caught by an AV engine and deleted or quarantined.  The attacker may have to iterate through a few different tools to find one that is not detected prior to moving forward in the attack.  Paying attention to AV logs in this circumstance provides an opportunity to identify an attack during the early stages.   I’ve seen this technique most effectively used on Internet facing web servers, where almost any AV detection is bound to be an indication of an active attack.
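As a rough illustration of that idea, here is a small sketch that flags any AV detection on hosts that look like servers.  The log format, field names and hostname convention are all assumptions on my part; real AV consoles export this data in their own formats.

```python
# Illustrative sketch: treat ANY anti-virus detection on a server as an event
# worth investigating. Field names and the hostname convention are assumptions.
import csv

SERVER_PREFIXES = ("srv-", "web-", "db-")   # assumed server naming convention

def server_detections(av_log_path):
    """Yield AV detection rows whose hostname looks like a server."""
    with open(av_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["hostname"].lower().startswith(SERVER_PREFIXES):
                yield row

for event in server_detections("av_detections.csv"):
    # On an Internet-facing web server, almost any hit deserves a closer look.
    print(f"INVESTIGATE: {event['hostname']} {event['timestamp']} {event['threat_name']}")
```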

Second, on workstations, AV detection events will necessarily be more common than on non-interactive servers, due to the nature of email attachments, web browsing, downloads, USB drives and so on.  In this case, it is more reasonable to accept that AV blocked a particular piece of malware, and generally unworkable to chase after each detected event.  However, there are two opportunities to leverage AV logs in this circumstance to shut down infections.  If a particular workstation detects many pieces of malware over a relatively short time, this may be an indication that the person using the workstation is doing something inappropriate, or that the system has some other undetected malware infection and AV is catching second-order infection attempts.  In either case, the workstation likely deserves a look.
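A similar sketch, under the same assumed log format, for the noisy-workstation case: flag any workstation with an unusually high number of detections in a short window.  The threshold and window are arbitrary and should be tuned to your environment.

```python
# Illustrative sketch: flag workstations with many AV detections in a short
# window, which may indicate risky behavior or an undetected primary infection.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)
THRESHOLD = 5   # detections per workstation per window; tune as needed

def noisy_workstations(av_log_path):
    events = defaultdict(list)
    with open(av_log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])  # assumes ISO timestamps
            events[row["hostname"]].append(ts)

    flagged = []
    for host, times in events.items():
        times.sort()
        # If any THRESHOLD consecutive detections fall within WINDOW, flag the host.
        for i in range(len(times) - THRESHOLD + 1):
            if times[i + THRESHOLD - 1] - times[i] <= WINDOW:
                flagged.append(host)
                break
    return flagged

print(noisy_workstations("av_detections.csv"))
```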

Additionally, on workstations, certain kinds of malware detection events uncovered during full drive scans should warrant a look at the computer.  Frequently, a piece of malware will not be detected at first, but as other organizations find and submit samples of the malware, AV detection will improve and a previously undetected infection is suddenly detected.

I think it’s important to reiterate that AV is not all that effective at preventing malware infections.  However, most of us have significant investments in our AV infrastructure, and we ought to be looking for ways to ensure we are getting the best leverage out of the tools that we have deployed in our environments.

Have you found a clever way to use AV?  Post a message below.

Something is Phishy About The Russian CyberVor Password Discovery

If you’re reading this, you are certainly aware of the story of Hold Security’s recent announcement of 1,200,000,000 unique user IDs and passwords being uncovered.

I’m not going to pile on to the stories that assert this is a PR stunt by Hold.  In fact, I think Hold has done some great things in the past, in conjunction with Brian Krebs in uncovering some significant breaches.

However, there are a few aspects of Hold’s announcement that just don’t make sense… At least to me:

The announcement is that 1.2B usernames and passwords were obtained partly by pilfering other data dumps – presumably from the myriad of breaches we know of, like eBay, Adobe, and so on – and partly from a botnet that ran SQL injection attacks on web sites visited by the users of infected computers, which apparently resulted in database dumps from many of those web sites.  420,000 of them, in fact.

That seems like a plausible story.  The SQL injection attack most likely leveraged some very common vulnerabilities – probably in WordPress plugins or in Joomla or something similar.  However, nearly all of the passwords obtained, certainly the ones from the SQL injection attacks, would be hashed in some manner.  Even the Adobe and eBay password dumps were at least “encrypted” – whatever that means.

The assertion is that there were 4.5B “records” found, which netted out to 1.2B unique credentials, belonging to 500M unique email addresses.

I contend that this Russian gang having brute forced 1.2B hashed and/or encrypted passwords is quite unlikely.  The much more likely case is that the dump contains 1.2B email addresses and hashed or encrypted passwords…  Still not a great situation, but not as dire as portrayed, at least for the end users.

If the dump does indeed have actual plain text passwords, which again is not clear from the announcement, I suspect the much more likely source would be phishing campaigns and/or keyloggers, potentially run by that botnet.  However, I believe that Hold would probably have seen evidence if that were the case and would most likely have said as much in the announcement, since it would be an even more interesting story.

Hold is clearly in communication with some of the organizations the records were stolen from, as indicated in the announcement.  What isn’t clear is whether Hold attempted to contact all of the recognizable organizations, or only the largest, or only those that had a previous agreement in place with Hold.  Certainly Hold has found an interesting niche and is attempting to capitalize on it – and that makes sense to me.  However, a business model that requires organizations to pay Hold in order to be notified if or when Hold finds evidence that the organization’s records have turned up is going to be controversial.  I’m not going to pass judgement yet.

Perspective on the Microsoft Weak Password Report

Researchers at Microsoft and Carleton University released a report that has gotten a lot of attention, with media headlines like “Why 123456 is a great password”.

The report is indeed interesting: mathematically modelling the difficulty of remembering complex passwords and optimizing the relationship between expected loss resulting from a breached account and the complexity of passwords.

The net finding is that humans have limitations on how much they can remember, and that is at odds with the current guidance of using a strong, unique password for each account.  The suggestion is that accounts should be grouped by loss characteristics, with those accounts that have the highest loss potential getting the strongest password, and the least important having something like “123456”.

The findings of the report are certainly interesting, however there seem to be a number of practical elements not considered, such as:

  • The paper seems focused on the realm of “personal use” passwords, however many people have to worry about both passwords for personal use and for “work” use.
  • Passwords used for one’s job usually have to be changed every 90 days, and are expected to be among the most secure passwords a person would use.
  • People generally do not invest much intellectual energy into segmenting accounts into high risk/low risk when creating passwords.  Often, password creation is done on the fly and stands in the way of some larger, short term objective, such as ordering flowers or getting logged in to send an urgent email to the boss.
  • The loss potential of a given account is not always obvious.
  • The loss potential of a given account likely does not remain constant over time.
  • There are many different minimum password requirements across different services that probably work against the idea of using simple passwords on less important sites.  For example, I have a financial account that does not permit letters in the password, and I have created accounts on trivial web forums that require at least 8 character passwords, with complexity requirements.

It’s disappointing that the report’s authors dismissed password managers as too risky because they represent a concentration of passwords which, when hosted “in the cloud”, could itself fall victim to password guessing attacks, leading to the loss of all passwords.  Password managers seem to me to be the only viable way to manage the proliferation of passwords many of us need to contend with.  A password manager removes the need to consider the relative importance of a new service and can create random, arbitrarily long and complex passwords on the fly, without the user needing to worry about remembering them – for either important or unimportant accounts.
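For what it’s worth, the “hard” part of a password manager’s job is trivial to illustrate.  The sketch below uses Python’s standard-library secrets module to generate a unique random password per service; real password managers add encrypted storage, syncing and browser integration on top of this.

```python
# Minimal sketch of what a password manager does on your behalf: generate a
# random, arbitrarily long password per service so nothing is memorized or reused.
import secrets
import string

def generate_password(length=24,
                      alphabet=string.ascii_letters + string.digits + string.punctuation):
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique credential per service, regardless of how "important" the site is.
# The site names are placeholders.
vault = {site: generate_password() for site in ("bank.example", "forum.example", "florist.example")}
for site, pw in vault.items():
    print(site, pw)
```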

Now, not all password managers are created equally.  We recently saw a flurry of serious issues with online password managers.  Certainly diligence is required when picking a password manager, and that is certainly not a simple task for most people.  However, I would prefer to see a discussion on how to educate people on rating password managers than encouraging them to use trivial passwords in certain circumstances.

I don’t mean to be overly critical of the report.  I see some practical use for this research by organizations when considering their password strategies.  Specifically, it’s not reasonable to expect employees to pick strong passwords for a number of business-related accounts and then not write them down, record them somewhere, or fall back on a predictable system of passwords.  It gets worse when those employees are also expected to change their passwords every 90 days and to use different passwords on different systems.  Finally, those same employees also have to remember “strong” passwords for some number of personal accounts, which only adds to the memory burden.

In short, I think that this report highlights the importance of using password managers, both for business and for personal purposes.  And yes, I am ignoring multi-factor authentication schemes which, if implemented properly, would be a superior solution.

Why Changing Passwords Might Be A Good Idea After A Data Breach

During my daily reading today, I found this article titled Why changing passwords isn’t the answer to a data breach.  The post brings up a good point: breached organizations would serve their customers or users better if they gave more useful guidance after a breach, rather than just “change your passwords”.  The author’s suggestion is to provide recommendations on how to pick a strong password, rather than simply telling users to change it.

I think the author missed an important point though: it’s proving to be a bad idea to use the same password on different sites, no matter the strength of the password.  Perhaps if customers or users had an indication of how passwords were stored on a given site or service, they could make a judgement call about whether to use their strong password or to create a separate password for that site alone.  However, that’s not the world we live in.  We don’t normally get to know that the site we just signed up for stores passwords in plain text or as an unsalted md5 hash.

Passwords should be strong AND unique across sites, but those goals are seemingly at odds.  The passwords we see in password dumps are short and trivial for a reason: they are easy to remember!  If we want someone to create a password like this: co%rre£ctho^rseba&tteryst(aple, we have to accept that the average person is either not going to do it because it’s too hard to remember, or if they can remember it, that’ll be their password across sites – until, of course, they hit on a site that won’t accept certain characters.

The “best” answer is some form of multi-factor authentication, though it is by no means perfect.  The major problem with multi-factor authentication is that the services we use have to support it.  The next best thing is a password manager.  Password managers let users create a strong and unique password for each service and don’t require the person to remember multiple hard-to-crack passwords.  Certainly password managers are not perfect, and the good ones tend not to be free, either.

So, I would really like to see a breached organization that lost a password database encourage impacted users to use a strong, unique password on each site and to use a password manager.

Maybe we could see companies buying a year of 1Password or Lastpass* for affected customers rather than a year of credit monitoring.

One last thing that I want to mention: I hear time and again about how bad an idea it is to pick a passphrase that consists of a series of memorable words, like “correcthorsebatterystaple” as presented in XKCD.   I’ve heard many hypotheses about why this is a bad idea, and the author points out that hashcat can make quick work of such a password.  However, this kind of idea is at the center of a password scheme called “Diceware”.  Diceware creates a passphrase by rolling dice to look up a sequence of words in a dictionary, and done properly it is quite secure.  It’s not tough to imagine “correcthorsebatterystaple” being the output of Diceware.  The trap I see most people fall into when disputing the approach is focusing on the number of words in the passphrase, with intuition sensibly telling us that there are not all that many ways to arrange 3 or 4 words.  However, when you consider it mathematically, you realize each word should be thought of as just a character – a character drawn from a very large set.  A 12 character password drawn from the 95 printable ASCII characters has 95^12 (~5×10^23, about 79 bits) combinations.  Each Diceware word is one “character” drawn from a 7,776 word alphabet, contributing about 12.9 bits, so a 4 word passphrase has 7776^4 (~4×10^15) combinations and a 6 word passphrase has 7776^6 (~2×10^23) – in the same ballpark as the fully random 12 character password, while being far easier to remember.  Hopefully this puts the correcthorsebatterystaple story in a better light.
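For anyone who wants to check the arithmetic, here is a tiny script that computes the numbers above.  The only inputs are the alphabet size (95 printable ASCII characters, or a 7,776 word Diceware dictionary) and the length.

```python
# Worked numbers behind the comparison above: the strength of a passphrase is
# (dictionary size) ** (number of words), just as a random password's strength
# is (alphabet size) ** (length).
import math

def bits(alphabet_size, length):
    return length * math.log2(alphabet_size)

print(f"12 random printable-ASCII chars: 95^12  ~ {95**12:.1e} ({bits(95, 12):.0f} bits)")
for words in (4, 5, 6):
    combos = 7776 ** words
    print(f"{words}-word Diceware passphrase:  7776^{words} ~ {combos:.1e} ({bits(7776, words):.0f} bits)")
```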

* yes, I’m aware Lastpass just announced some vulnerabilities.

I Think I Was Wrong About Security Awareness Training

Andy and I had a bit of a debate on the usefulness of security awareness training in episode 75 of our podcast. The discussion came up while covering a story about ransom campaigns and how the author recommends amping up awareness training to avoid malware and spear phishing, the two main avenues of attack for these attackers.

I was on the side of there being some benefit and Andy on the side of it not being worthwhile.

The logic goes like this: attackers are becoming so sophisticated, that it isn’t practical to expect a lay person to be able to identify these attacks – technical controls are really the only thing that is going to be effective.

My thinking, at the time, was that awareness training is like anti-virus: you should have it in place to defend against those things that it can, but we all know there are plenty of attacks it won’t stop. I think that is still a reasonable assumption.

However, I’ve since thought about it some, and I think Andy is probably right…

Awareness training is about trying to establish some firewall rules in the minds of the people in an organization. There’s an implicit hope that the training will prevent *some* number of attacks and an understanding that it won’t catch all of them.

However, people aren’t wired to be a control point. There is a lot of research that demonstrates this point, notably in Dan Ariely’s “Predictably Irrational” books. Focus, attention, diligence and even ethics are influenced by many factors, and awareness training has to compete against the fundamental nature of people.

But it’s worse than just not effective, and that is why I think I’m wrong here. Awareness training *is* believed to be a security control by many. Awareness training is mandated by every security standard or framework I can think of, alongside antivirus, firewalls and the like. And because it is viewed as a control, we count on its effectiveness as part of our security program.

At least that is my intuition. I don’t have hard data to back it up, but it would be a pretty enlightening experiment – if it were done correctly, meaning not through an opinion survey.

Educating employees on company policies is clearly necessary. However, it seems that focusing on hard controls rather than awareness education would be a better investment. Those are things like:

  • Two-factor authentication, or password managers and strict password complexity requirements, instead of trying to teach what a strong password is
  • Controls to prevent the execution of malware delivered through email instead of teaching how to recognize malicious files (a conceptual sketch follows this list)
  • Controls to prevent browsing to phishing sites or exploit kits instead of teaching how to spot suspicious links
  • And so on.
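To make the second item concrete, here is a conceptual sketch of an attachment blocklist.  The extension list and the idea of expressing it as a standalone script are mine for illustration; in practice this is a policy setting on a mail gateway rather than code you write yourself.

```python
# Conceptual sketch of a "hard" control: reject email attachments with
# executable or script extensions at the gateway instead of hoping recipients
# recognize them. Extension list is illustrative, not exhaustive.
BLOCKED_EXTENSIONS = {
    ".exe", ".scr", ".pif", ".com", ".bat", ".cmd", ".js", ".jse",
    ".vbs", ".wsf", ".hta", ".jar", ".ps1",
}

def should_quarantine(filename: str) -> bool:
    name = filename.lower()
    return any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS)

for attachment in ("invoice.pdf", "statement.pdf.exe", "macro_helper.js"):
    action = "QUARANTINE" if should_quarantine(attachment) else "deliver"
    print(f"{attachment}: {action}")
```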

How Bad Administrator Hygiene Contributes To Breaches

I recently wrote about the problems associated with not understanding common attack techniques when designing an IT environment.  I consistently see another factor in breaches: bad hygiene.  This encompasses things such as:

  • Missing patches
  • Default passwords
  • Weak passwords
  • Unmanaged systems
  • Bad ID management practices

My observation is that, at least in some organizations, many of these items are viewed as “compliance problems”.  Administrators often don’t see the linkage between bad hygiene and security breaches. For the most part, these hygiene problems will not enable an initial compromise, though they certainly do from time to time.  What I see much more frequently is that some unforeseen mechanism results in an initial intrusion – SQL injection, spear phishing or a file upload vulnerability – and the attacker then leverages bad administrator hygiene to move deeper into the environment.

Most man-made disasters are not the product of a single problem, but rather a chain of failures that line up just right.  In the same way, many breaches are not the result of a single problem, but rather a number of problems that an attacker can uncover and exploit to move throughout an organization’s systems and ultimately accomplish their objective.

It’s important for network, server, application and database administrators to understand the implications of bad hygiene. Clearly, improving awareness doesn’t guarantee better diligence by those administrators.   However, drawing a clearer link between bad hygiene and its security consequences, rather than simply raising the ire of some auditors over a violation of a nebulous policy, should yield some amount of improvement.  That is my intuition, anyhow.

Security awareness is a frequently discussed topic in the information security world. Such training is almost exclusively thought of in the context of training hapless users on which email attachments not to open.  Maybe it’s time to start educating the IT department on contemporary security threats and attacker tactics so that they can see the direct connection between their duties and the methods of those attackers.

Threat Modeling, Overconfidence and Ignorance

Attackers continue to refine their tools and techniques, and barely a day goes by without news of some significant breach.  I’ve noticed a common thread through many breaches in my experience of handling dozens of incidents and researching many more for my podcast: the organization has a fundamental misunderstanding of the risks associated with the technology it has deployed and, more specifically, the way in which it deployed that technology.

When I think about this problem, I’m reminded of Gene Kranz’s line from Apollo 13: “I don’t care what anything was designed to do.  I care about what it can do.”

My observation is that there is little thought given to how things can go wrong with a particular implementation design.  Standard controls, such as user ID rights, file permissions, and so on, are trusted to keep things secure.  Anti-virus and IPS are layered on as supplementary controls, and systems are segregated onto functional networks with access restrictions, all intended to create defense in depth.  Anyone who is moderately bright and familiar with the technology at hand can cobble together what they believe is a robust design.  And the myriad of security standards will tend to back them up, by checking the boxes:

  • Firewalls in place? Check!
  • Software up-to-date? Check!
  • Anti-Virus installed and kept up-to-date? Check!
  • User IDs properly managed? Check!
  • Systems segregated onto separate networks as needed? Check!

And so on.  Until one fateful day, someone notices, by accident, a Domain Admin account that shouldn’t be there.  Or a call comes in from the FBI about “suspicious activity” happening on the organization’s network.  Or the Secret Service calls to say that credit cards the organization processed were found on a carder forum.  And it turns out that many of the organization’s servers and applications have been compromised.

In nearly every case, there was a mix of operational and architectural problems that contributed to the breach.  However, the operational issues tend to be incidental and vary from case to case: maybe it’s poorly written ASP.net code that allows file uploads, or maybe someone used Password1 as her administrator password, and so on.  The really serious contributors to the extent of a breach are the architectural problems.  These involve things like:

  • A web server on an Internet DMZ making calls to a database server located on an internal network.
  • A domain controller on an Internet DMZ with two-way access to other DCs on other parts of the internal network.
  • Having a mixed Internet/internal server DMZ, where firewall rules govern what is accessible from the Internet.

…And so it goes.  The number of permutations of how technology can be assembled seems nearly infinite.  Without an understanding of how the particular architecture proposed or in place can be leveraged by an attacker, organizations are ignorant of the actual risk to their organization.

For this reason, I believe it is important that traditional IT architects responsible for developing such environments have at least a conceptual understanding of how technology can be abused by attackers.  Threat modeling is also a valuable activity to uncover potential weaknesses, however doing so still requires people who are knowledgeable about the risks.

I also see some value in establishing common “design patterns”, similar to those seen in programming but at a much higher level, involving networked systems and applications, where well-thought-out designs could be the starting point for tweaking, rather than starting from nothing and trying to figure out the pitfalls of the new design along the way.  I suspect that would be difficult at best, given the extreme variability in business needs, technology choices and other constraints.

It’s All About The Benjamins… Or Why Details Matter

A team at Carnegie Mellon released a report a few weeks back, detailing the results of an experiment to determine how many people would run a suspicious application at different levels of compensation. The report paints a pretty cynical picture of the “average” Internet user, which generally meshes with our intuition.   Basically, the vast majority of participants ran suspicious code for $1, even though they knew it was dangerous to do so.  Worse, a significant number of participants ran the code for the low, low price of one cent.  This seems to paint a pretty dire picture, in the same vein as previous research where subjects gave up passwords for a candy bar.

However, I noticed a potential problem with the report.  The researchers relied on Amazon’s Mechanical Turk service to find participants for this study.  When performing studies like this, it’s important that the population sampled is representative of the population the study intends to estimate.  If the sample is not representative of the broader population, the results will be unreliable for estimating against that broader population.

Consider this scenario: I want to estimate the amount of physical activity the average adult in my city gets per day.  So, I set up a stand at the entrance to a shopping center where there is a gym, and I survey those who enter the parking lot.  With this methodology, I will not end up with an average amount of physical activity for the city, because I have skewed the numbers by setting up shop near a gym.  I will only be able to estimate the amount of physical activity for those people who frequent this particular shopping center.

The researchers cite a previous study which determined that the “workers” of Mechanical Turk are more or less representative of the average users of the Internet at large based on a number of demographic dimensions, like age, income and gender.

I contend that this is akin to finding that the kinds of stores in my hypothetical shopping center draw a demographically representative sample of the city as a whole, and in fact seeing that result in my parking lot survey.  My results are still unreliable, even though the visitors are, in fact, demographically representative of the city.  Why is that?  Because hours of physical activity is (mostly) orthogonal to the demographic dimensions I checked: income, age and gender.
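A toy simulation makes the point clearer.  All of the numbers below are invented: exercise hours are generated independently of age, but the chance of being sampled rises with exercise hours, so the sample matches the population demographically while still overstating the thing being measured.

```python
# Toy simulation of the gym-parking-lot survey: exercise hours are independent
# of the demographic we check (age), but people who exercise more are more
# likely to show up near the gym. All numbers are invented for illustration.
import random

random.seed(1)

population = [
    {"age": random.randint(18, 80), "exercise_hours": random.expovariate(1 / 0.5)}
    for _ in range(100_000)
]

# Probability of appearing in the parking-lot sample rises with exercise hours.
sample = [p for p in population if random.random() < min(1.0, 0.05 + 0.3 * p["exercise_hours"])]

mean = lambda xs: sum(xs) / len(xs)
print("population: mean age %.1f, mean exercise %.2f h/day" %
      (mean([p["age"] for p in population]), mean([p["exercise_hours"] for p in population])))
print("gym sample: mean age %.1f, mean exercise %.2f h/day" %
      (mean([p["age"] for p in sample]), mean([p["exercise_hours"] for p in sample])))
# The sample matches the population on age, yet overstates exercise hours.
```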

In the same fashion, I contend that while the demographics of Mechanical Turk “workers” match those of the average Internet user, the results are similarly unreliable for estimating all Internet users.  Mechanical Turk is a concentration of people who are willing to perform small tasks for small amounts of money.  I propose that the findings of the report are only representative of the population of Mechanical Turk users, not of the general population of Internet users.

It seems obvious that the average Internet user would indeed fall victim to this at some price, but we don’t know for sure what percentage and at what price points.

I still find the report fascinating, and it’s clear that someone with malicious intent could go to a marketplace like Mechanical Turk and make some money by issuing “jobs” to run pay-per-install malware.

Game Theory, Behavioral Economics and Anti-Virus

The information security community has lately been lamenting the ineffectiveness of anti-virus.  Report after report indicates that AV catches only between 5% and 55% of malware.  Can any organization justify the cost of such a generally ineffective control?  Even Symantec has stated that the usefulness of AV is waning.

However, when the bill comes for next year’s maintenance on your chosen AV platform, you’re going to pay it, aren’t you?  And so will nearly everyone else.

Why is that?  Behavioral economists have catalogued a number of cognitive biases in human psychology, such as “herd mentality”.  I suspect that we are inclined to “do what everyone else is doing”, which is indeed to keep AV around.  Another bias is the “sunk cost fallacy”.  We spent a lot of money deploying AV and have spent a lot of money each year since to keep it fed and cared for.  Abandoning AV would mean turning our back on the investment we’ve made, even if it would save us money now.

I think that there may be an even stronger game theoretic force at play here.  If I am responsible for security at my organization, I have many factors to consider when prioritizing my spending.  I may fully believe that AV will not provide additional malware protection beyond other controls in place, and therefore I could reallocate the savings from not using AV to some more productive purpose.  However, if there IS an incident involving malware at my organization and I made the choice to not use AV, even if AV  wouldn’t have stopped it, or if the damages suffered were much less than the savings from not using AV, I am probably going to be working on my resume.  Or I assume that I will.

I suspect this is a similar reason why we will not see requirements for AV relaxed in various security standards and frameworks any time soon.  From the perspective of a standards body, there is only downside in removing that requirement:

  • The AV industry, and probably others, may ridicule the standard for not prescribing the mainstay of security controls, which they have a financial incentive to keep in place
  • Organizations following the standard that have malware-related losses may point back to the standard and call it ineffective
  • The standards body generally does not incur the costs of including a given control, so from its perspective removing the AV requirement makes little sense: AV does catch some amount of malware, however small, and keeping the requirement costs the standards body nothing

You might be asking: “what exactly are you getting at here?”  I’m not proposing that you, or anyone else, dump AV.  I am proposing that we question why things are being done the way they are. As defenders, we have a limited amount of money and time to spend, and we ought to ensure we are prioritizing our security controls based on their effectiveness at mitigating risk to our systems and data, and not just because it’s what everyone else is doing.

I’ll also say that, if we’re not willing to dump AV, we ought to (at least from time to time) change the nature of the discussions and criticisms of AV into something productive.  For example, if AV is mandatory and it’s not all that effective, we ought to be purchasing the most economical product to save money for other endeavors.  Rather than simply comparing effectiveness rates, we could compare the cost per unit of effectiveness per user.  If I am paying $50/year/user for an AV platform that is 35% effective, it would be good to know that I could pay $25/year/user for one that is 30% effective.  This assumes, of course, that we settle on a standard methodology for rating the effectiveness of AV, which seems like a challenge on its own.
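Here is that back-of-the-envelope comparison worked through, using the hypothetical prices and detection rates from the paragraph above.

```python
# Back-of-the-envelope comparison: if AV is mandatory anyway, compare candidates
# on cost per point of (claimed) effectiveness. Figures are the hypothetical
# numbers from the text, not real product data.
candidates = {
    "Product A": {"cost_per_user_year": 50.0, "effectiveness": 0.35},
    "Product B": {"cost_per_user_year": 25.0, "effectiveness": 0.30},
}

for name, c in candidates.items():
    cost_per_point = c["cost_per_user_year"] / (c["effectiveness"] * 100)
    print(f"{name}: ${c['cost_per_user_year']:.2f}/user/yr at {c['effectiveness']:.0%} "
          f"detection -> ${cost_per_point:.2f} per percentage point of effectiveness")
# Product A works out to about $1.43 per point and Product B to about $0.83; whether
# the extra five points of detection are worth doubling the spend is the real question.
```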