A great tweet in my feed recently tied survivorship bias to IT security:
An analogy in Infosec is the analyst that sees only incoming attacks against their web servers & therefore decides that purchasing a WAF should have highest priority
— Florian Roth (@cyb3rops) March 2, 2019
Books on cognition and cognitive biases often reference this somewhat famous bit of thinking by a man named Abraham Wald. If you are interested, you can read more details about Wald’s observations here.
In this case, planes with certain areas damaged never made it back, and so no (or few) planes analyzed had damage to those areas. All of the damage observed happened on parts of the plane that could be damaged while allowing the plane to return.
Nassim Taleb called this “silent evidence” in his book “The Black Swan”. Arthur Conan Doyle paid a bit of homage to this phenomenon in the plot of a Sherlock Holmes story in which someone was murdered, but upon questioning, the neighbors didn’t recall hearing the victim’s dog barking. The absence of a barking dog in this Holmes story indicated that the perpetrator was someone the dog knew well.
As Florian points out in his tweet, we often react to the data we have, rather than being aware that we are unaware of missing data when we form a hypothesis or make a recommendation, such as investing in a new WAF because the only attack logs we have are those showing attacks on a web server. It’s a great point: the decision to buy a WAF is made without the benefit of knowing which of the myriad other attack vectors are being used, possibly successfully, against the organization, because there is no log information about them.
This raises a complex question: how do we know what we don’t know? Ultimately, we, as security people, have to take action and don’t have the luxury of philosophizing on the nature of uncertainty. We must make decisions under uncertainty, often quickly. What to do?
Here is the way I would approach this: I may have logs indicating persistent attacks on our web site, but the question I would ask is whether we have evidence that any of those attacks have been successful, or are likely to be successful. There’s nothing surprising about a public web site being attacked – everything that has an internet-routable IP address is constantly being scanned, probed, infected, and/or compromised. Since I do not have unlimited money to spend on security, I have to assess whether the web server is the most pressing thing to address. I need to consider what data I’m missing. In this case, I’m missing all of the data about all other types of attacks that might be happening. Am I able to detect those attacks? If not, would I know if any were successful?
When approaching a problem like this, it’s good to start with the basics. If I am not able to detect various types of attacks, it’s tough to prioritize where to implement control enhancements. A good place to start, therefore, is improving visibility using tools such as EDR, antivirus, IDS, and so on, depending on the situation. It’s been my experience that many organizations are simply unable to detect successful attacks, and so live blissfully ignorant of their data walking out the door. The act of enhancing visibility into such attacks often identifies serious problems that need to be addressed quickly. It’s at this point that I can compare the importance of a WAF against some other control.
Enhancing visibility doesn’t (necessarily) lead to improved controls as quickly as, say, running out and buying a WAF, but the visibility enhancements will help with prioritization and with building a business case for funding additional security controls. The investment in visibility is not throw-away: even after new preventive controls are in place, the ability to detect malicious activity is still vitally important, and can help refine other controls or identify shifts in adversarial tactics.
One problem I’ve experienced with improving visibility during my career is that, at least for a period of time, the perceived security of a company seems to take a serious turn for the worse, because I’m detecting things that were happening previously but which we simply didn’t know were happening. Any time we engage in such a program, it’s important to set expectations appropriately.
For more reading on this, I recommend the following books:
- How to Measure Anything, by Douglas Hubbard
- How to Measure Anything in Cybersecurity Risk, by Douglas Hubbard and Richard Seiersen
- The Failure of Risk Management, by Douglas Hubbard
- Measuring and Managing Information Risk: A FAIR Approach, by Jack Freund and Jack Jones