How do we know something works? The debate about security awareness training continues to drag on, with proponents citing remarkable reductions in losses when training is applied and detractors pointing out that training doesn’t actually stop employees from falling victim to common security problems. Why is it so hard to tell if security awareness training “works”? Why do we continue to have this discussion?
My view, as I’ve written previously, is that cyber security is an art, not a science. We collectively “do stuff” because we think it’s the right thing to do. One night last week, over dinner, I was talking to my friend Bob, who works for a large company. His employer recently ran a phishing test of all employees after each had received training on identifying and avoiding phishing emails. Just over 20% of employees fell for the test after being trained. I asked Bob how effective his company found the training to be at reducing the failure rate. He didn’t know, since no test had been performed prior to the training. That’s a significant lost opportunity to gain insight into the value of the training.
Bob’s company spent a considerable amount of money on the training and the test, but they don’t know whether the training made a difference, and if so, by how much. Would 60% of employees have fallen for the phishes prior to training? If so, that would likely indicate the training was worthwhile. Or would only 21% have fallen for them, in which case the money spent on the training would have been much better spent on some other program to address the risks associated with phishing? Should Bob’s employer run the training again this year? If they do, at least they will be able to compare the test results to last year’s and hopefully derive some insight into the effectiveness of the program.
But that is not the end of the story. We do not have only two options available to us: to train or not to train. There are many, many variations in the content of the training, the delivery mechanism, the frequency, and the duration, to name a few. Security awareness training seems to be a great candidate for randomized control tests. Do employees who are trained cause fewer security-related problems than those who are not trained? Are some kinds of training more effective than others? Do some kinds of employees benefit more from training, or from specific types of training, than others? Is the training effective against some kinds of attacks and not others, indicating that the testing approach should be more comprehensive?
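To make this concrete, here is a minimal sketch, with entirely hypothetical numbers, of what the analysis of such an experiment could look like: randomly split employees into a trained group and an untrained control group, run the same phishing test against both, and use a two-proportion z-test to check whether the difference in failure rates is bigger than chance alone would explain. The group sizes and failure counts below are made up for illustration only.

```python
import math

def two_proportion_z_test(fail_a, n_a, fail_b, n_b):
    """Two-proportion z-test: is the failure rate in group A different from group B?"""
    p_a = fail_a / n_a                         # failure rate, trained group
    p_b = fail_b / n_b                         # failure rate, untrained control group
    p_pool = (fail_a + fail_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z

# Hypothetical numbers: 500 employees per group; 105 trained employees
# clicked the test phish versus 180 untrained employees.
p_trained, p_control, z = two_proportion_z_test(105, 500, 180, 500)
print(f"trained failure rate:   {p_trained:.1%}")
print(f"untrained failure rate: {p_control:.1%}")
print(f"z statistic: {z:.2f}  (|z| > 1.96 is roughly significant at the 5% level)")
```

The same structure extends to the other questions above: compare one training format, frequency, or employee population against another, rather than trained against untrained.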
I don’t know, because we either don’t do this kind of science, or we don’t talk about it if we are doing it. Instead, we impute benefits from tangentially related reports and surveys interpreted by vendors who are trying to convey the importance of having a training regimen, or by vendors who are trying to convey the importance of a technical security solution.
My own view, by the way, which is fraught with biases but based on experience, is that security awareness training is good for reducing the frequency of, but not eliminating, employee-induced security incidents. Keeping this in mind serves two important purposes:
- We understand that significant risk remains and must be addressed, even with the best security training.
- When an employee is the victim of some attack, we don’t fall into the trap of assuming the training was effective and that the employee simply wasn’t paying attention or chose to disregard the training they received.
We wring our hands about so many aspects of security: How effective is anti-virus, and is it even worth the cost, given its poor track record? Does removing local administrator rights really reduce the number of security incidents? How fast do we need to patch our systems?
These are all answerable questions. And yes, the answers often depend, at least in part, on specific attributes of the environment in which the controls operate. But we have to know to ask the questions.