I recently finished listening to the book titled “Suggestible You”. The book is fascinating overall, but one comment the author made repeatedly is that the human brain is a “prediction machine”. Our brains are hardwired to make constant snap predictions about the future as a means of surviving in the world.
That statement got me thinking about IT security, as most things do. We make predictions based on our understanding of the world… can I cross the street before that car gets here? I need to watch out for that curb. If I walk by the puddle, a car will probably splash me… and so on. IT risk assessments are basically predictions, and we are generally quite confident in our ability to make them. We need to recognize, however, that our ability to predict is limited by our previous experiences and how often those experiences have entered our awareness. I suspect this is closely related to the concept of availability bias in behavioral economics, where we give more weight to things that are easier to bring to mind.
In the context of an IT risk assessment, limited knowledge of different threat scenarios is detrimental to a quality result. Our challenge, then, is that the threat landscape has become incredibly complex, meaning that it's difficult, and possibly just not practical, to know about and consider every threat to a given system. And we are generally not aware of our blind spots: we *think* we have enumerated and properly weighed all of the threats, but we have not.
This thought drives me back to the concept of standard "IT building blocks" that have well-documented best practices, risk enumerations, and interfaces with other blocks. It's a highly amorphous idea right now, but I don't see a better way to manage the complexity we currently face.
Thoughts appreciated. More to come as time permits.