Attackers continue to refine their tools and techniques, and barely a day goes by without news of some significant breach. In my experience handling dozens of incidents and researching many more for my podcast, I’ve noticed a common thread: the organization has a fundamental misunderstanding of the risks associated with the technology it has deployed and, more specifically, with the way in which it deployed that technology.
When I think about this problem, I’m reminded of Gene Kranz’s line from Apollo 13: “I don’t care what anything was designed to do. I care about what it can do.”
My observation is that little thought is given to how things can go wrong with a particular implementation design. Standard controls, such as user ID rights, file permissions, and so on, are trusted to keep things secure. Anti-virus and IPS are layered on as supplementary controls, and systems are segregated onto functional networks with access restrictions, all intended to create defense in depth. Anyone who is moderately bright and familiar with the technology at hand can cobble together what they believe is a robust design. And the myriad security standards will tend to back them up, by checking the boxes:
- Firewalls in place? Check!
- Software up-to-date? Check!
- Anti-virus installed and kept up to date? Check!
- User IDs properly managed? Check!
- Systems segregated onto separate networks as needed? Check!
And so on. Until one fateful day, someone notices, by accident, a Domain Admin account that shouldn’t be there. Or a call comes in from the FBI about “suspicious activity” on the organization’s network. Or the Secret Service calls to say that credit cards the organization processed were found on a carder forum. And it turns out that many of the organization’s servers and applications have been compromised.
In nearly every case, a mix of operational and architectural problems contributed to the breach. The operational issues, however, tend to be transient: maybe it’s poorly written ASP.NET code that allows file uploads, or maybe an administrator used Password1 as her password, and so on. The really serious contributors to the extent of a breach are the architectural problems. These include things like:
- A web server on an Internet DMZ making calls to a database server located on an internal network.
- A domain controller on an Internet DMZ with two-way access to other DCs on other parts of the internal network.
- Having a mixed Internet/internal server DMZ, where firewall rules govern what is accessible from the Internet.
…And so it goes. The number of permutations in which technology can be assembled seems nearly infinite. Without an understanding of how the particular architecture, whether proposed or already in place, can be leveraged by an attacker, organizations are blind to the actual risk they face.
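To make that concrete, here’s a minimal sketch of the kind of analysis I’m describing: model the connections the firewall rules *allow* as a directed graph, then ask what an attacker can reach once a single exposed host falls. The host names and rules below are invented for illustration, not taken from any real incident.

```python
from collections import deque

# Connections the (hypothetical) firewall rules permit; each one looks
# reasonable in isolation and would pass a checkbox review.
allowed = {
    "internet":    ["dmz-web", "dmz-dc"],
    "dmz-web":     ["internal-db"],   # web tier calls an internal database
    "dmz-dc":      ["internal-dc"],   # two-way DC traffic across zones
    "internal-dc": ["dmz-dc", "internal-db", "file-server"],
    "internal-db": [],
    "file-server": [],
}

def blast_radius(foothold):
    """Return every host reachable from an initial compromise via allowed paths."""
    seen, queue = {foothold}, deque([foothold])
    while queue:
        host = queue.popleft()
        for nxt in allowed.get(host, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {foothold}

# If the Internet-facing web server falls, the internal database is in play:
print(blast_radius("dmz-web"))   # {'internal-db'}
# The DC on the DMZ bridges the attacker into most of the internal network:
print(blast_radius("dmz-dc"))    # {'internal-dc', 'internal-db', 'file-server'}
```

Every individual rule here was “designed to do” something legitimate; walking the graph shows what the design *can* do, which is exactly Kranz’s point.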
For this reason, I believe it is important that the traditional IT architects responsible for developing such environments have at least a conceptual understanding of how technology can be abused by attackers. Threat modeling is also a valuable activity for uncovering potential weaknesses, but doing it well still requires people who are knowledgeable about the risks.
I also see some value in establishing common “design patterns”, similar to those used in programming but at a much higher level, involving networked systems and applications, where well-thought-out designs could be a starting point for tweaking, rather than starting from nothing and trying to discover the pitfalls of the new design along the way. I suspect that would be difficult at best, given the extreme variability in business needs, technology choices, and other constraints.
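Still, even a toy version shows the idea. Here’s a hypothetical sketch in which a pattern encodes which tiers may talk to which, and a proposed deployment is checked against it; the pattern, tiers, and host names are all invented for illustration.

```python
# The pattern: a classic three-tier design where each tier talks only one
# layer down. Anything not listed is forbidden, including web -> db.
THREE_TIER = {
    ("web", "app"),
    ("app", "db"),
}

def violations(connections):
    """connections: (src_host, src_tier, dst_host, dst_tier) tuples."""
    return [
        f"{src} ({src_tier}) -> {dst} ({dst_tier}) breaks the pattern"
        for src, src_tier, dst, dst_tier in connections
        if (src_tier, dst_tier) not in THREE_TIER
    ]

# A proposed deployment that quietly wires the DMZ web server to the database:
proposed = [
    ("dmz-web", "web", "app-01", "app"),
    ("dmz-web", "web", "db-01", "db"),   # the anti-pattern from earlier
    ("app-01", "app", "db-01", "db"),
]
for problem in violations(proposed):
    print(problem)
```

The value is that the pattern, rather than each individual reviewer, carries the security reasoning: a team tweaking it starts from a design whose failure modes have already been thought through.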