I have a lot of opportunities to see and think about how IT security disasters play out. I talk a lot about how to help avoid these on the Defensive Security Podcast, and I’ve written a good bit here on infosec.engineering. There are many causes of the weaknesses that lead to intrusions and data breaches, such as underinvestment, bad processes, malicious or careless insiders, and so on. I am growing more convinced, though, that a significant factor, particularly in the extent of intrusions, arises from poor design of IT systems.
IT environments in the average organization are exceedingly complex in terms of the number of entry points and paths an adversary can use for lateral movement (see Thinking Graphically to Protect Systems). There is little formalized guidance on how to design an IT environment, and much of the time there are nearly unlimited ways of connecting and arranging things that “work”, in the sense of meeting the core intent of the system. We are at the mercy of the imagination of the architects who design these systems: they must foresee the potential for abuse and design in concepts like least privilege. Most of the time, the people working in those design roles aren’t familiar with many of the techniques adversaries use.
Much of what we do in IT security is treat the symptoms of the underlying disease, the disease being badly designed IT. We apply increasingly sophisticated antivirus software, next-gen firewalls, and so on to mitigate risks to the environment. To make our environments more resilient, we need to spend some time tackling the disease itself. It’s extremely difficult to make fundamental or large-scale changes to existing IT. At the same time, IT in most organizations is a constantly changing organism, which means there are likely opportunities to inject more robust design patterns incrementally: we are almost always upgrading some component, optimizing or redesigning some aspect of the network, and so on. There are also fortuitous changes happening in the IT landscape, such as the move to cloud, which may present opportunities to make fundamental improvements. Some organizations end up in the unfortunate position of having an opportunity to start over, as was the case with Sony Pictures and many of the victims of the NotPetya worm.
As I previously mentioned, my experience, and that of many colleagues I’ve discussed this with, is that failure modes (of the malicious-intruder sort) are often not considered, or are given only passing thought, for a variety of reasons: schedule and cost pressures, ignorance of threat actor techniques, and the fact that IT is an art form, and sometimes IT people just want to be artists.
It seems to me that the industry would do well to establish modular IT infrastructure design patterns that are very specific in terms of configuration, scalable, and laid out in such a way that various building blocks can be linked together to form the foundational IT infrastructure. There may be building blocks that are effectively “frameworks” (though not in the manner of the NIST Cyber Security Framework) within which oddball or highly specific systems and applications can operate in a standardized way. The result would be a set of design patterns that are cost efficient, resilient, and modeled after best practices, tuned over time as technology changes and deficiencies in the original designs come to light.
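To make the idea a little more concrete, here is a minimal sketch in Python (purely illustrative; the block names, segments, and rule model are my own invention, not a proposed standard) of what a composable building block might look like, with least privilege expressed as a property of the pattern itself rather than something bolted on afterward:

```python
from dataclasses import dataclass, field

# Hypothetical model of a reusable infrastructure "building block":
# each block declares the segments it contains and the only traffic
# flows it permits, so a default-deny posture falls out of the design.

@dataclass(frozen=True)
class Flow:
    src: str   # originating segment
    dst: str   # destination segment
    port: int  # allowed service port

@dataclass
class BuildingBlock:
    name: str
    segments: list
    allowed_flows: list = field(default_factory=list)

# Two example blocks linked together to form a larger environment.
web_tier = BuildingBlock(
    name="web-tier",
    segments=["dmz-web"],
    allowed_flows=[Flow("dmz-web", "app-tier", 8443)],
)
app_tier = BuildingBlock(
    name="app-tier",
    segments=["app-tier"],
    allowed_flows=[Flow("app-tier", "db-tier", 5432)],
)

def default_deny_rules(blocks):
    """Derive an explicit allow-list; everything else is denied."""
    for block in blocks:
        for f in block.allowed_flows:
            yield f"allow {f.src} -> {f.dst} tcp/{f.port}"
    yield "deny any -> any"

for rule in default_deny_rules([web_tier, app_tier]):
    print(rule)
```

The point of the sketch is that the allow-list and the final deny rule are derived from the pattern automatically; the person assembling the blocks doesn’t have to remember to add them.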
The idea here is to develop an approach that removes much of the design weakness in organizational IT environments by providing an objective set of “tried and true” design patterns that IT people can use, rather than designing a half-assed, difficult-to-secure environment because those IT people are ignorant of how their custom designs can be exploited. I see a lot of parallels here to encryption (though I admit it is a tenuous comparison): it’s mostly accepted in the IT world that designing your own custom encryption scheme is a bad idea, and that the most effective approach is to use accepted standards, like AES, that people a lot smarter than the average IT person have designed and demonstrated to be robust. Also like encryption algorithms, IT environments tend to be vastly complex, and their weaknesses are difficult for a layperson to spot. We will get the occasional DES and Dual EC DRBG, but that risk seems far preferable to creating something custom that is easy to break.
The move to cloud, virtualization, and infrastructure as code provides an excellent opportunity for such a concept to be adopted by IT teams with minimal effort, if these design patterns exist as Vagrant/Ansible- and SDN-style configuration files that can be tailored to meet specific needs, particularly in the areas of scale, dispersion across locations, and so on.
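As a rough sketch of what that tailoring might look like (the site names, host counts, and inventory layout below are invented for illustration), a vetted pattern could take an organization’s scale and locations as parameters and emit an Ansible-style YAML inventory:

```python
# Sketch: a design pattern tailored by parameters rather than by hand.
# The operator supplies only scale and locations; the structure of the
# output comes from the pattern. Group and host naming is hypothetical.

SITES = {"us-east": 4, "eu-west": 2}  # location -> web server count

def render_inventory(sites):
    """Render a simple Ansible-style YAML inventory, one web group per site."""
    lines = ["all:", "  children:"]
    for site, count in sites.items():
        lines.append(f"    {site}_web:")
        lines.append("      hosts:")
        for i in range(1, count + 1):
            lines.append(f"        web-{site}-{i:02d}:")
    return "\n".join(lines)

print(render_inventory(SITES))
```

The appeal of this arrangement is that the security-relevant decisions live in the pattern, where they can be reviewed and improved centrally, while each organization only supplies the parameters that genuinely differ.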
Is anyone working on such a thing? If not, is there any interest in it?