Speed of Patching Versus Breach Likelihood

I am a big fan of the Verizon DBIR.  I was just reading this interview with Mike Denning  from Verizon on Deloitte’s web site about this year’s report.  The whole article is worth reading, but I want to focus on one comment from Mr. Denning:

One of the biggest surprises was the finding that 99.9 percent of the exploited vulnerabilities had occurred more than a year after a patch, which quite possibly would have prevented them, had been published. Organizations are finding it difficult to maintain the latest patch releases. Additionally, the finding speaks to the challenges of endpoint security.

Today, coverage is more important than speed because, through scanning and other methods, attackers are able to find the weakest link in the chain and then quickly move laterally within the organization. …

This comment brought back some thoughts I had when I initially read the 99.9% statistic in the 2015 DBIR.  That number, while a bit surprising, fits the intuition most of us in the field have.  My concern, however, is that this may be interpreted as meaning the following:

“we can exclude ourselves from 99.9% of breaches by just ensuring we keep up with our patching.  After all, we should be able to meet the goal of applying patches no later than, say, 11 months after release.  Or 6  months.”

I see two problems with this thinking:

  1. Few organizations can apply EVERY patch to EVERY system.  Sometimes we consciously “exempt” systems from a patch for various business reasons, sometimes we simply don’t know about the systems or the patches.  If this is the case in your organization, and you get compromised through such a missing patch, you are part of the 99.9%.  You don’t get credit for patching 99.9%.  I wonder how many organizations in the 99.9% statistic thought they were reasonably up-to-date with patches?
  2. Outside of commodity/mass attacks, adversaries are intelligent.  If the adversary wants YOUR data specifically, he won’t slam his hands on the keyboard in exasperation because all of his year-plus-old exploit code doesn’t work and then decide the job at McDonald’s is a better way to make a living.  He’ll probably try newer exploits until he finds one that works.  Or maybe not.

My point is not to diminish the importance of patching – clearly it is very important.  My point is, as with any given control, thinking that it will provide dramatic and sweeping improvements on its own is probably a fallacy.


Wisdom of Crowds and Risk Assessments

If your organization is like most, tough problems are addressed by assembling a group of SMEs into a meeting and hashing out a solution.  Risk assessments are often performed in the same way: bring “experts” into a room, brainstorm on the threats and hash out an agreed-upon set of vulnerability and impact ratings for each.  I will leave the fundamental problems with scoring risks based on vulnerability and impact ratings for another post[1].

“None of us is as smart as all of us” is a common mantra.  Certainly, we should arrive at better conclusions through the collective work of a number of smart people.  We don’t.  Many people have heard the phrase “the wisdom of crowds” and implicitly understood that it reinforces the value of the collaborative effort of SMEs.  It doesn’t.

The “wisdom of crowds” concept describes the phenomenon in which the members of a group are each biased in random directions when estimating some quantity.  When we average out the estimates of the “crowd”, the resulting average is often very close to the actual quantity.  This works when the estimates are given independently of one another.  If the “crowd” collaborates or compares ideas when estimating the quantity, the effect isn’t present.  People are heavily influenced by each other, and the previously present array of biases is tamped down, resulting in estimates that reflect the group consensus rather than the actual quantity being analyzed.

The oft-cited example is the county fair contest where each fair-goer writes down his or her guess for the weight of a cow or giant pumpkin on a piece of paper, drops the paper in a box and hopes to have the closest guess to win the Starbucks gift card.  Some enterprising people have taken the box of guesses and averaged them out, and determined that the average of all guesses is usually very close to the actual weight.  If, instead, the fair-goers were somehow incentivized to work together so that they had only one guess, and the entire crowd won a prize only if that guess were within, say, 2 pounds of the actual weight, it’s nearly a sure thing the crowd would lose every time, absent some form of cheating.
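To make the effect concrete, here is a minimal simulation sketch (my own illustration; the weight, crowd size and error spread are invented numbers): each guess is the true weight plus an independent random bias, and the average of many independent guesses lands far closer to the truth than a typical individual guess.

```python
import random

# Minimal sketch: independent guesses vs. a typical individual guess.
# The true weight and the error spread are made-up numbers for illustration.
random.seed(42)

TRUE_WEIGHT = 1250   # actual weight of the prize pumpkin, in pounds
CROWD_SIZE = 500

# Each person is biased in a random direction around the true weight.
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(CROWD_SIZE)]

crowd_average = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd average guess:      {crowd_average:.1f} lbs "
      f"(off by {abs(crowd_average - TRUE_WEIGHT):.1f})")
print(f"Typical individual error: {typical_individual_error:.1f} lbs")
```

The averaging only helps because the errors are independent; once the group converges on a shared estimate, there is nothing left to cancel out.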

With this in mind, we should consider the wisdom of our risk assessment strategies.

[1] In the meantime, read Douglas Hubbard’s book: “The Failure of Risk Management”.

How Do We Know We’re Doing a Good Job in Information Security?

Nearly every other business process in an organization has to demonstrably contribute to the top or bottom lines.

  • What return did our advertising campaign bring in the form of new sales?
  • How much profit did our new product generate?
  • How much have we saved by moving our environment “to the cloud”?

Information security is getting a lot of mind share lately among executives and boards for good and obvious reasons.  However, how are those boards and executives determining if they have the “right” programs in place?

This reminds me of the TSA paradox…  Have freedom gropes, nudie scanners and keeping our liquids in clear ziplock bags actually kept planes from falling out of the sky?  Or is it just random luck that no determined person or organization has really tried in recent years?

If our organization is breached, or has a less significant security “incident”, it’s clear that there is some room for improvement.  But, do no breaches mean that the organization has the right level of investment, right technologies properly deployed, right amount of staff with appropriate skills and proper processes in place?  Or is it just dumb luck?

Information security is in an even tougher spot than our friends the TSA here.  A plane being hijacked or not is quite deterministic: if it happened, we know about it, or very soon will.  That’s not necessarily the case with information security.   If a board asks “are we secure?”, I might be able to answer “We are managing our risks well, we have our controls aligned with an industry standard, and the blinky boxes are blinking good blinks.”  However, I am blind to the unknown unknowns.  I don’t know that my network has 13 different hacking teams actively siphoning data out of it, some for years.

Back to my question: how do we demonstrate that we are properly managing information security?  This is a question that has weighed on me for some time now.  I expect that this question will grow in importance as IT continues to commoditize and security threats continue to evolve and laws, regulations and fines increase, even if public outrage subsides.  Organizations only have so much money to invest in protection, and those that are able to allocate resources most effectively should be able to minimize costs of both security operations and of business impacts due to breaches.

I recently finished reading “Measuring and Managing Information Risk: A FAIR Approach”, and am currently reading “IT Security Metrics”.  Both are very useful books, and I highly recommend that anyone in IT security management read them.  They are, generally speaking, “frameworks” that help define how, and how not to, assess risks, compare them and so on.  In the context of a medium or large organization, using these tools to answer the question “are we doing the right things?” seems intuitive, yet at the same time so mind-bogglingly complex as to be out of reach.  I can use them to objectively determine whether I am better off investing in more security awareness training or in a two-factor authentication system, but they won’t tell me that I should have actually spent that extra investment on better network segmentation, since that risk wasn’t on the radar until the lack of it contributed to a significant breach.
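For what it’s worth, the kind of comparison I am describing can be sketched in a few lines of Monte Carlo simulation.  This is only an illustration of the general approach, not the method from either book, and every number below (incident frequencies, loss ranges, program costs) is invented:

```python
import random

# Illustrative FAIR-style comparison with invented numbers: estimate expected
# annual loss under a baseline and under two candidate investments.
random.seed(1)
TRIALS = 100_000

def annual_loss(freq_per_year, loss_low, loss_high):
    """One simulated year: a rough event count around the assumed frequency,
    with a uniform loss per event. Distributions are deliberately crude."""
    events = max(0, int(random.gauss(freq_per_year, freq_per_year ** 0.5)))
    return sum(random.uniform(loss_low, loss_high) for _ in range(events))

def expected_annual_loss(freq, low, high):
    return sum(annual_loss(freq, low, high) for _ in range(TRIALS)) / TRIALS

# Baseline: ~4 phishing-driven incidents per year, $50k-$400k each (made up)
baseline = expected_annual_loss(4.0, 50_000, 400_000)
# Option A: awareness training assumed to halve frequency, plus program cost
training = expected_annual_loss(2.0, 50_000, 400_000) + 80_000
# Option B: two-factor auth assumed to cut frequency and magnitude, plus cost
two_factor = expected_annual_loss(2.5, 25_000, 200_000) + 150_000

print(f"Baseline expected annual loss:   ${baseline:,.0f}")
print(f"With awareness training:         ${training:,.0f}")
print(f"With two-factor authentication:  ${two_factor:,.0f}")
```

The value of even a crude model like this is that the comparison becomes explicit and arguable rather than a gut feeling; it still says nothing about the risks that never made it into the model, which is exactly the limitation I am describing.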

Also, there really is no “perfect” security, so we are always living with some amount of risk associated with the investment we make.  Since our organization is only willing or able to invest so much, it explicitly or implicitly accepts some risk.  That risk being realized in the form of a breach does not necessarily mean that our management of information security was improper given the organizational constraints, just as not having a breach doesn’t mean that we ARE properly managing information security.

Without objective metrics that count the number of times we weren’t breached, how does the board know that I am wisely investing money to protect the organization’s data?

Is this a common question?  Are good leaders effectively (and responsibly) able to answer the question now?  If so, how?


Information Security and the Availability Heuristic

Researchers studying human behavior describe a trait, referred to as the availability heuristic, that significantly skews our estimation of the likelihood of certain events based on how easy or hard it is for us to recall an event, rather than how likely the event really is.

It isn’t hard to identify the availability heuristic at work out in the world: shark attacks, terror attacks, plane crashes, kidnappings and mass shootings.  All of them are vivid.  All of them occupy, to a greater or lesser extent, the news media.  The recollection of these events, usually seen through the media, will often cause people to irrationally overestimate certain risks.  For instance, the overwhelming majority of child kidnappings, approximately 88%, are perpetrated by a relative or caregiver.  However, the raw statistics regarding kidnappings, constant Amber alerts and media stories about horrible kidnapping cases are a source of much consternation for parents.  Consternation to the point that police in some jurisdictions are accusing parents who allow kids to play outside unsupervised of child neglect.  The gun debate rages on in the U.S., with mass shooting tragedies leading news reports, even though the number of people who kill themselves with a gun significantly outnumbers those murdered with a gun.

The availability heuristic causes us to worry about shark attacks, plane crashes, stranger kidnappings and mass shootings, while we are far more likely to die in a car crash, or from diabetes, heart disease, cancer or even suicide; yet the risks from those are generally not prominent in our minds when we think about the most important risks we, and our friends and families, face.  Maybe if, at the end of the TV news, the commentators recapped the number of car crash fatalities and heart disease fatalities, we would put better context around these risks, but probably not.  As Stalin said: “a single death is a tragedy; a million deaths is a statistic.”

How does this relate to information security?

Information security programs are, at their core, intended to mitigate risks to an organization’s systems and data.  Most organizations need to be thoughtful in the allocation of their information security budgets and staff: addressing risks in some sort of prioritized order.  What, specifically, is different between the ability to assess the likelihood of information security risks as opposed to the “every day” risks described above?

Increasingly, we are bombarded by news of mega breaches and “highly sophisticated” attacks in the media.  The availability of these attacks in recollection is certainly going up as a result.  However, just like fretting about a shark attack as we cautiously lounge in a beach chair safely away from the water while eating a bag of Doritos, are we focusing on the unlikely Sony-style attack while our data continues to bleed out through lost or stolen unencrypted drives on a daily basis?  In many cases, we do not actually know the specific mechanisms that led to the major breaches.  Regardless, security vendors step in and tailor their “solutions” to help organizations mitigate these attacks.

Given that the use of quantitative risk analysis is still pretty uncommon, the assessment of the likelihood of information security risks is, in most cases, subjective.  Subjective assessments of risk are almost certainly vulnerable to the same kinds of biases described by the availability heuristic.

The availability heuristic works in both directions, too.  Available risks are over-assessed, while other risks that may actually be far more likely, but are not prominently recalled, are never even considered.  Often, the designers of complex IT environments appear to be ignorant of many common attacks and do not account for them in the system design or implementation.  They confidently address the risks, as budget permits, that they can easily recall.

Similarly, larger scale organizational risk assessments that do not enumerate the more likely threats will most certainly lead to suboptimal prioritization of investment.

At this point, the above linkage of the availability heuristic to information security is hypothetical: it hasn’t been demonstrated objectively, though I would argue that we see its impact with each new breach announcement.

I can envision some interesting experiments to test this hypothesis: tracking how well an organization’s risk assessments forecast the actual occurrence of incidents; identifying discrepancies between the likelihood of certain threats relative to the occurrence of those threats out in the world and assessing the sources of the discontinuities; determining if risk assessment outcomes are different if participants are primed with different information regarding threats, or if the framing of assessment questions result in different risk assessment outcomes.
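As a sketch of how the first of those experiments might be scored (my own illustration; the risks, probabilities and outcomes below are invented), the likelihoods assigned in past risk assessments can be compared against what actually happened using a proper scoring rule such as the Brier score, where lower is better:

```python
# Minimal sketch: scoring risk-assessment forecasts against actual outcomes
# using the Brier score (the mean squared error of predicted probabilities).
# The risks, probabilities and outcomes below are invented for illustration.

forecasts = {
    # risk: (assessed probability of occurring this year, did it occur?)
    "phishing-driven compromise": (0.70, True),
    "lost unencrypted laptop":    (0.40, True),
    "SQL injection on web app":   (0.20, False),
    "nation-state intrusion":     (0.30, False),
}

def brier_score(items):
    return sum((p - (1.0 if occurred else 0.0)) ** 2
               for p, occurred in items) / len(items)

score = brier_score(forecasts.values())
print(f"Brier score for this assessment cycle: {score:.3f} "
      "(0 is perfect, 1 is as wrong as possible)")
```

Tracked over several assessment cycles, a score like this would show whether an organization’s subjective likelihood estimates are getting better, worse or going nowhere.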

A possible mitigation against the availability heuristic in risk assessments, if one is really needed, might be to review sources of objective threat information as part of the risk assessment process.  This information may come from threat intelligence feeds, internal incident data and reports such as the Verizon DBIR.  We have to be cognizant, though, that many sources of such data are going to be skewed according to the specific objectives of the organization that produced the information.  Reading an industry report on security breaches written by the producer of identity management applications will very likely skew toward analyzing incidents that resulted from identity management failures, or at least play up the significance of identity management failures in incidents where multiple failures were in play.

Named Vulnerabilities and Dread Risk

In the middle of my 200-mile drive home today, it occurred to me that the reason Heartbleed, Shellshock and Poodle received so much focus and attention, both within the IT community and generally in the media, is the same reason that most people fear flying: something that Gerd Gigerenzer calls “dread risk” in his book “Risk Savvy: How to Make Good Decisions”.  The concept is simple: most of us dread the thought of dying in a spectacular terrorist attack or a plane crash, which are actually HIGHLY unlikely to kill us, while we have implicitly accepted the risks of the far more common yet mundane things that will almost certainly kill us: car crashes, heart disease, diabetes and so on (at least for those of us in the USA).

These named “superbugs” seem to have a similar impact on many of us: they are probably not the thing that will get our network compromised or data stolen, yet we talk and fret endlessly about them, while we implicitly accept the things that almost certainly WILL get us compromised: phishing, poorly designed networks, poorly secured systems and data, drive by downloads, completely off-the-radar and unpatched systems hanging out on our network, and so on.  I know this is a bit of a tortured analogy, but similar to car crashes, heart disease and diabetes, these vulnerabilities are much harder to fix, because addressing them requires far more fundamental changes to our routines and operations.  Changes that are painful and probably expensive.  So we latch on to these rare, high-profile named-and-logo’d vulnerabilities that show up on the 11 PM news and systematically drive them out of our organizations, feeling a sense of accomplishment once that last system is patched.  The systems that we know about, anyhow.

“But Jerry”, you might be thinking, “all that media focus and attention is the reason that everything was patched so fast and no real damage was done!”  There may be some truth to that, but I am skeptical…

Proof of concept code was available for Heartbleed nearly simultaneously with its disclosure.  Twitter was alight with people posting the contents of memory they had captured in the hours and days following.  There was plenty of time for this vulnerability to be weaponized before most vendors even had patches available, let alone before organizations had implemented them.

Similarly, proof of concept code for Shellshock was also available right away.  Shellshock, in my opinion and in the opinion of many others, was FAR more significant than Heartbleed, since it allowed execution of arbitrary commands on the system being attacked, and yet there has been only one reported case of an organization being compromised using Shellshock – BrowserStack.  By the way, that attack happened against an old dev server that still had not been patched quite some time after Shellshock was announced.  We anecdotally know that there are other servers out on the Internet that have been impacted by Shellshock, but as far as anyone can tell, these are nearly exclusively all-but-abandoned web servers.  These servers appear to have been enlisted into botnets for the purposes of DDoS.  Not great, but hardly the end of the world.

And then there’s Poodle.  I don’t even want to talk about Poodle.  If someone has the capability to pull off a Poodle attack, they can certainly achieve whatever end far more easily using more traditional methods, such as pushing client-side malware or phishing pages.

The Road To Breach Hell Is Paved With Accepted Risks

As the story about Sony Pictures Entertainment continues to unfold, and we learn disturbing details, like the now infamous “password” directory, I am reminded of a problem I commonly see: risks being assessed and accepted in isolation, and those accepted risks materially contributing to a breach.

Organizations accept risk every day. It’s a normal part of existing. However, a fundamental requirement of accepting risk is understanding the risk, at least to some level. In many other aspects of business operations, risks are relatively clear cut: we might lose our investment in a new product if it flops, or we may have to lay off newly hired employees if an expected contract falls through. IT risk is a bit more complex, because the thing at risk is not well defined. The apparent downside to a given IT tradeoff might appear low, however in the larger context of other risks and fundamental attributes of the organization’s IT environment, the risk could be much more significant.

Nearly all major man-made disasters are the result of a chain of problems that line up in such a way as to allow or enable the disaster, not the result of a single bad decision or bad stroke of luck.  The most significant breaches I’ve witnessed had a similar set of weaknesses that lined up just so.  Almost every time, at least some of the weaknesses were consciously accepted by management.  However, those managers would almost certainly not have made such tradeoff decisions if they had understood that their decisions could lead to such a costly breach.
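A toy calculation shows why individually “acceptable” tradeoffs can add up to an unacceptable one (the numbers are invented): if a damaging breach requires several independent safeguards to fail, each accepted risk effectively removes one of the remaining barriers.

```python
# Toy model with invented numbers: a damaging breach requires four independent
# safeguards to fail. Accepting a risk is modeled as removing one safeguard.

BASELINE_FAILURE = 0.10   # chance each safeguard fails when it is in place

def breach_probability(accepted_risks, total_safeguards=4):
    p = 1.0
    for i in range(total_safeguards):
        # an accepted risk means that safeguard is simply not there
        p *= 1.0 if i < accepted_risks else BASELINE_FAILURE
    return p

for accepted in range(4):
    print(f"{accepted} accepted risk(s): chance the chain lines up = "
          f"{breach_probability(accepted):.4%}")
```

Each acceptance looks like a small, local decision; together they change the odds by orders of magnitude, which is exactly the “lined up just so” effect described above.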

The problem is compounded when multiple tradeoffs are made that have no apparent relationship with each other, yet are related.

The message here is pretty simple: we need to do a better job of conveying the real risks of a given tradeoff, without overstating them, so that better risk decisions can be made. This is HARD. But it is necessary.

I’m not proposing that organizations stop accepting risk, but rather that they do a better job of understanding what risks they are actually accepting, so management is not left saying: “I would not have made that decision if I knew it would result in this significant of a breach.”

Compliance Isn’t Security, or Why It’s Important To Understand Offensive Techniques

The security vs. compliance debate continues, though I’ve lost track of who is actually still arguing that compliance is security.  Maybe it’s auditors?  Or managers?  This topic comes up a lot for me, both at work and from listeners of my security podcast.

The “compliance” mindset often leads to a belief that if some particular scenario is not prohibited by policy, it must be safe.  After all, if it weren’t safe, it would be explicitly addressed in policy.

I can look around and see compliance thinking behind many breaches: “If it isn’t safe for card readers to pass unencrypted credit card information to a POS terminal, surely PCI DSS would not permit it!”

For many in the information security field, this is a quaint discussion.  In my advanced age, I’m becoming more convinced that “compliance” is not just a fun philosophical debate with auditors, but rather an active contributor to the myriad security issues we wrestle with.

Clearly, the better path to take is one of assessing risks in IT systems and addressing those risks.  However, we must keep in mind that we are often asking for that assessment to be performed by the same people who are apt to accept policy as the paragon of protection.

As we debate the perils of focusing too much on offensive security and not enough on defense, it’s important to point out that, without understanding offensive techniques, our imaginations are hobbled as we evaluate risks to IT systems.

For example, it probably seems reasonable to the average IT person to connect an Internet-facing Windows web server to her organization’s Active Directory domain.  There is no policy or regulation that prohibits such a thing.  In fact, AD provides many great security capabilities, like the ability to centrally provision and remove user IDs.  Without the context of how this situation can be, and indeed often is, leveraged by attackers in significant breaches, the perceived benefits outweigh the risks.

I see value in exposing general IT workers, not just information security personnel, to offensive techniques.  I am not advocating that we teach a Windows administrator how to perform code injection into running processes, but rather that code CAN be injected into running processes.  And so on.

To me, this lack of understanding is a major contributor to the issues underlying the “security vs. compliance” discussion.  Organizations spend quite a lot of effort on security awareness programs for employees, but I see almost no focus on educating IT staff on higher-order security threats.  I don’t expect much to change until we change this state of affairs.

Human Nature and Cyber Security

This has been a particularly active year for large scale, public breaches in the news. Next year’s Data Breach Investigations Report from Verizon should provide some context on whether we are experiencing a “shark attack” phenomenon of continued media coverage of each new breach, or if this is really an exceptional year.

Regardless of whether we are trending above average or not, it’s pretty clear that a lot of companies are experiencing data breaches.

Information security is a series of trade-offs: investment vs. security, ease of use vs. security, operational costs vs. security and so on.  This isn’t a new or revolutionary concept.  Groups like SIRA focus on higher order efforts to quantify information risk to inform security strategy, justify investment in security programs and so on.

At a lower level, making intelligent decisions on the trade-offs involved in IT systems projects requires a well-informed assessment of the risks involved.  However, experiments in cognitive psychology and behavioral economics consistently demonstrate that humans have a raft of cognitive biases which impact decision making.  For instance, we are generally overconfident in our knowledge and abilities, and we tend to think about likelihood in the context of what we have personally experienced.  Uncertainty, inexperience or ignorance about exactly how IT system security can fail may lead to an improper assessment of risk.  If risks are not clearly understood, decisions made using these assessments will not be as accurate as expected.

Douglas Hubbard writes extensively on the topic of “expert calibration” in his book “How To Measure Anything”.  In this book, calibration involves training experts to more clearly understand and articulate their level of uncertainty when making assessments of likelihoods or impacts of events.  While it doesn’t eliminate error from subjective assessments, Mr. Hubbard claims that it demonstrably improves estimates provided by calibrated experts.  This calibration process likely makes these “experts” more aware of their cognitive biases.  Regardless of the exact mechanism, measurably improving estimates used in decision making is a good thing.
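As a rough sketch of what “calibrated” means in practice (my own illustration, not an exercise from the book; the questions, ranges and actual values are invented): if an expert provides 90% confidence intervals for a series of quantities, roughly 90% of those intervals should contain the true values, and checking that is straightforward:

```python
# Minimal sketch: checking how well calibrated an expert's 90% confidence
# intervals are. The questions, estimates and true values are invented.

estimates = [
    # (question, low estimate, high estimate, actual value)
    ("records on the main file share",   10_000, 50_000, 120_000),
    ("Internet-facing servers",               20,     60,      45),
    ("accounts with domain admin rights",      5,     15,      30),
    ("days to patch a critical flaw",          7,     30,      21),
    ("third parties with VPN access",          3,     10,       8),
]

hits = sum(1 for _, low, high, actual in estimates if low <= actual <= high)
print(f"{hits}/{len(estimates)} intervals contained the true value "
      f"({hits / len(estimates):.0%}; a calibrated expert should be near 90%)")
```

Overconfident experts give intervals that are too narrow and hit far less often than 90%; calibration training is largely about widening those intervals until the hit rate matches the stated confidence.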

Information security could benefit from a similar calibration concept.  Understanding the mechanisms through which IT systems can be breached underpins our ability to make reasonable assessments about the risks and likelihood of a breach in a given environment.

To pick on Target for a minute:

Would a clear understanding of the mechanisms by which the external vendor application could be exploited have changed the decision to have the server authenticate against the company’s Active Directory system?  An application to coordinate the activities of the myriad vendors a company the size of Target works with is almost certainly a necessity, but would a better understanding of the ways a vendor management server could be exploited have made the case for isolating the application from the rest of the Target network, with the tradeoff of higher operational costs?  Clearly, that question can only be answered by those present when the decision was made.

Daniel Kahneman, in his book “Thinking, Fast and Slow”, describes a cognitive bias he calls the availability heuristic.  Essentially, this idea posits that people judge concepts and likelihoods based on their ability to recall something from memory; if it can’t be recalled, it is not important.  Similarly, Thomas Schelling, a Nobel Prize-winning economist, wrote:

There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

Nate Silver’s book “The Signal and the Noise” has an excellent chapter on this concept (Chapter 13).

To become calibrated experts who can clearly assess security risks arising from systems, the IT industry would seemingly benefit from a broader understanding of the methods used to penetrate systems and networks.  Certainly this will not “solve” the problem of breaches, but it should help organizations make better-informed decisions regarding IT security tradeoffs.

Nor does this mean that organizations will or should always choose the least risky or most secure path.  Businesses have to deal with risk all the time and often have to accept risk in order to move forward.  The point here is that organizations are often seemingly not fully cognizant of risks they accept when making IT decisions, due to human biases, conflicts and ignorance.

A popular blog post by Wendy Nather recently pushed back on the offensive security effort, pointing out that things will not get better by continuing to point out what is wrong; rather, the way forward is to start fixing things.  My view is that both the offensive and defensive sides are important to the security ecosystem.  Certainly things will NOT get better until we start fixing them.  However, “we” is a limited population.  To tackle the fundamental problems with security, we need to engage the IT industry – not just those people with “security” in their titles.  And we need those who do have “security” in their titles to be more consistently aware of threats.  Focusing solely on defense, as that blog post urges, will yield some short-term improvements in some organizations.  However, building consistent awareness of IT security risks, particularly in those people responsible for assessing such risks, should help all organizations avoid being surprised when Brian Krebs calls them up with unfortunate news.

How Bad Administrator Hygiene Contributes To Breaches

I recently wrote about the problems associated with not understanding common attack techniques when designing an IT environment.  I consistently see another factor in breaches: bad hygiene.  This encompasses things such as:

  • Missing patches
  • Default passwords
  • Weak passwords
  • Unmanaged systems
  • Bad ID management practices

My observation is that, at least in some organizations, many of these items are viewed as “compliance problems”.  Administrators often don’t see the linkage between bad hygiene and security breaches.  For the most part, these hygiene problems will not enable an initial compromise, though they certainly do from time to time.  What I see much more frequently is that some unforeseen mechanism results in an initial intrusion, such as SQL injection, spear phishing or a file upload vulnerability, and the attacker then leverages bad administrator hygiene to move more deeply into the environment.
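As one small example of turning a hygiene item into something routinely checkable (a sketch only; the hostnames are invented, and in practice the two sets would come from a scanner export and the asset inventory), comparing what a network scan actually finds against what the inventory says should exist is a quick way to surface unmanaged systems before an attacker does:

```python
# Minimal sketch: surface unmanaged systems by diffing a network scan result
# against the asset inventory. The hostnames below are invented; in practice
# these sets would be loaded from a scanner export and a CMDB.

inventory = {"web01", "web02", "db01", "dc01", "file01"}
scanned   = {"web01", "web02", "db01", "dc01", "file01", "dev-old-03", "printer-7"}

unmanaged = scanned - inventory   # answering on the network, but nobody owns them
missing   = inventory - scanned   # in the inventory, but not answering

print("Unmanaged (unknown) systems:", sorted(unmanaged))
print("Inventoried but not seen:   ", sorted(missing))
```

The unmanaged systems are exactly the ones that never receive patches or password changes, which is why they show up so often in the lateral movement phase of a breach.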

Most man-made disasters are not the product of a single problem, but rather a chain of failures that line up just right.  In the same way, many breaches are not the result of a single problem, but rather a number of problems that an attacker can uncover and exploit to move throughout an organization’s systems and ultimately accomplish their objective.

It’s important for network, server, application and database administrators to understand the implications of bad hygiene.  Clearly, improving awareness doesn’t guarantee better diligence by those administrators.  However, drawing a clearer linkage between bad hygiene practices and their security consequences, rather than simply raising the ire of some auditors for violating a nebulous policy, should yield some amount of improvement.  That is my intuition, anyhow.

Security awareness is a frequently discussed topic in the information security world.  Such training is almost exclusively thought of in the context of teaching hapless users which email attachments not to open.  Maybe it’s time to start educating the IT department on contemporary security threats and attacker tactics so that they can see the direct connection between their duties and the methods of those attackers.

Threat Modeling, Overconfidence and Ignorance

Attackers continue to refine their tools and techniques, and barely a day goes by without news of some significant breach.  I’ve noticed a common thread through many breaches in my experience of handling dozens of incidents and researching many more for my podcast: the organization has a fundamental misunderstanding of the risks associated with the technology it has deployed and, more specifically, the way in which it deployed that technology.

When I think about this problem, I’m reminded of Gene Kranz’s line from Apollo 13: “I don’t care what anything was designed to do.  I care about what it can do.”

My observation is that there is little thought given to how things can go wrong with a particular implementation design.  Standard controls, such as user ID rights, file permissions and so on, are trusted to keep things secure.  Anti-virus and IPS are layered on as supplementary controls, and systems are segregated onto functional networks with access restrictions, all intended to create defense in depth.  Anyone familiar with the technology at hand who is moderately bright can cobble together what they believe is a robust design.  And the myriad security standards will tend to back them up, by checking the boxes:

  • Firewalls in place? Check!
  • Software up-to-date? Check!
  • Anti-Virus installed and kept up-to-date? Check!
  • User IDs properly managed? Check!
  • Systems segregated onto separate networks as needed? Check!

And so on.  Until one fateful day, someone notices, by accident, a Domain Admin account that shouldn’t be there.  Or a call comes in from the FBI about “suspicious activity” happening on the organization’s network.  Or the Secret Service calls to say that credit cards the organization processed were found on a carder forum.  And it turns out that many of the organization’s servers and applications have been compromised.

In nearly every case, there was a mix of operational and architectural problems that contributed to the breach.  The operational issues, however, tend to be transient: maybe it’s poorly written ASP.net code that allows file uploads, or maybe someone used Password1 as her administrator password, and so on.  But the really serious contributor to the extent of a breach is the architectural problems.  These involve things like:

  • A web server on an Internet DMZ making calls to a database server located on an internal network.
  • A domain controller on an Internet DMZ with two-way access to other DCs on other parts of the internal network.
  • Having a mixed Internet/internal server DMZ, where firewall rules govern what is accessible from the Internet.

…And so it goes.  The number of permutations of how technology can be assembled seems nearly infinite.  Without an understanding of how the particular architecture proposed or in place can be leveraged by an attacker, organizations are ignorant of the actual risk to their organization.

For this reason, I believe it is important that traditional IT architects responsible for developing such environments have at least a conceptual understanding of how technology can be abused by attackers.  Threat modeling is also a valuable activity to uncover potential weaknesses, however doing so still requires people who are knowledgeable about the risks.
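One lightweight way to make that conceptual understanding concrete during design reviews (a sketch, not a full threat modeling methodology; the environment below is hypothetical) is to represent the proposed architecture as a graph of “a compromise of A gives a path to B” relationships and enumerate the paths from an Internet foothold to the assets that matter:

```python
# Minimal sketch: enumerate potential attack paths through a proposed design.
# Edges mean "a compromise of A gives a path to B" (network access, shared
# credentials, trust relationships). The environment below is hypothetical.

reachable = {
    "internet":    ["dmz-web"],
    "dmz-web":     ["internal-db", "dmz-dc"],   # web server is AD-joined
    "dmz-dc":      ["internal-dc"],             # two-way DC trust/replication
    "internal-db": ["file-server"],
    "internal-dc": ["file-server", "internal-db"],
    "file-server": [],
}

def attack_paths(graph, start, target, path=None):
    """Depth-first enumeration of simple paths from start to target."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:                     # avoid revisiting nodes
            paths.extend(attack_paths(graph, nxt, target, path))
    return paths

for p in attack_paths(reachable, "internet", "file-server"):
    print(" -> ".join(p))
```

Even a crude enumeration like this tends to surface the architectural problems listed above, because every DMZ-to-internal edge shows up in one of the paths.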

I also see some value in establishing common “design patterns”, similar to those seen in programming but at a much higher level, involving networked systems and applications, where well-thought-out designs could be a starting point for tweaking, rather than starting from nothing and trying to figure out the pitfalls of the new design along the way.  I suspect that would be difficult at best, given the extreme variability in business needs, technology choices and other constraints.