Cyber Introspection: Look at the Damn Logs

I was talking to my good friend Bob today about whatever came of Dick Cheney’s weather machine when he interrupted with the following question:

“Why, as a community, are we constantly seeking better security technology when we aren’t using what we have?”

Bob conveyed the story of a breach response engagement he worked on for a customer involving a compromised application server.  The application hadn’t been patched in years and had numerous vulnerabilities for anyone with the inclination to exploit them.  And exploited it was.  The server was compromised for months prior to being detected.

The malware dropped on the server for persistence and other activities was indeed sophisticated.  There was no obvious indication that the server had been compromised.  System logs from the time of the breach had been cleared, and subsequent logs contained nothing related to the malicious activity on the system.

A look at the logs from the network IDS sensor monitoring the network connecting the server to the Internet showed almost no alerts originating from that server until the suspected date of the intrusion, as determined by forensic analysis of the server.  On that day, the IDS engine started triggering alert after alert as the server attempted activities such as scanning other systems on the network.

But no one was watching the IDS alerts.
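
To make that concrete, here is a minimal sketch, in Python, of the kind of cheap, dumb watching that would have flagged this server on day one.  It assumes Snort-style fast.log alert lines and invents an internal network range and alert threshold; the details will differ in any real deployment, but the point is that “watching the logs” can start this small.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag internal hosts whose IDS alert volume suddenly spikes.

Assumes Snort-style fast.log lines containing '{PROTO} SRC:PORT -> DST:PORT';
the regex, INTERNAL_NET and THRESHOLD are assumptions to adjust per environment.
"""
import re
import sys
from collections import Counter
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("10.0.0.0/8")   # assumption: the server subnet
THRESHOLD = 50                            # alerts per run before we complain

PAIR_RE = re.compile(r"\{\w+\} (\d+\.\d+\.\d+\.\d+):?\d* -> (\d+\.\d+\.\d+\.\d+)")

def count_alerts(lines):
    """Count alerts originating from internal hosts."""
    counts = Counter()
    for line in lines:
        m = PAIR_RE.search(line)
        if not m:
            continue
        src = ip_address(m.group(1))
        if src in INTERNAL_NET:
            counts[str(src)] += 1
    return counts

if __name__ == "__main__":
    with open(sys.argv[1] if len(sys.argv) > 1 else "fast.log") as f:
        counts = count_alerts(f)
    for host, n in counts.most_common():
        if n >= THRESHOLD:
            print(f"LOOK AT THIS: {host} generated {n} IDS alerts")
```

Anything that pages a human when a quiet server suddenly goes loud would have turned months of dwell time into a same-day investigation.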

The discussion at the client quickly turned to new technologies to stop such attacks in the future and to allow fast reaction if another breach were to happen.

But no one talked about more fully leveraging the components already in place, like the IDS logs.  IDS is an imperfect system that requires care and feeding (people); clearly an inferior option when compared to installing a fancy advanced attack detection appliance.

I previously wrote a similar post a while back regarding AV logs.

Why are we so eager to purchase and deploy yet more security solutions, which are undoubtedly imperfect and also undoubtedly require resources to manage, when we are often unable to get full leverage from the tools we already have running?

Maybe we should start by figuring out how to properly implement and manage our existing IT systems, infrastructure and applications.  And watch the damn logs.

Information Security and the Availability Heuristic

Researchers studying human behavior describe a mental shortcut, referred to as the availability heuristic, that significantly skews our estimate of the likelihood of certain events based on how easily we can recall an instance of them, rather than on how likely the events really are.

It isn’t hard to identify the availability heuristic at work out in the world: shark attacks, terror attacks, plane crashes, kidnappings and mass shootings.  All of them are vivid.  All of them occupy, to a greater or lesser extent, the news media.  The recollection of these events, usually seen through the media, will often cause people to irrationally overestimate certain risks.  For instance, the overwhelming majority of child kidnappings, approximately 88%, are perpetrated by a relative or caregiver.  However, the raw statistics regarding kidnappings, constant Amber alerts and media stories about horrible kidnapping cases are a source of much consternation for parents.  Consternation to the point that police in some jurisdictions are accusing parents who allow kids to play outside unsupervised of child neglect.  The gun debate rages on in the U.S., with mass shooting tragedies leading news reports, even though the number of people who kill themselves with a gun significantly outnumbers those murdered with a gun.

The availability heuristic causes us to worry about shark attacks, plane crashes, stranger kidnappings and mass shootings, while we are far more likely to die in a car crash, or from diabetes, heart disease, cancer or even suicide.  Yet those risks are generally not prominent in our minds when we think about the most important risks we, and our friends and families, face.  Maybe if, at the end of the TV news, the commentators recapped the number of car crash fatalities and heart disease fatalities, we would put better context around these risks, but probably not.  As Stalin said: “a single death is a tragedy; a million deaths is a statistic.”

How does this relate to information security?

Information security programs are, at their core, intended to mitigate risks to an organization’s systems and data.  Most organizations need to be thoughtful in the allocation of their information security budgets and staff, addressing risks in some sort of prioritized order.  What, specifically, makes assessing the likelihood of information security risks any different from assessing the “every day” risks described above?

Increasingly, we are bombarded by news of mega breaches and “highly sophisticated” attacks in the media.  The availability of these attacks in recollection is certainly going up as a result.  However, just like fretting about a shark attack as we cautiously lounge in a beach chair safely away from the water while eating a bag of Doritos, are we focusing on the unlikely Sony-style attack while our data continues to bleed out through lost or stolen unencrypted drives on a daily basis?  In many cases, we do not actually know the specific mechanisms that led to the major breaches.  Regardless, security vendors step in and tailor their “solutions” to help organizations mitigate these attacks.

Given that the use of quantitative risk analysis is still pretty uncommon, the assessment of the likelihood of information risks is, almost by definition, subjective in most cases.  Subjective assessments of risk are almost certainly vulnerable to the same kinds of biases described by the availability heuristic.
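
For readers who haven’t seen one, a quantitative analysis does not have to be elaborate.  The sketch below uses entirely made-up numbers for a hypothetical “lost unencrypted laptop” scenario and simulates annual loss from an estimated event frequency and a rough loss-magnitude distribution; the output is a distribution you can argue about and later check, rather than a gut-feel high/medium/low.

```python
"""Minimal sketch of a quantitative risk estimate.  All figures are
hypothetical; the scenario, frequency and loss parameters are made up."""
import random
import statistics

def simulate_annual_losses(events_per_year, loss_mu, loss_sigma, years=10_000):
    """Approximate a Poisson arrival process with daily Bernoulli trials,
    then draw a lognormal loss amount for each event."""
    results = []
    p_daily = events_per_year / 365
    for _ in range(years):
        events = sum(random.random() < p_daily for _ in range(365))
        results.append(sum(random.lognormvariate(loss_mu, loss_sigma)
                           for _ in range(events)))
    return results

# Hypothetical scenario: ~4 lost unencrypted laptops per year,
# median cost per loss of e^10 (~$22k), with a fat upper tail.
losses = simulate_annual_losses(events_per_year=4, loss_mu=10, loss_sigma=1.0)
losses.sort()
print(f"mean annual loss: ${statistics.mean(losses):,.0f}")
print(f"95th percentile:  ${losses[int(0.95 * len(losses))]:,.0f}")
```

Even a toy model like this forces the assessor to write down estimates that can later be compared with what actually happened.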

The availability heuristic works in both directions, too.  Available risks are over-assessed, while other risks that may actually be far more likely, but are not prominently recalled, are never even considered.  Often, the designers of complex IT environments appear to be ignorant of many common attacks and do not account for them in the system design or implementation.  They confidently address the risks, as budget permits, that they can easily recall.

Similarly, larger scale organizational risk assessments that do not enumerate the more likely threats will most certainly lead to suboptimal prioritization of investment.

At this point, the linkage of the availability heuristic to information security is hypothetical; it hasn’t been demonstrated objectively, though I would argue that we see its impact with each new breach announcement.

I can envision some interesting experiments to test this hypothesis: tracking how well an organization’s risk assessments forecast the actual occurrence of incidents; identifying discrepancies between the likelihood of certain threats relative to the occurrence of those threats out in the world and assessing the sources of the discontinuities; determining if risk assessment outcomes are different if participants are primed with different information regarding threats, or if the framing of assessment questions result in different risk assessment outcomes.
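
The first of those experiments is straightforward to score if assessments are recorded as probabilities and outcomes as simple yes/no observations.  A minimal sketch, using a Brier score and hypothetical data:

```python
"""Minimal sketch: score risk-assessment forecasts against observed incidents.

Assumes each assessment recorded a probability that a given threat would
materialize during the year, and we later recorded whether it did (1) or not (0)."""

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical data: five assessed threats and what actually happened.
forecasts = [0.90, 0.10, 0.70, 0.05, 0.30]   # assessed likelihood of an incident
outcomes  = [0,    1,    1,    0,    1   ]   # did an incident actually occur?

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
# Compare against an uninformed baseline that always forecasts the base rate.
base_rate = sum(outcomes) / len(outcomes)
print(f"Baseline:    {brier_score([base_rate] * len(outcomes), outcomes):.3f}")
```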

A possible mitigation against the availability heuristic in risk assessments, if one is really needed, might be to review sources of objective threat information as part of the risk assessment process.  This information may come from threat intelligence feeds, internal incident data and reports such as the Verizon DBIR.  We have to be cognizant, though, that many sources of such data are skewed according to the specific objectives of the organization that produced the information.  An industry report on security breaches written by a producer of identity management applications will very likely skew toward incidents that resulted from identity management failures, or at least play up the significance of identity management failures in incidents where multiple failures were in play.

Cyber Security Lessons From Behavioral Economics

In this series, I am exploring the intersection of information security and behavioral economics.  As a long-time information security person who recently began studying behavioral economics, I’ve come to realize that much of the traditional information security program is built on standard economic models.

For example, the Simple Model of Rational Crime (SMOC) has implicitly influenced the creation of security policies and conduct guidelines, as well as much of criminal law.  Simply put, SMOC applies the traditional economic view of humans as utility maximizers to the decision to commit a crime: people perform a cost-benefit calculation and decide whether or not the crime is worth it.
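
To make the model concrete, here is the calculation SMOC assumes a would-be rule-breaker performs, sketched with hypothetical numbers; the structure matters more than the values.

```python
"""Minimal sketch of the cost-benefit calculation SMOC assumes people make.
All numbers are hypothetical; the structure is the point, not the values."""

def expected_value_of_misdeed(gain, p_caught, penalty):
    """A purely 'rational' actor commits the misdeed only if this is positive."""
    return gain - p_caught * penalty

# e.g., selling a password for $150 with a 2% perceived chance of being
# caught and a penalty (lost job, lawsuit) valued at $50,000:
ev = expected_value_of_misdeed(gain=150, p_caught=0.02, penalty=50_000)
print(f"expected value: ${ev:,.0f}")  # negative, so the 'rational' actor declines

# The levers listed below amount to tuning these terms: clearer rules and
# harsher punishment raise `penalty`, better detection raises `p_caught`,
# and preventive controls take the decision away entirely.
```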

We have four levers to push and pull on when it comes to managing the employee threat:

  1. The explicitness of the requirements to ensure employees understand their obligations and can’t hide behind ignorance
  2. The severity of punishment, such as getting fired or even sued by the company
  3. Controls that increase the likelihood that misdeeds are detected
  4. Controls that prevent misdeeds from occurring

Many corporate security programs rely heavily on the first three levers, assuming that if people clearly understand what is expected of them, clearly understand the consequences and have some expectation that they’ll be caught, they will make economically rational choices after weighing the cost and benefit of whatever opportunistic misdeed lies in front of them.  It’s hard to consider the possibility that a sane person would choose to risk a well-paying job for a few hundred dollars, or to cut a corner that saves a few minutes.  Anyone who would do such a thing must, by definition, not be of sound mind and therefore isn’t really good for the company anyway.  Right?

But this scenario happens all the time.  Our policies and expectations are built on the assumption that people are indeed rational and make rational cost-benefit assessments before taking an action.  A growing body of research points out that people are influenced by a great many things, from their mood to project deadlines to how tired they are.  We don’t like lever number four, because it’s expensive, inconvenient and we shouldn’t have to do it anyhow given the above conditions.  But we should reconsider.

Dan Ariely’s book “The Honest Truth About Dishonesty” details many experiments that illustrate how the SMOC model doesn’t represent the actual behavior of people and is well worth a read for anyone responsible for designing security programs or security awareness training.

The takeaway from this post is that relying on employees “to do the right thing” as an integral part of a security program doesn’t make sense given what we know about the human mind.  As mentioned in the previous post, reminders about honesty can help in some cases, but not in all.  The integrity of key processes should not rely solely on policy and employment agreements, but rather should be designed to prevent, or at least quickly detect, employee misdeeds.  Such controls clearly won’t work for all organizations or in all circumstances due to cost constraints, politics, technological limitations and so on, but we need to be clear about what to expect when those controls are absent.  Too many organizations are surprised when an employee violates policy, despite the policy being explicit on expectations, explicit on the ramifications of violations, and despite an elaborate security awareness campaign.

 

Cyber Security and Behavioral Science

I recently read a post about improving security awareness using lessons from behavioral science.  The field of behavioral economics and its intersection with information security has been a growing interest of mine, and the post I mentioned inspired me to start a series of posts, starting with this one, on the myriad opportunities there are to leverage the lessons of behavioral economics in improving information security programs.

Behavioral economics describes a set of nuances, biases and irrationalities in the way people, on average, think.  This does not mean that every single person will be influenced by these techniques.  Also, to be clear, these are my hypotheses and I do not mean to represent them as fact.  This is intended to be an exploration of the linkage between behavioral economics and information security, to drive discussion and to refine my thinking on the matter.

Insider Threats – The Ten Commandments

According to Dan Ariely’s research, described in his book “Predictably Irrational”, people who are asked to recite the Ten Commandments prior to performing a task designed to tempt them to cheat don’t cheat, regardless of whether or not they remember all ten.  Likewise, people do not cheat after signing a form in which they promise to abide by an honor code, even when that honor code doesn’t really exist.

Ariely’s research found that people who are not asked to recite the commandments or sign an honor code generally cheat when given the opportunity to do so, though they do not cheat to the full extent they could.  But if people begin thinking about honesty just before the point of temptation, they stop cheating completely.  These effects don’t last long, however, and people must be reminded.

How can we apply this finding to information security?

1. If we put people in a position where cheating or stealing is possible, some number are going to do it.  It’s apparently human nature.  The threat of getting caught and losing one’s livelihood often doesn’t enter into the equation.  Implement controls that affirmatively prevent cheating where possible.

2. Remind people about being honest at points where they have the opportunity to cheat or steal.  A once-a-year conduct reminder isn’t sufficient.  For example, show an on-screen reminder that it’s wrong to be dishonest when completing an expense report form, as sketched below.  Be careful, though: some research points out that people become blind to on-screen warning messages over time.  Something more subtle might work better, such as a background message stating that employees of the company are known for their honesty.
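
As a rough sketch of what a point-of-temptation reminder could look like in an internal tool (the workflow, field names and wording here are all hypothetical):

```python
"""Hypothetical sketch: surface an honesty attestation at the moment an
expense report is submitted, rather than in a once-a-year policy review."""
from datetime import datetime, timezone

ATTESTATION = "I confirm these expenses are accurate and business-related."

def submit_expense_report(report: dict, attested: bool) -> dict:
    """Reject submissions that were not accompanied by a fresh attestation."""
    if not attested:
        raise ValueError(f"Submission requires acknowledging: '{ATTESTATION}'")
    report["attested_at"] = datetime.now(timezone.utc).isoformat()
    return report  # hand off to the normal approval workflow from here

# The form would render ATTESTATION as a checkbox directly above the submit
# button, at the point of temptation, per Ariely's finding.
submit_expense_report({"employee": "jdoe", "total": 412.50}, attested=True)
```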

 

Certainty, Cybersecurity and an Attribution Bonus

In “Thinking Fast And Slow”, Daniel Kahneman describes a spectrum of human irrationalities, most of which appear to have significant implications for the world of information security.  Of particular note, and the focus of this post, is the discussion on uncertainty.

Kahneman describes how people will generally seek out others who claim certainty, even when there is no real basis for expecting someone to be certain.  Take the example of a person who is chronically ill.  A doctor who says she does not know the cause of the ailment will generally be replaced by a doctor who exhibits certainty about the cause.  Other studies have shown that the uncertain doctor is often correct, and the certain doctor is often incorrect, leading to unnecessary treatments, worry, and so on.  Another example Kahneman cites is company CFOs.  CFOs are revered for their financial insight; however, they are, on average, far too certain about things like the near-term performance of the stock market.  Kahneman also points out that, just as with doctors, CFOs are expected to be certain and decisive, and not being certain will likely cause both doctors and CFOs to be replaced.  All the while, the topic each is certain about is really a random process, or a process so complicated, with so many unknown and unseen influencing variables, as to be indistinguishable from randomness.

Most of us would be rightly skeptical of someone who claims to have insight into the winning numbers of an upcoming lottery drawing, and would have little sympathy when that person turns out to be wrong.  Doctors and CFOs, however, have myriad opportunities to highlight important influencing variables that weren’t known when their prediction was made.  These variables are what make the outcome of the process random in the first place.

The same irrational certainty about random processes appears to be at work in information security as well.  Two examples are the CIO who claims that an organization is secure, or at least would be secure if she had an additional $2M to spend, and the forensic company that attributes an attack to a particular actor, often a country.

The CIO, or CISO, is in a particularly tough spot.  Organizations spend a lot of money on security and want to know whether or not the company remains at risk.  A prudent CIO/CISO will, of course, claim that such assurances are hard to give, and yet that is the mission assigned to them by most boards or management teams.  They will eventually be expected to provide that assurance, or else a new CIO/CISO will do it instead.

The topic of attribution, though, seems particularly interesting.  Game theory seems to have a strong influence here.  The management of the breached entity wants to know who is responsible, and indeed the more sophisticated the adversary appears to be, the better the story is.  No hacked company would prefer to report that its systems were compromised by a bored 17-year-old teaching himself to use Metasploit rather than by a sophisticated, state-sponsored hacking team, the likes of which are hard, nigh impossible, for an ordinary company to defend against.

The actors themselves are intelligent adversaries, generally wanting to shroud their activities in some level of uncertainty.  We should expect that an adversary may mimic other adversaries in an attempt to deceive: reusing their code, faking time zones, changing character sets, incorporating their cultural references, and so on.  These kinds of things add only a marginal additional time investment for a competent adversary.  As well, other attributes of an attack, like common IP address ranges or common domain registrars, may be shared between adversaries for reasons other than the same actor being responsible, such as convenience or, again, an intentional attempt to deceive.  Game theory is at play here too.

But, we are certain that the attack was perpetrated by China.  Or Russia. Or Iran. Or North Korea. Or Israel.  We discount the possibility that the adversary intended for the attack to appear as it did.  And we will seek out organizations that can give us that certainty.  A forensic company that claims the indicators found in an attack are untrustworthy and can’t be relied upon for attribution will most likely not have many return customers or referrals.

Many of us in the security industry mock the attribution issue with dice, a magic 8-ball and so on, but the reality is that it’s pervasive for a reason: it’s expected, even if it’s wrong.

 

Applying Science To Cyber Security

How do we know something works?  The debate about security awareness training continues to drag on, with proponents citing remarkable reductions in losses when training is applied and detractors pointing out that training doesn’t actually stop employees from falling victim to common security problems.  Why is it so hard to tell if security awareness training “works”?  Why do we continue to have this discussion?

My view, as I’ve written previously, is that cyber security is an art, not a science.  We collectively “do stuff” because we think it’s the right thing to do.  One night last week, over dinner, I was talking to my friend Bob, who works for a large company.  His employer recently ran a phishing test of all employees after every employee received training on identifying and avoiding phishing emails.  Just over 20% of all employees fell for the test after being trained.  I asked Bob how effective his company found the training to be at reducing the failure rate.  He didn’t know, since no test was performed prior to the training.  That’s a significant lost opportunity to gain insight into the value of the training.

Bob’s company spent a considerable amount of money on the training and the test, but they don’t know if the training made a difference, and if so, by how much.  Would 60% of employees have fallen for the phishes prior to training?  If so, that would likely indicate the training was worthwhile.  Or would only 21% have fallen for it, in which case the money spent on the training would have been much better spent on some other program to address the risks associated with phishing?  Should Bob’s employer run the training again this year?  If they do, at least they will be able to compare the test results to last year’s results and hopefully derive some insight into the effectiveness of the program.

But that is not the end of the story.  We do not have only two options available to us: to train or not to train.  There are many, many variations: the content of the training, the delivery mechanism, the frequency and the duration, to name a few.  Security awareness training seems to be a great candidate for randomized controlled trials.  Do employees who are trained cause fewer security-related problems than those who are not trained?  Are some kinds of training more effective than others?  Do some kinds of employees benefit from training, or from specific types of training, more than other types of employees?  Is the training effective against some kinds of attacks and not others, indicating that the testing approach should be more comprehensive?
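
Answering the first of those questions does not require exotic statistics.  Here is a minimal sketch of evaluating a randomized phishing test with a two-proportion z-test; the group sizes and click counts are hypothetical, with the trained group’s rate echoing the roughly 21% Bob’s company observed.

```python
"""Minimal sketch: did training lower the phish click rate?
Two-proportion z-test with hypothetical counts from a randomized split."""
from math import sqrt, erf

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approx.
    return z, p_value

# Hypothetical randomized test: 500 untrained vs. 500 trained employees.
z, p = two_proportion_z_test(clicks_a=160, n_a=500,   # 32% of untrained clicked
                             clicks_b=105, n_b=500)   # 21% of trained clicked
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the difference is real
```

The same harness extends naturally to the other questions by randomizing on training content, delivery or frequency rather than simply trained versus untrained.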

I don’t know, because we either don’t do this kind of science, or we don’t talk about it if we do.  Instead, we impute benefits from tangentially related reports and surveys interpreted by vendors who are trying to impart the importance of having a training regimen, or by vendors who are trying to impart the importance of a technical security solution.

My own view, by the way, which is fraught with biases but based on experience, is that security awareness training is good for reducing the frequency of, but not eliminating, employee-induced security incidents.  Keeping this in mind serves two important purposes:

  1. We understand that there is significant risk which must be addressed despite even the best security training.
  2. When an employee is the victim of some attack, we don’t fall into the trap of assuming the training was effective and the employee simply wasn’t paying attention or chose to disregard the training delivered.

We wring our hands about so many aspects of security: how effective is anti-virus, and is it even worth the cost, given its poor track record?  Does removing local administrator rights really reduce the number of security incidents?  How fast do we need to patch our systems?

These are all answerable questions.  And yes, the answers often rely at least in part on specific attributes of the environment they operate in.  But we have to know to ask the questions.

What Happens When Most Attackers Operate As An APT?

I’ve been concerned for some time about the rate at which offensive tactics are developing, spurred by the dual incentives of financial gain by criminals and information gathering by government military, intelligence and law enforcement agencies and their contractors.

I find it hard to imagine, in this day of threat intelligence, information sharing, detailed security vendor reports on APT campaigns and other criminal activities, that criminals are not rapidly learning best practices for intrusion and exfiltration.

And indeed Mandiant’s recently released 2015 M-Trends report identifies trend 4 as: “BLURRED LINES—CRIMINAL AND APT ACTORS TAKE A PAGE FROM EACH OTHERS’ PLAYBOOK”, which describes the ways Mandiant observed criminal and governmental attackers leveraging each other’s tools, tactics and procedures (TTPs) in incidents they investigated.

I see this as bad news for the defense.  Adversaries are evolving their TTPs much more rapidly than our defensive capabilities are maturing.

Something has to change.

“Cyber security” is still largely viewed as an add-on to an IT environment: adding in firewalls, anti-virus, intrusion prevention, advanced malware protection, log monitoring, and so on.  All of which have dubious effectiveness, particularly in the face of more sophisticated attacks.  We need a new approach.  An approach that recognizes the limitations of information technology components, and designs IT environments, from the ground up, to be more intrinsically secure and defensible.

A way to get there, I believe, is for IT architects, not just security architects, to maintain awareness of offensive tactics and trends over time.  That way, those architects have a healthy understanding of the limitations of the technology they are deploying, rather than making implicit assumptions about the “robustness” of a piece of technology.

As defenders, we often have our hands full with “commodity” attacks using very basic TTPs.  We need to dramatically improve our game to face what is coming.

 

Ideas For Defending Against Watering Hole Attacks For Home Users

In episode 106, we discussed a report detailing an attack that leveraged the Forbes.com website to direct visitors to an exploit kit and subsequently infect certain designated targets in the defense and financial industries using two zero day vulnerabilities.  A number of people have asked me for ideas on how to defend against this threat from the perspective of a home user, so I thought it best to write a blog post about it.   Just a heads up: this is aimed at Windows users.

One of the go-to mitigations for defending against drive-by browser attacks is an ad blocker, like AdBlock Plus.  In the Forbes instance, it isn’t clear whether an ad blocker would have helped, since the malicious content may not have originated from an ad network, and instead was added through a manipulation of the Forbes site itself to include content from the exploit-hosting site.  Targeted watering hole attacks commonly alter the web site itself.  Regardless, recent reports indicate AdBlock Plus accepts payment from ad networks in return for allowing ads through.  I would not consider ad blocking a reasonable protection in any case.

A much more effective, though more painful, avenue is NoScript.  NoScript is a Firefox plugin, however, and I’ve not found plugins that work as well for Chrome, IE or Opera.  With some fiddling, NoScript can provide a reasonable level of protection from general web content threats while mostly keeping your sanity intact.  Mostly.  You will probably not want to install NoScript on your grandparents’ computer.  NoScript can be a blunt instrument, and a user who is not diligent will likely opt to simply turn it off, at which point we are back where we started.

Running Flash and Java is like playing with matches in a bed of dry hay.  NoScript certainly helps, but it’s not a panacea.  For most people, the Java browser plugin should be disabled.  Don’t worry, you can still play Minecraft without the plugin.  Be aware, though, that every time you update Java, the plugin is re-installed and re-enabled.  As for Flash, use NoScript to limit Flash content to the sites you really need.

Browsing with a Windows account that does not have administrator rights also mitigates a lot of known browser exploits.  To do this, create a wholly separate user account that does not have administrator rights and use that unprivileged account for general use, logging out or using UAC (entering the username and password of the ID that has administrator rights) to perform tasks that require administrator rights.  It’s important that you use a separate account, even though UAC gives the illusion that administrative operations will always prompt for permission to elevate when you are using an account with administrator rights.  UAC was not designed to be a security control point.  This might be a hassle that home users may not find palatable or be disciplined enough to stick with, but it is effective at blocking many common attacks.

Using Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) will block many exploit attempts, and it is definitely worth installing.  The default policy is pretty effective in the latest versions of EMET.  The configuration can be tweaked to protect other applications not in the default policy, but doing so will require some testing, since some of these protections can cause applications to crash if they were not built with those settings in mind.

Finally, a web filter such as Blue Coat K9 can help prevent surreptitious connections to malicious web servers hosting exploit kits, so long as the site is known malicious.

Remarkably, anti-virus didn’t make the list.  Yes, it needs to be installed and kept up to date, but don’t count on it to save you.

One additional thought for those who are really adventurous: install VirtualBox or use HyperV to install Windows or Linux in a virtual machine and use the browser in the virtual machine.  I’ll write a post on the advantages of doing this sometime in the future.

Do you have other recommendations?  Leave a comment!

Human Nature And Selling Passwords

A new report by Sailpoint, indicating that one in seven employees would sell company passwords for $150, has garnered a lot of news coverage in the past few days.  The report also finds that 20% of employees share passwords with coworkers.  The report is based on a survey of 1,000 employees from organizations with over 3,000 employees.  It isn’t clear whether the survey was conducted using statistically valid methods, so we must keep in mind the possibility of significant error when evaluating the results.
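
As a rough illustration of the sampling error alone, here is the margin of error for the “one in seven” figure, assuming, generously, that the 1,000 respondents were a simple random sample:

```python
"""Minimal sketch: sampling margin of error for the 'one in seven' figure,
assuming (generously) a simple random sample of 1,000 respondents."""
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

p, n = 1 / 7, 1000
moe = margin_of_error(p, n)
print(f"{p:.1%} +/- {moe:.1%}")   # roughly 14.3% +/- 2.2%
# Sampling error is the small problem; self-selection and self-reporting
# bias in this kind of survey can be far larger and are not quantified here.
```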

While one in seven seems like an alarming number, what isn’t stated in the report is how many would sell a password for $500 or $1,000.  Not to mention $10,000,000.  The issue here is one of human nature.  Effectively, the report finds that one in seven employees is willing to trade $150 for a spin of a roulette wheel where some spaces result in termination of employment or the end of a career.

Way back in 2004, an unscientific survey found that 70% of those surveyed would trade passwords for a chocolate bar, so this is by no means a new development.

As security practitioners, this is the control environment we work in.  The problem here is not one of improper training, but rather the limitations of human judgement.

Incentives matter greatly.  Unfortunately for us, the potential negative consequences associated with violating security policy, risking company information and even being fired are offset by more immediate gratification: $150, or helping a coworker by sharing a password.  We shouldn’t be surprised by this: humans sacrifice long-term well-being for short-term gain all the time, whether by smoking, drinking, eating poorly, not exercising and so on.  Humans know the long-term consequences of these actions, but generally act against their own long-term best interest for short-term gain.

We, in the information security world, need to be aware of the limitations of human judgement.  Our goal should not be to give employees “enough rope to hang themselves”, but rather to develop control schemes that accommodate the limitations of human judgement.  For this reason, I encourage those in the information security field to become familiar with the emerging studies under the banner of cognitive psychology and behavioral economics.  By better understanding the “irrationalities” in human judgement, we can design better incentive systems and security control schemes.

Cyber Security As A Science

Dan Geer wrote an essay for the National Science Foundation on whether cyber security can be considered a science.  The short version is this: what constitutes a “science” is somewhat loose; however, based on some commonly held dimensions, cyber security is not yet a science, and most likely could be considered a proto-science.  Mr. Geer’s essay is worth reading for yourself, since there is far more nuance than this post will cover.

Similarly, Alex Hutton has also stated in some previous talks that information security is something of a trade craft and not a science.  Information security, cyber security, or whatever moniker we want to assign it, does indeed seem to be more of a trade craft than a science or engineering discipline.

Mr. Geer’s essay points out a few challenges unique to cyber space relative to other scientific disciplines: a major part of the “thing” being modeled is a set of sentient adversaries who can adapt, learn and deceive, and the underlying technology evolves rapidly.

There seem to be other confounding factors as well: the “constituent components” of cyber security are arbitrary and implemented in wildly different fashions by different people and organizations with different levels of skill and incentives, to different specifications, with non-obvious defects, and so on.  Translating just a slice of the challenges in cyber security into civil engineering terms, it would be as though some timbers used in construction looked objectively similar but had hidden flaws that manifest under certain circumstances, placing a structure’s integrity at risk.  The flaws in the timber are not apparent and not easily detectable without incurring extraordinary expense, and even then, not all flaws are likely to be uncovered.

With respect to technology producers, the “building materials” we have to work with in information technology are flawed in many ways, most of which are unseen.  With respect to the implementers of technology, the ways in which systems are architected and implemented are generally arbitrary and utilitarian, and do not, in any appreciable way, reflect the uncertainty inherent in the technology being used.

If timbers were so structurally flawed, civil engineering, building codes, architecture and so on would need to accommodate the uncertainty that comes with building a structure that relies on such timbers.  Information technology deals with this uncertainty very inconsistently.  The constant spate of breaches seems to indicate that the uncertainty is often not properly accounted for.

Information technology, and by extension information security, is currently a craft.  Some are exceptionally good at their craft, and some are quite poor.  The proliferation of information technology into daily life has, in my view, created a somewhat low barrier to entry into this craft.  As a result, we have extremely wide variation in the quality and care with which information technology is implemented.  As with furniture or jewelry created by craftsmen, some of it is exceedingly well designed and built, and some of it is complete crap.

Evolving information security into a science has been a personal interest of mine for some time.  I would propose that a key aspect, though by no means the only aspect, of turning information security into a science is a more objective approach to designing and implementing “systems” that are inherently resilient to failure within certain parameters.  Failure to engineer with a “system level” view of information technology is what I see most often leading to the most complex security issues.  This will very likely mean that some current technical implementations don’t economically fit into a more scientific future state, which will mean that technology producers will need to adapt accordingly to support the market.

A significant part of this will be clearly understanding the limitations of technology components and designing in a safety margin and detective capabilities that indicate failure.

This is a complicated topic.  I certainly do not think I have the answers, but I believe I can see the problem, or at least some manifestations of the problem.  As Mr. Geer points out in his essay, the way forward is through continued research, continued evolution of our understanding, better defining the “puzzles” that need to be solved and searching for a paradigm that addresses those puzzles, as well as ensuring that practitioners have a common level of competence.

The question is how to start taking those steps.

Thanks to my Twitter friend Rob Lewis (@infosec_tourist) for the link to Mr. Geer’s essay and his constant needling of me in this direction.