Get On It

Days like today are a harsh reminder that we have limited time to accomplish what we intend to accomplish in our lives.  We have limited time with our friends and relatives.

Make that time count.  It’s easy to get into the mode of drifting through life, but before we know it, the kids are grown, or our parents are gone, or a friend has passed away, or we just don’t have the energy to write that book.

Get on it.  Go make the world a better place.

Intuition and Experience – from Thinking Fast and Slow

I’ve been reading “Thinking Fast and Slow” for the third time now… Technically, I am listening to the audiobook, and I keep picking up new insights.

The most recent insight is related to intuition.  To net out the topic: intuition, Kahneman believes, is actually only familiarity.  A host of heuristics can influence our perception of how familiar something seems; however, people’s intuitions are often not very good, almost always faring worse than a basic algorithm.  Therefore, we should be wary when we use intuition to guide an important decision.  Having said that, some people do develop intuition for some things, but two criteria must be met:

  1. The subject of the intuition must be “learnable”.  Some things, such as the stock market, politics, or the outcome of a lottery cannot be learned.  We do not get better at picking lottery numbers or deciding which stock to buy, only lucky or unlucky.  Others, such as fighting fires, playing poker or chess, can be learned, at least to some extent.  They contain repeatable, recognizable patterns.
  2. The person exhibiting the intuition needs to have had an opportunity to learn.  The 10,000 hours rule for chess playing is an example.  The other key element is that the person needs feedback related to the decision.  In the context of playing chess or poker or fighting fires, the person receives feedback quickly.  These two factors combine to build familiarity with specific situations, decisions and their outcomes.

Kahneman recommends asking whether an intuitive judgement relates to a learnable process and whether the person exhibiting the judgement has had the requisite experience to have developed the intuition.

This is an interesting thought in the context of information security.

By the way, if you have not yet read “Thinking Fast and Slow”, I highly recommend it.  The audio version is excellent, too, even though it is nearly 20 hours long.

On The Sexiness of Defense

For years now, defenders have been trying to improve the perception of defense relative to offense in the IT security arena.  One only has to look at the schedule of talks at the average security conference to see that offense still draws the crowds.  I’ve discussed the situation with colleagues, who also point out that much of the entrepreneurial development in information security is on the offense/red team side of the fence.

That made me reflect on the many emails I receive from listeners of the security podcast I co-host.  Nearly everyone who has asked for advice was looking to get into the offensive side of security.

I’ve been pondering why that is, and I have a few thoughts:

Offense captures the imagination

Let’s face it, hacking things is pretty cool.  Many people have pointed out that hackers are like modern-day witches, at least as viewed by some of the political establishment.

Offense is about technology.  We LOVE technology.  And we love to hate some of the technology.

Also, offensive activities make for great stories and conference talks, and can often be demonstrated pretty easily in front of an audience.

Offense has a short cycle time

From the perspective of starting a security business, the cycle time for developing an “offering” is far shorter than for a more traditional security product or service.  The service simply relies on the abilities and reputation of the people performing it.  I do not, of course, mean to downplay the significant talent and countless hours of experience such people have; I am pointing out that by the time such a venture is started, these individuals already possess much of the talent, as opposed to needing to go off and develop a new product.

Offense is deterministic (and rewarding)

Penetrating a system is deterministic; we can prove that it happened.  We get a sense of satisfaction.  Getting a shell probably gives us a bit of a dopamine rush (this would be an interesting experiment to perform in an MRI, in case anyone is looking for a research project).

We can talk about our offensive conquests

Those on offense are often able to discuss the details of their successes publicly, as long as certain information is obscured, such as the name of the customer.

If you know how to break it…

You must know how to defend it.  My observation is that many organizations seek out offensive talent to help improve their defense.

…And then there is defense

Defense is more or less the opposite of the above statements.  If we are successful, there’s often nothing to say, at least nothing that would captivate an audience.  If we aren’t successful, we probably don’t want to talk about it publicly.  Unlike many people on the offense side, defenders are generally employees of the organization they defend, so if I get up and talk about my defensive antics, everyone will implicitly know which company the activity happened at, and my employer would not approve of such disclosure.  Defense is complicated and often relies on the consistent functioning of a mountain of boring operational processes, like patch management, password management, change management and so on.

Here’s what I think it would take to make defense sex[y|ier]

What we need, in my view, is to apply the hacker mindset to defensive technology.  For example, a script that monitors suspicious DNS queries and automatically initiates some activities such as capturing the memory of the offending device, moving the device to a separate VLAN, or something similar.  Or a script that detects outbound network traffic from servers and performs some automated triage and/or remedial activity.  And so on.
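
A minimal sketch of the first idea follows.  The log path, log format, detection heuristic and quarantine_device() hook are all hypothetical placeholders; a real version would call whatever your resolver, NAC and endpoint agent actually expose.

    # Sketch: tail a DNS query log for suspicious lookups and trigger containment.
    import re
    import time

    # Crude stand-in for real threat intel: long, random-looking DNS labels.
    SUSPICIOUS = re.compile(r"[a-z0-9]{25,}\.")

    def quarantine_device(ip):
        # Hypothetical hook: move the host to an isolated VLAN via your
        # switch/NAC API and request a memory capture from the endpoint agent.
        print(f"[ACTION] quarantining {ip} and requesting memory capture")

    def watch(logfile="/var/log/dns/queries.log"):   # assumed log location
        with open(logfile) as f:
            f.seek(0, 2)                             # start at end, like tail -f
            while True:
                line = f.readline()
                if not line:
                    time.sleep(1)
                    continue
                parts = line.split()                 # assumed: "<ts> <client_ip> <qname>"
                if len(parts) >= 3 and SUSPICIOUS.search(parts[2]):
                    quarantine_device(parts[1])

    if __name__ == "__main__":
        watch()

The point is less the specific heuristic than the mindset: detection wired directly to an automated response, built with the same scrappy tooling the offense side takes for granted.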

Certainly there are pockets of this happening, but not enough.  It is a bit surprising, too, since I would think that such “defensive hackers” would be highly sought after by organizations looking to make significant enhancements to their security posture.

Having said all of that, I continue to believe that defenders benefit from having some level of understanding of offensive tactics – it is difficult to construct a robust defense if we are ignorant of the TTPs that attackers use.

Risk Assessments and Availability Bias

This post is part of my continuing exploration into the linkages between behavioral economics and information security.   I am writing these to force some refinement in my thinking, and hopefully to solicit some ideas from those much smarter than I…

===

In frequently replicated studies, subjects were asked to estimate the number of homicides in the state of Michigan during the previous year.  The response each subject gave varied primarily based on whether the subject recalled that the city of Detroit is in Michigan.  If Detroit does not come to mind when answering the question, the homicide estimate given is much lower than when Detroit, and its crime problem, comes to mind.  This is just one of the many important insights in Daniel Kahneman’s book “Thinking Fast and Slow”.  Kahneman refers to this as the “availability bias”.  I’ve previously written about the availability bias as it relates to information security, but this post takes a bit different perspective.

The implication of the Michigan homicide question for information security should be clear: our assessments of quantities, such as risks, are strongly influenced by our ability to recall important details that would significantly alter our judgement.  This feels obvious and intuitive when discussed in the abstract; however, we often do not consider that which we are unaware of.  These are Donald Rumsfeld’s “unknown unknowns”.

In the context of information security, we may be deciding whether to accept a risk based on the potential impacts of an issue.  Or, we may be deciding how to allocate our security budget next year.  If we are unaware that we have the digital equivalent of Detroit residing on our network, we will very likely make a sub-optimal decision.

In practice, I see this most commonly cause problems in the context of accepting risks.  Generally, the upside of accepting a risk is prominent in our minds, while the downsides are obscure and abstract; we often simply don’t have the details needed to fully understand the likelihood or the impact of the risk we are accepting.  We don’t know to think about Detroit.

Speed of Patching Versus Breach Likelihood

I am a big fan of the Verizon DBIR.  I was just reading this interview with Mike Denning from Verizon on Deloitte’s web site about this year’s report.  The whole article is worth reading, but I want to focus on one comment from Mr. Denning:

One of the biggest surprises was the finding that 99.9 percent of the exploited vulnerabilities had occurred more than a year after a patch, which quite possibly would have prevented them, had been published. Organizations are finding it difficult to maintain the latest patch releases. Additionally, the finding speaks to the challenges of endpoint security.

Today, coverage is more important than speed because, through scanning and other methods, attackers are able to find the weakest link in the chain and then quickly move laterally within the organization. …

This comment brought back some thoughts I had when I initially read the 99.9% statistic in the 2015 DBIR.  That number, while a bit surprising, fits the intuition most of us in the field have.  My concern, however, is that this may be interpreted as meaning the following:

“we can exclude ourselves from 99.9% of breaches by just ensuring we keep up with our patching.  After all, we should be able to meet the goal of applying patches no later than, say, 11 months after release.  Or 6 months.”

I see two problems with this thinking:

  1. Few organizations can apply EVERY patch to EVERY system.  Sometimes we consciously “exempt” systems from a patch for various business reasons; sometimes we simply don’t know about the systems or the patches.  If this is the case in your organization and you get compromised through such a missing patch, you are part of the 99.9%.  You don’t get credit for patching 99.9% of systems.  I wonder how many organizations in the 99.9% statistic thought they were reasonably up-to-date with patches?  (See the back-of-the-envelope sketch after this list.)
  2. Outside of commodity/mass attacks, adversaries are intelligent.  If the adversary wants YOUR data specifically, he won’t slam his hands on the keyboard in exasperation because all of his year-plus-old exploit code doesn’t work and then decide that the job at McDonald’s is a better way to make a living.  He’ll probably try newer exploits until he finds one that works.  Or maybe not.
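
To put some rough numbers on the first point (purely illustrative arithmetic, not DBIR data): even with 99.9% patch coverage, the chance that at least one system is missing a patch grows quickly with fleet size.

    # Back-of-the-envelope: with patch coverage p across n systems, the chance
    # that at least one system is missing the patch is 1 - p**n.  The numbers
    # below are assumptions for illustration only.
    coverage = 0.999
    for n in (100, 1000, 10000):
        print(f"{n:>6} systems: {1 - coverage ** n:.1%} chance of >= 1 unpatched host")
    #    100 systems:   9.5%
    #   1000 systems:  63.2%
    #  10000 systems: 100.0%

One such host, compromised, puts you in the 99.9% regardless of how diligent the rest of your patching program was.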

My point is not to diminish the importance of patching – clearly it is very important.  My point is, as with any given control, thinking that it will provide dramatic and sweeping improvements on its own is probably a fallacy.

Dealing With The Experience Required Paradox For Those Entering Information Security

I’ve been co-hosting the Defensive Security Podcast for a few years now and receive many emails and tweets asking for advice on getting into the information security field.  I created a dedicated page on the defensive security site with some resources for newcomers to the cybersecurity/information security field.  I asked for advice and received a lot of great feedback, which I incorporated on that page.

I’ve since received feedback that the page is very helpful, however I’m now being asked for advice on addressing a new challenge: how to get a job in information security when all the information security jobs require previous information security experience?

Once again, I turned to my excellent network on Twitter to ask for help in answering that question.  This post is intended to summarize the comments I’ve received.

Networking

Network with people in the community by attending local events, such as BSides conferences, ISSA meetings, OWASP meetings, CitySec meetings and so on.

People who attend such meetings are generally aware of openings in their respective organizations, and having an advocate “on the inside” to get through the hiring process is often very helpful.

I will add that researching a topic and giving a presentation at one of these meetings will help to establish yourself as an authority on the topic.  These organizations are often looking for someone to give a presentation.  The process will force you to thoroughly learn your chosen presentation topic and refine your presentation skills, both of which make you a more valuable employee.

Volunteering

Non-profit and not-for-profit organizations, including churches, often can’t afford to pay for information security staff.  Volunteering at such an organization is a good way to obtain practical experience, and these kinds of volunteer opportunities can lead to paid positions and valuable references.

Contributing to open source projects is another way to not only gain practical experience for your resume, but also build important skills and grow your network of contacts.  There are thousands of information security open source projects around.  Getting to a place where you can contribute to an open source project can be daunting, but the benefits will be worth it.  My best advice is to look for a project that interests you, find the list of open issues and/or pending features, contact the existing developers and ask if they would be willing to entertain your contribution to fix bug X or add feature Y.  Also, don’t be too offended if the developers give your contribution some criticism the first few times around.

Ground Floor

A common question goes something like this: “I want to get into information security, but this position requires years of experience…  How do I get into the field if I have to have experience in the field in order to get into the field?”

Getting into more senior level positions in any field will generally require previous experience in the field.  Generally, these more senior level positions are filled through career progression, not by someone coming in from a different field.  Said another way, you may need to look for a more entry level position that requires less experience, and then build toward your target position.

This can be disheartening for someone who has obtained a more senior level role in another field and is looking to move into information security.  Taking a lower position to get into the security field may require a pay cut.

My recommendation with this strategy is to find an organization that has both entry level positions and the more senior level positions you are interested in, or at least something close to the senior level position.  It’s often faster and easier to get into an organization in a lower level position, take on additional responsibilities and ultimately progress up to the target position, though this strategy often means that your compensation will be less than market rate.  My advice is to get in on the ground floor, work your way up and gather some experience, and then seek the opportunity you are after.  Clearly, this is not a 6 month plan to get to a senior architect role; however, combining it with some other advice in this post may make it happen relatively quickly.

Leverage Existing Experience

You may not have experience in information security, but if you work in IT you have likely had some exposure to security processes.  Maybe it was related to following secure coding practices, or securing servers according to some documentation, or applying patches, or any number of other things.  Spend some time thinking about how these past experiences relate to information security and develop your elevator pitch tying them to the job you want.

Certifications

Certifications are a good way to establish some credibility, particularly with managers and HR departments.  Many security professionals are skeptical about the utility of certifications like CISSP and CEH, however both carry some weight when seeking a security role.  As well, they will help you to learn some of the language and expose you to different aspects of security, which may, in turn, highlight some particular area of interest for you.  Those two, in particular, are within the grasp of most people willing to spend time studying and are not incredibly expensive.

Home Lab

One of the most commonly suggested recommendations is a home lab.  Of course, that can cost some money to set up, but it doesn’t have to cost a lot.  AWS offers a free tier of virtual servers.  Running VMs in VirtualBox on your existing PC works, too.

My recommendation going into the home lab arena is to have an idea of where you want to go.  Malware analysis? Incident response? Security architecture? Penetration testing?

Depending on your area of focus, you will have different needs for a home lab.  A detailed discussion on possible configuration options for home labs for each of those focus areas would fill pages.  If there is interest, I’ll work on that as well.

Blog

Blogging serves four purposes:

  1. it forces you to research a topic and understand it well enough to write something informative
  2. it helps to improve your writing abilities, which is very important
  3. it (hopefully) helps other people
  4. it helps to establish your name in the industry

Branding Yourself

There are a lot of good resources on personal branding, and I am not qualified to really do the topic justice, but I will point out a few aspects I think are key:

  1. Consider how your social media presence would be viewed by prospective employers.  Almost all employers will do at least some minimal amount of research on you.  What will they find?  Will they see rants, complaints about current positions, or socially and politically divisive comments?
  2. Build a social network of people in the industry, particularly those in the specific area you are interested in.  Ask questions and contribute to the discussions.
  3. Make contributions to the industry.  Blog.  Podcast.  Offer to help people.
  4. Clearly identify the position you want, and develop your story about how your experience from work, volunteering, home labs, blogging and so on relates to that position.

Employers don’t want to hire a problem child.   They want to hire a productive person who is well respected.  I would recommend seeking out other resources on personal branding to learn more.

Speaking, Writing and Presenting

This didn’t come up as a recommendation, however I will tell you that finding information security professionals who are able to write and speak clearly can sometimes be a challenge.  Remember: your writing and your speaking are often the only things that people, including prospective employers, know about you, and they will form initial opinions of you very quickly.  Make them count.  Take pride in your writing style.

Freakonomics for Information Security

There are many big questions in IT security.  Big questions that have significant implications.  There isn’t a venue, outside of security conferences and academic papers, for such questions to be asked and answered.  Security vendors often step in and provide answers, restating questions in a way that suits the vendor’s product portfolio.

I’m a fan of Freakonomics.  Some of their work is controversial to be sure, however they attempt to answer questions few people even think to ask, but which often have significant implications for society.

I’ve been thinking: IT security could really benefit from a Freakonomics-like ‘think tank’ and not only try to answer some of the hard questions, but indeed think of the hard questions to ask.  Questions that may be unpopular, particularly with vendors.  Questions like:

  • What is the limit of the effectiveness of security awareness training?
    • What factors influence this limit?
  • Is there a relationship between the level of a person targeted in an organization and the size or cost of a resulting breach?
  • What is the optimal strategy for picking an anti-virus vendor?
    • What would happen if we didn’t use anti-virus?
  • Is there a relationship between the ratio of IT budget to IT security budget and the likelihood of being breached?
  • Are mega-breaches actually rare, despite the headlines?
  • Is there a way to estimate the frequency that organizations are breached, but don’t know it?
  • How often are risk assessments wrong?
  • What is the optimal strategy to prioritize patches?
  • How informative and useful are security vendor research reports, like the DBIR and M-Trends?
  • How quickly do I need to detect an attack happening in order to prevent data loss?
    • What does this say about the level of investment we should give to detection versus protection?
  • What alternatives exist to the current IT security arms race?
  • How much of the responsibility for a breach should the designers of IT systems carry vs. the end user(s) who were involved?
  • How does the life cycle of IT systems impact security/security breaches?
    • For instance, the old, unsecurable OPM application, Windows XP/2003, and the move to the cloud
  • Are some IT development processes more “risky” than others?
  • Is it reasonable to expect a company that is trying to maximize profit to invest what is actually needed to properly secure its systems?
  • Is there a relationship between the background and experience of IT and/or infosec staff and the likelihood of being breached?
  • Are targeted attacks actually targeted?  Or do they just seem that way after the fact?
  • How quickly is the sophistication of attackers advancing?
  • …and many, many more.

Are these questions already being asked and answered?  How much interest is there in such a thing?

Lies, Damn Lies and Statistics

A message came through the Security Metrics mailing list yesterday that got me thinking about our perception of statistics.  The post concerned a paper on the security of an electronic voting system.

I’ll quote the two paragraphs I find most interesting:

To create a completely unhackable system, Smartmatic combined the following ideas: security fragmentation, security layering, encryption, device identity assurance, multi-key combinations and opposing-party auditing. Explaining all of them is beyond the scope of this article.

The important thing is that, when all of these methods are combined, it becomes possible to calculate with mathematical precision the probability of the system being hacked in the available time, because an election usually happens in a few hours or at the most over a few days. (For example, for one of our average customers, the probability was 1×10^-19. That is a point followed by 19 zeros and then 1). The probability is lower than that of a meteor hitting the earth and wiping us all out in the next few years—approximately 1×10^-7 (Chemical Industry Education Centre, Risk-Ed n.d.)—hence it seems reasonable to use the term ‘unhackable’, to the chagrin of the purists and to my pleasure.

The claim here appears to be that the number of robust security controls included in the system, each of which has only a small chance of being bypassed, taken together with the limited time that an election runs, yields a probability of 1×10^-19 of the system being hacked, which is effectively a probability of zero.

A brief bit of statistical theory: the process for calculating the probability of two or more events happening together depends on whether the events are independent of each other.  Take, for example, winning the lottery.  Winning the lottery a second time is in no way related to winning the lottery the first time… You don’t “get better” at winning the lottery.  Winning the lottery is an independent event.  If the odds of winning a particular lottery are one in a million, or 1/1000000, the probability of winning the lottery twice is 1/1000000 x 1/1000000, which is 1/1000000000000 or 1×10^-12.

Many events, however, are not actually independent of each other.  Suppose I manage a server, and the probability of the server being compromised through a weak password is 1/1000000.  Since I am clever, getting shell on my server does not get you access to my data.  To get at my data, you must also compromise the application running on the server through a software vulnerability, and the probability of that might also be 1/1000000.  Does this mean that the probability of someone stealing my data is 1×10^-12?  These events are very likely not independent.  The mechanism of dependence may not be readily apparent to us, so we may be apt to treat them as independent and decide against the cyber insurance policy, given the remarkably low odds.  Upon close inspection, there is a nearly endless list of ways in which the two events (getting a shell, then compromising the application) might not be independent (a toy calculation after the list below makes the arithmetic concrete), such as:

  • Password reuse to enter the system and application
  • Trivial passwords
  • Stealing data out of memory without actually needing to break the application
  • A trivial application bug that renders the probability of compromise closer to 1/10 than 1/1000000
  • An attacker phishing the credentials from the administrator
  • An attacker using a RAT to hijack an existing authenticated connection from a legitimate user
  • and many, many more
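
Here is that toy calculation (the probabilities are invented for illustration): under the independence assumption, the joint probability multiplies down to something astronomically small, but even a modest dependence, such as password reuse, collapses it.

    # Toy illustration of why independence assumptions matter.
    p_shell = 1e-6          # assumed P(attacker gets a shell via weak password)
    p_app   = 1e-6          # assumed P(application compromised via software bug)

    # Naive "independent events" math: just multiply.
    print(f"assuming independence: {p_shell * p_app:.0e}")          # 1e-12

    # Suppose instead that, given a shell, the admin password also works on
    # the application 1% of the time.  P(app | shell) becomes 0.01, not 1e-6.
    print(f"with password reuse:   {p_shell * 0.01:.0e}")           # 1e-08

A dependence that holds even 1% of the time makes the “impossible” outcome ten thousand times more likely than the naive calculation suggests.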

When we see the probability of something happening stated as being exceedingly low as with 1×10^-19, but then see the event actually happen, we are right to question the fundamental assumptions that went into the calculation.

A practical example of this comes from the book “The Black Swan”, in which Taleb points out that the Nobel Prize-winning Modern Portfolio Theory calculated the odds of the 1987 stock market crash to be 5.51×10^-89.

My experience is that these kinds of calculations happen often in security, even if only mentally.  However, we make these calculations without a comprehensive understanding of the relationships between systems, events and risks.

Outside of gambling, be skeptical of such extraordinary statements of low probabilities, particularly for very important decisions.

Wisdom of Crowds and Risk Assessments

If your organization is like most, tough problems are addressed by assembling a group of SMEs into a meeting and hashing out a solution.  Risk assessments are often performed in the same way: bring “experts” into a room, brainstorm on the threats and hash out an agreed-upon set of vulnerability and impact ratings for each.  I will leave the fundamental problems with scoring risks based on vulnerability and impact ratings for another post[1].

“None of us is as smart as all of us” is a common mantra.  Certainly, we should arrive at better conclusions through the collective work of a number of smart people.  We don’t.  Many people have heard the phrase “the wisdom of crowds” and implicitly understood it to reinforce the value of the collaborative effort of SMEs.  It doesn’t.

The “wisdom of crowds” concept describes the phenomenon where each member of a group of people is biased in a random direction when estimating some quantity.  When we average out the estimates of the “crowd”, the resulting average is often very close to the actual quantity.  This works when the estimates are given independently of one another.  If the “crowd” collaborates or compares ideas when estimating the quantity, the effect disappears.  People are heavily influenced by each other, and the previously present array of biases is tamped down, resulting in estimates that reflect the group consensus rather than the actual quantity being analyzed.

The oft-cited example is the county fair contest where each fair-goer writes down his or her guess for the weight of a cow or giant pumpkin on a piece of paper, drops the paper in a box and hopes to have the closest guess to win the Starbucks gift card.  Some enterprising people have taken the box of guesses and averaged them out, finding that the average of all guesses is usually very close to the actual weight.  If, instead, the fair-goers were somehow incentivized to work together so that they had only one guess, with the entire crowd winning a prize if that guess came within, say, 2 pounds of the actual weight, it’s nearly a sure thing the crowd would lose every time, absent some form of cheating.
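
A quick simulation illustrates the difference (all parameters are invented for illustration):

    # Sketch: the wisdom-of-crowds effect requires independent estimates.
    import random

    random.seed(42)
    TRUE_WEIGHT = 950  # actual weight of the pumpkin, in pounds

    # Each fair-goer errs in a random direction, some wildly off.
    guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(500)]
    print(f"average of 500 independent guesses: {sum(guesses) / len(guesses):.0f}")

    # A collaborating crowd converges on one shared estimate; if the loudest
    # voice anchors everyone 10% low, the single consensus guess is just wrong.
    print(f"single consensus guess:              {TRUE_WEIGHT * 0.90:.0f}")

The averaging only cancels error that is allowed to point in different directions; consensus-building removes exactly that property.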

With this in mind, we should consider the wisdom of our risk assessment strategies.

[1] In the meantime, read Douglas Hubbard’s book: “The Failure of Risk Management”.

How Do We Know We’re Doing a Good Job in Information Security?

Nearly every other business process in an organization has to demonstrably contribute to the top or bottom lines.

  • What return did our advertising campaign bring in the form of new sales?
  • How much profit did our new product generate?
  • How much have we saved by moving our environment “to the cloud”?

Information security is getting a lot of mind share lately among executives and boards for good and obvious reasons.  However, how are those boards and executives determining if they have the “right” programs in place?

This reminds me of the TSA paradox…  Have the freedom gropes, the nudie scanners and the clear ziplock bags for our liquids actually kept planes from falling out of the sky?  Or is it just random luck that no determined person or organization has really tried in recent years?

If our organization is breached, or has a less significant security “incident”, it’s clear that there is some room for improvement.  But, do no breaches mean that the organization has the right level of investment, right technologies properly deployed, right amount of staff with appropriate skills and proper processes in place?  Or is it just dumb luck?

Information security is in an even tougher spot than our friends the TSA here.  A plane being hijacked or not is quite deterministic: if it happened, we know about it, or very soon will.  That’s not necessarily the case with information security.   If a board asks “are we secure?”, I might be able to answer “We are managing our risks well, we have our controls aligned with an industry standard, and the blinky boxes are blinking good blinks.”  However, I am blind to the unknown unknowns.  I don’t know that my network has 13 different hacking teams actively siphoning data out of it, some for years.

Back to my question: how do we demonstrate that we are properly managing information security?  This is a question that has weighed on me for some time now.  I expect that this question will grow in importance as IT continues to commoditize, security threats continue to evolve, and laws, regulations and fines increase, even if public outrage subsides.  Organizations only have so much money to invest in protection, and those that are able to allocate resources most effectively should be able to minimize the costs of both security operations and of business impacts due to breaches.

I recently finished reading “Measuring and Managing Information Risk: A FAIR Approach”, and am currently reading “IT Security Metrics”.  Both are very useful books, and I highly recommend that anyone in IT security management read them.  They generally offer “frameworks” that help define how, and how not to, assess risk, compare risks and so on.  In the context of a medium or large organization, using these tools to answer the question “are we doing the right things?” seems intuitive, yet at the same time so mind-bogglingly complex as to be out of reach.  I can use them to objectively determine whether I am better off investing in more security awareness training or in a two-factor authentication system; however, they won’t inform me that I should have actually spent that extra investment on better network segmentation, since that risk wasn’t on the radar until the lack of it contributed to a significant breach.
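
To make that concrete, the kind of comparison these frameworks enable can be sketched as a small Monte Carlo simulation in the spirit of FAIR, where annualized loss exposure is loss event frequency times loss magnitude.  Every parameter below is an invented placeholder, not a calibrated estimate:

    # FAIR-style sketch: compare two control investments by the annualized
    # loss exposure each leaves behind.  All parameters are placeholders.
    import math
    import random

    def poisson(lam):
        # Knuth's algorithm: sample an event count for one simulated year.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                return k
            k += 1

    def annual_loss(freq, loss_low, loss_high, years=100_000):
        # Average yearly loss over many simulated years.
        total = 0.0
        for _ in range(years):
            for _ in range(poisson(freq)):
                total += random.uniform(loss_low, loss_high)
        return total / years

    random.seed(1)
    print(f"more awareness training: ${annual_loss(0.8, 50_000, 500_000):,.0f}/year")
    print(f"two-factor auth:         ${annual_loss(0.3, 50_000, 500_000):,.0f}/year")

The framework tells me which of the two modeled options leaves less residual exposure; what it cannot do is tell me that network segmentation, absent from the model entirely, was the better use of the money.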

Also, there really is no “perfect” security, so we are always living with some amount of risk associated with the investment we make.  Since our organization is only willing or able to invest so much, it explicitly or implicitly accepts some risk.  That risk being realized in the form of a breach does not necessarily mean that our management of information security was improper given the organizational constraints, just as not having a breach doesn’t mean that we ARE properly managing information security.

Without objective metrics that count the number of times we weren’t breached, how does the board know that I am wisely investing money to protect the organization’s data?

Is this a common question?  Are good leaders effectively (and responsibly) able to answer the question now?  If so, how?