Central Banks and Used Switches

Salacious headlines are making the rounds, complete with possibly the worst stock hacker photo ever, indicating that the $81 million theft from the Central Bank of Bangladesh was pretty easy to pull off because the bank used “second hand routers”, and implying that the bank employed no firewall.

The money was stolen when criminals hijacked the SWIFT terminal(s) at the Central Bank of Bangladesh and proceeded to issue transfers totaling $1 billion to foreign bank accounts.  Fortunately, most of the transactions were cancelled after the attackers apparently made a spelling mistake in the name of one of the recipients.

We don’t know all that much about how the crime really happened.  A Reuters story, based on comments from an investigator, gives a little more detail, but not much.

What we know is the following:

  1. The Central Bank of Bangladesh has 4 “servers” that it keeps in an isolated room with no windows on the 8th floor of its office.
  2. Investigators commented that these 4 servers were connected to the bank network using second hand, $10 routers or switches (referred to as both in various sources).
  3. Investigators commented that the crime would have been more difficult if a firewall had been in place.

And so we end up with headlines that read “Bank with No Firewall…” and “Bangladesh Bank exposed to hackers by cheap switches, no firewall: police”.

The implication is that the problem arose from the quality of the switches and the lack of a firewall.  These factors are not the cause of the problem.  This bank could have spent a few thousand dollars on a managed switch, and a few tens of thousands on a fancy next gen firewall from your favorite vendor.  And almost certainly they would have been configured in a manner that still let the hack happen.  If an organization does not have the talent and/or resources to design and operate a secure network, as is apparently the case here, it will end up with the fancy managed switch configured to be a dumb switch and a firewall policy that lets all traffic through in both directions.  We are pointing the finger at the technology used, but the state of the technology is a symptom, not the problem.

We can infer from the story that the four SWIFT servers in the isolated room are attached to a cheap 5 or 10 port switch, plugged into a jack that connects those systems to the broader, probably flat, bank network.  I strongly suspect that the bank does indeed have a firewall at its Internet gateway, but there was very likely nothing sitting between the football watching, horoscope checking, phishing link clicking masses of bank employee workstations and those delicious SWIFT terminals in the locked room*.  Or maybe the only place to browse the Internet in private at the bank is from the SWIFT terminals themselves.  After all, the room is small, locked and has no windows**.

It doesn’t take expensive firewalls or expensive switches to protect four systems in a locked room.  But, we apparently think of next gen firewalls as the APT equivalent of my tiger repellent rock***.

*I have no idea if they really do this, but it happens everywhere else, so I’m going with it.

** I have no idea if they did this, either, but I know people who would have done it, were the opportunity available to them.

***Go ahead and laugh.  I’ve NEVER been attacked by a tiger, though.

Behavioral Economics Sightings in Information Security

Below is a list of resources I am aware of exploring the intersection of behavioral economics and information security.  If you are aware of others, please leave a comment.

Website: Applying Behavioral Economics to Harden Cyberspace

Paper: Information Security: Lessons from Behavioural Economics

Paper: Using Behavioural Insights To Improve the Public’s Use of Cyber Security Best Practices

Links: Psychology and Security Resource Page

Book: The Psychology of Information Security

Conference Talks:


An Inconvenient Problem With Phishing Awareness Training

Snapchat recently disclosed that it was the victim of an increasingly common attack where someone in the HR department is tricked into providing personal details of employees to someone purporting to be the company’s CEO.

In response, the normal calls for “security awareness training!” and “phishing simulations!” are making the rounds.  As I have said, I am in favor of security awareness training and phishing simulation exercises, but I am wary of people or organizations that believe this is a security “control”.

When organizations, information security people and management begin viewing awareness training and phishing simulations as a control, incidents like the one at Snapchat are viewed as a control failure.  Management may ask “did this employee not take the training, or was he just incompetent?”  I understand that your gut reaction may be to think such a reaction would not happen, but let me assure you that it does.  And people get fired for falling for a good phish.  Maybe not everywhere, but it happens.  Investment in training is often viewed the same way as investment in other controls: when the controls fail, management wants to know who is responsible.

If you ask any phishing education company or read any of their reports, you will notice that there are times of day and days of the week where phishing simulations get more clicks than others, with everything else held constant.  The reason is that people are human.  Despite the best training in the world, factors like stress, impending deadlines, lack of sleep, time awake, hunger, impending vacations and many other factors will increase or decrease the likelihood of someone falling for a phishing email.  Phishing awareness training needs to be considered for what it is: a method to reduce the frequency, in aggregate, of employees falling for phishing attacks.
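To put “in aggregate” into numbers, here is a toy calculation.  Every figure below is invented for illustration, but the shape of the result is the point: even a well-trained workforce clicks at some nonzero rate, so training shifts the expected number of successful phishes down without ever getting it to zero.

```python
# Toy model: expected phishing clicks per year, before and after training.
# All numbers are invented for illustration.

employees = 1000
phish_emails_per_employee_per_year = 12

for label, click_rate in [("untrained", 0.25), ("trained", 0.05)]:
    expected_clicks = employees * phish_emails_per_employee_per_year * click_rate
    print(f"{label}: ~{expected_clicks:,.0f} expected clicks per year")

# Even at a 5% click rate, hundreds of clicks get through each year.
# Training reduces the frequency; it does not prevent the attack.
```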

So, I do think that heads of HR departments everywhere should be having a discussion with their employees on this particular attack.  But, when a story like Snapchat makes news, we should be thinking about prevention strategies beyond just awareness training.  And that is hard because it involves some difficult trade offs that many organizations don’t want to think about.  Not thinking about them, however, is keeping our head in the sand.

Probability of Getting Pwnt

I recently finished listening to episode 398 of the Risky Business podcast, where Patrick interviews Professor Lawrence Gordon.  The discussion is great, as all of Patrick’s shows are, but something caught my attention.  Prof. Gordon describes a model he developed many years ago for determining the right level of IT security investment, something that I am acutely interested in.  The professor points out that a key aspect of determining the proper level of investment is the probability of an attack, and that the probability needs to be estimated by the people who know the company in question best: the company’s leadership.
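As an aside, this sounds like the Gordon-Loeb model.  Here is a minimal numerical sketch of the idea, with every parameter value invented for illustration: pick the investment level that maximizes expected net benefit, given an assumed loss, an assumed baseline breach probability, and an assumed function describing how investment reduces that probability.

```python
# A rough sketch in the spirit of the Gordon-Loeb security investment model.
# All parameter values are invented for illustration.

loss = 1_000_000     # expected loss from a breach ($)
vulnerability = 0.5  # assumed breach probability with zero investment

def breach_probability(z, alpha=0.0001, beta=1.0):
    """Breach probability after investing z dollars: more investment
    lowers the probability, with diminishing returns."""
    return vulnerability / (alpha * z + 1) ** beta

def expected_net_benefit(z):
    risk_reduction = (vulnerability - breach_probability(z)) * loss
    return risk_reduction - z

# Brute-force the optimum (a real treatment solves this analytically;
# a grid search is fine for intuition).
best = max(range(0, 500_000, 1_000), key=expected_net_benefit)
print(f"optimal investment: ~${best:,}")
print(f"expected net benefit: ~${expected_net_benefit(best):,.0f}")
```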

That got me thinking: how do company leaders estimate that probability?  I am sure there are as many ways to do it as there are people doing it, but the discussion reminded me of a key topic in Daniel Kahneman’s book “Thinking, Fast and Slow”: base rates.  A base rate is, roughly, the average frequency of an event measured across a population.  For example, the probability of dying in a car crash is about 1 in 470.  That’s the base rate.  If I wanted to estimate my own likelihood of dying in a car crash, I should start with the base rate and make whatever adjustments I believe are necessary given factors unique to me, such as that I don’t drive to work every day, I don’t drink while driving and so on.  So, maybe I end up with my estimate being 1 in 600.

If I didn’t use a base rate, how would I estimate my likelihood of dying in a car crash?  Maybe I would do something like this:

Probability of Jerry dying in a car crash < 1 / (28 years driving × 365 days × 2 trips per day) = 1 / 20,440

This tells me I have driven about 20,000 times without dying. So, I pin my likelihood of dying in a car crash at less than 1 in 20,000. 

But that’s not how it works.  The previous 20,000 times I drove don’t have a lot to do with the likelihood of me dying in a car crash tomorrow, except that I have experience that makes it somewhat less likely I’ll die.  This is why considering base rates is key.  If something hasn’t happened to me, or happens really rarely, I’ll assign it a low likelihood.  But, if you ask me how likely it is for my house to get robbed right after it got robbed, I am going to overstate the likelihood.

This tells me that things like the Verizon DBIR or the VERIS database are very valuable in helping us define our IT security risk by providing a base rate we can tweak. 
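Here is a sketch of what “tweaking a base rate” might look like.  The base rate and every adjustment factor below are made up; in practice you would pull the base rate from something like the DBIR for your industry and size, and keep the adjustments modest, since the whole point of the base rate is to anchor you.

```python
# Hypothetical base-rate adjustment for annual breach likelihood.
# The base rate and all adjustment factors are invented for illustration.

base_rate = 0.10  # assumed: 10% of similar firms suffer a breach per year

adjustments = {
    "we hold payment card data (more attractive target)": 1.5,
    "mature patching program": 0.8,
    "flat network, no internal segmentation": 1.3,
}

estimate = base_rate
for reason, factor in adjustments.items():
    estimate *= factor
    print(f"x{factor:<4} {reason}")

print(f"adjusted annual breach likelihood: ~{estimate:.0%}")
```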

I would love to know if anyone is doing this. I have to believe this is already a common practice. 

The Value of Saving Data (from theft)

I am currently reading Richard Thaler’s new book “Misbehaving: The Making of Behavioral Economics”.  I trust I don’t need to explain what the book is about.  Early in the book, Thaler describes the work leading up to his thesis, “The Value of Saving a Life”, and points out something most of us can relate to: we value a specific person more than we value the nebulous thought of many unnamed people.  Let me give an example: a girl is very sick and needs an expensive treatment that costs $5 million, which her family cannot afford and which is not covered by insurance.  We have seen similar cases, where the family receives a flood of donations to pay for the treatment.  Now consider a different situation: the hospital in the same city as the girl needs $5 million to make improvements that will save an average of two lives per year by reducing the risk of certain infections that are common in hospitals.  There is no outpouring of support to provide $5 million to the hospital.  The person in the first case is specific – an identified life – while we have no idea who the two people per year saved in the second case would be – statistical lives.  Identified lives vs. statistical lives.  If we were “rational” in the economic sense of the word, we would be far more willing to contribute money to the hospital’s improvement program, since it will save many more people than the lone sick girl.  But we are not rational.

There seems to be a powerful implication for information security in this thought: we have trouble valuing things that are abstract, like the theft of some unknown amount of data belonging to people who may not even be our customers yet.  After a breach, we care very deeply about the data and the victims, and not just because we are in the news and may face lawsuits and other penalties, but because the victims are now “real”.  We only move from “statistical” data subjects to “identified” data subjects after a breach.  Post breach, we generally care more about, and invest more in, security to avoid a repeat because the impacts are much more real to us.

One of the fundamental tenets of behavioral economics is that we humans often do not act in an economically rational way – this gave rise to calling the species of people who act according to standard economic theory “econs”.  It occurs to me that, in the realm of IT security, we would do well to try to behave more like econs.  Of course, it helps to understand the ways in which econs and humans think differently.

Thinking Graphically To Protect Systems

I recently read this post on TechNet regarding the difference in approaches between attackers and defenders.  Specifically that defenders tend to think of their environment in terms of lists:

  • Lists of data
  • Lists of important systems
  • Lists of accounts
  • etc

But attackers “think” in graphs, meaning they think of the environment in terms of the interconnections between systems.

I’ve been pondering this thought since I read the TechNet post.  The concept seems to partly explain what I’ve written about in the past regarding bad risk decisions.

My one critique of the TechNet post is that it didn’t (at least in my view) clearly articulate a really important attribute of thinking about your network as a graph: considering the inter-connectivity between endpoints from the perspective of each endpoint.

In our list-based thinking mode, we have, for instance, a list of important systems to protect and a list of systems that are authorized to access each protected system.  What is often lost in this thinking is the interconnectivity between endpoints downstream.  As the TechNet article describes it:

“For the High Value Asset to be protected, all the dependent elements must be protected as thoroughly as the HVA—forming an equivalence class.”

The pragmatic problem I’ve seen is that the farther we get away on the graph from the important asset to be protected, the more willing we are to make security trade offs.  However, because of the nature of the technology we are using and the techniques being successfully employed by attackers, it’s almost MORE important to ensure the integrity of downstream nodes on the graph to protect our key assets and data.
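To make the equivalence class concrete, here is a toy sketch (hypothetical system names, standard library only) that computes every node with a path to the high value asset.  Everything it returns needs HVA-grade protection, no matter how many hops away it sits.

```python
from collections import deque

# Hypothetical access graph: an edge A -> B means "A can reach/log in to B".
access = {
    "workstation-42": ["jump-host"],
    "build-server": ["admin-console"],
    "jump-host": ["admin-console"],
    "admin-console": ["hva-database"],
    "printer": [],
}

def upstream_of(graph, target):
    """Return every node with some path to `target` (reverse reachability)."""
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    seen, queue = set(), deque([target])
    while queue:
        for node in reverse.get(queue.popleft(), []):
            if node not in seen:
                seen.add(node)
                queue.append(node)
    return seen

# Every system in this set must be protected as thoroughly as the HVA itself.
print(upstream_of(access, "hva-database"))
```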

This creates a tough problem for large networks, and I found the comments on the TechNet post slightly telling: “Can you tell me the name of the tool to generate these graphs?”  The recommendations in the TechNet post are certainly good, but often too vague…  “Rethink forest trust relationships” sounds like sage advice, but what does it mean?  The problem is that there doesn’t appear to be a simple or clean answer.  To me, it seems we need some type of methodology to help perform those re-evaluations.  Or, as I’ve talked about a lot on my podcast, we need a set of “design patterns” for infrastructure that embody sound security relationships between infrastructure components.

Another thought I had regarding graphs: graphs exist at multiple layers:

  • Network layer
  • Application layer
  • User ID/permission layer (Active Directory’s pwn once, pwn everywhere risk)
  • Intra-system (relationship between process/applications on a device)

Final Thoughts (for now)

The complexity of thinking about our environments in graphs shouldn’t dissuade us from using it (potentially) as a tool to model our environment.  Rather, that complexity, to me, indicates that we should likely be thinking about building more trusted and reliable domains (the abstract definition of domain) that relate to each other based on the needs of protecting “the environment”, and less about trying to find some new piece of security technology to protect against the latest threats.

Want To Get Ahead? Create Something!

I get a lot of requests for career advice lately.  I’m not sure why, as I don’t feel like I’m a paragon of success or wisdom, but I have tried to help with things like a guide for getting into information security.

Yesterday, while spending quality time as my dog’s favorite chew toy, I was reflecting on this more and something really obvious hit me.  Painfully obvious.  The most direct way to establish a place in the industry is by creating and contributing.

Think about it: who are the people you respect in this industry?  Even outside this industry?  Why do you respect them?  How do you even know about them?

I will bet it’s because they are creators.

So, my advice for those wanting to get into, or advance in information security is to create something:

  • Conference presentations
  • Open source software
  • Informative blog posts
  • Podcasts

The point is to participate.  It will focus you on getting better and learning.  It will help you meet people.  It will establish you as someone who can add value.

Business Economics Of Data Protection

I recently started listening to “The Portable MBA”, which got me reflecting on the business implications of information security.  None of what I write below is new or enlightening, but I thought it might spark some interesting discussions and also serve to sharpen my own thoughts.

Business managers need to take risks.  Indeed, the fundamental tenets of being in business require risk taking.  Generally, these are financial risks that impact investors and directly related parties.  For instance, hiring or not hiring another worker is a risk, as is buying a new piece of equipment.

Think for a moment about a piece of manufacturing equipment: once purchased and installed, a business manager generally needs to pay to maintain the equipment to keep it functioning.

The manager can, however, cut back on the time and money spent maintaining the equipment.  For a while, this decision will improve profits.  Eventually, however, the equipment will stop operating as it should, causing reduced production.  The attempt to save money through inadequate maintenance financially hurts both the firm and the manager through lower sales, and possibly through repair costs that exceed the original maintenance savings.

This sort of tradeoff is very common in business, and managers are constantly seeking the optimal level of operational overhead: too much wastes money that could be used for more profitable purposes, and too little creates eventual productivity and production problems.

These decisions by the business manager impact those with a stake in the company such as investors, bankers, owners, shareholders and even employees. To some extent, customers are impacted as well, since such decisions may impact prices or availability.

Data security is an odd case given the above background.  As it pertains to securing customer data, the investment decisions made by a business manager directly impact the customers whose data may be stolen, but only indirectly impact the firm itself.  The data may not even belong to the customers of the firm, but rather to people several layers removed.

This seems to present a conflict of interest: what incentive does a manager have to protect customer data?  There appear to be a few likely reasons:

  1. Government regulatory actions
  2. Lawsuits from customers or other impacted parties
  3. Reduced revenues due to customer rejection
  4. Sense of responsibility

One might argue that the free market will reward those firms who act responsibly and punish those that act irresponsibly. On a sufficiently long timeline, that may happen. Recent events appear to indicate that losing customer data does not cause companies to go out of business, and may not even significantly impact customer demand or loyalty.

An interesting attribute of information security incidents is that the firm that loses the data isn’t the bad actor.  The firm is itself a victim.

All of this makes me wonder: is the responsibility for storing sensitive data simply incongruent with the objectives of a profit-driven company?

Is it reasonable to expect such companies to invest in security, including potentially reducing employee productivity, to avoid the possibility of losing sensitive data?  Clearly some companies take the responsibility incredibly seriously, but many others do not, and market forces, to date, don’t seem to be punishing the irresponsible parties (much).

What This CISO Did To Protect His Company’s Data Will SHOCK You!

Good, my click bait title worked and you’re here.  I have my cranky pants on, so let’s go.

On last week’s podcast episode, Andy and I talked about Rob Graham’s recent blog post “Dumb, Dumber and cybersecurity”, where Rob railed on a bizjournals.com post titled “10 Steps to Protect Your Business From Cybersecurity Threats“.

Rob rightly points out that none of the 10 recommended steps really address the top issues that companies are getting breached by:

  • SQLi
  • Phishing
  • Password reuse

Perhaps I have some Baader-Meinhof going on, but I am seeing these damn “Top X lists to thwart the evil advanced cyber APT nation-state hacker armies of 15 year olds” EVERYWHERE.

Like here, here, here, and here.  I’M QUICKLY REALIZING THAT CREATING THESE LISTS IS AN EPIDEMIC THAT HAS INFECTED THE BRAINS OF MARKETING PEOPLE ALL OVER THE WORLD, CAUSING DIARRHEA OF THE FINGERS.

These stupid lists are nothing more than infosec marketing platitudes…

“Keep your AV up to date!”.  Yeah, that’s going to save you.

“Keep your systems patched!”.  Yep.  Show me an organization that is able to do this, and I’ll send you a link to click on.

“Know where your data is!”.  Sure.  It’s every-fucking-where.  OKAY?  Everywhere.

“Abandon the castle wall philosophy and build protection around the data!”.  What?  I guess Google did this, right?

“Restrict employee access to only that which they need!”.   Least privilege and all that, right?

“Restrict network access to only that which is needed!”

and on and on.

These are all, of course, good ideas.  However, they’re not actionable ideas.  And, as Rob pointed out, most don’t even address the ways in which businesses are actually getting compromised.

Let’s pick on one, just as an example of not being actionable: “Restrict employee access to only that which they need!”

Who could argue with that sage advice?  Well, I will.  The issue is that it doesn’t actually solve much in the real world.  Here’s what I mean: if I’m an accountant and need access to the financial database to run queries, restricting access might mean I get a read-only account to run my queries with.  This rarely translates into a consideration of the risks that remain after the access is granted.  Is there a better way?  The table I am querying has credit card numbers in it, but our database doesn’t let us restrict my access down to the field level, so in order to do my job, I am given the least access possible, which is still way too much.

And so I click on funnycats.exe, because damn, who doesn’t like funny cats?  And the following Sunday, Brian Krebs is on the phone with my company’s PR person asking for an interview about our data that is for sale on a forum somewhere.  BUT BUT BUT… least privilege was followed.

And so it goes.  Cybersecurity is hard.  It takes thought, analysis and consideration of risks, not a bunch of dumb platitudes.

#getoffmylawn


Security Chaos Monkey

Netflix implemented a wonderfully draconian tool it calls the chaos monkey, meant to drive systems, software and architectures to be robust and resilient, gracefully handling many different kinds of faults. The systems HAVE to be designed to be fault tolerant, because the chaos monkey cometh and everyone knows it.

I believe there is something here for information security. Such a concept translated to security would change the incentive structure for system architects, engineers and developers. Currently, much design and development is based around “best case” operations, knowingly or unknowingly incorporating sub-optimal security constructs.

For lack of a better name, I will call this thing the Security Chaos Monkey (SCM).  The workings of an SCM are harder to conceive of than Netflix’s version: it’s somewhat straightforward to randomly kill processes, corrupt files, or shut off entire systems or networks, but it’s another thing to automate stealing data, attempting to plant malware, and so on.  In concept, the SCM is similar to a vulnerability scanning system, except that the SCM’s function is exploitation, destruction, exfiltration, infection, drive wiping, credential stealing, defacement and so on.

One of the challenges with the SCM is the extreme diversity in potential avenues for initial exploitation and subsequent post exploitation activities, many of which will be highly specific to a given system.

Here are some possible attributes of a security chaos monkey (a rough code sketch follows the list):

  • Agent attempts to copy random files using random user IDs to some location off the server
  • Agent randomly disables local firewall
  • Agent randomly sets up reverse shells
  • Agent randomly starts listening services for command and control
  • Agent randomly attempts to alter files
  • Agent randomly connects a login session under an administrative ID to a penetration tester
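A harmless skeleton of what the scheduling half of an SCM agent might look like is below.  Everything here is hypothetical, and the “attacks” are just log lines; safely implementing each real action is the hard part.

```python
import logging
import random
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s SCM: %(message)s")

# Each entry stands in for a malicious action a real SCM agent would attempt.
ACTIONS = [
    "copy a random file off the server",
    "disable the local firewall",
    "set up a reverse shell",
    "start a listening service for command and control",
    "alter a random file",
    "hand an administrative login session to a penetration tester",
]

def run_agent(rounds=5, min_sleep=1.0, max_sleep=5.0):
    """Randomly pick and 'perform' actions, the way Chaos Monkey kills
    instances at random. Here we only log what would have happened."""
    for _ in range(rounds):
        time.sleep(random.uniform(min_sleep, max_sleep))
        action = random.choice(ACTIONS)
        logging.info("simulating: %s", action)
        # A real SCM would execute the action here and record whether
        # the system detected, blocked, or survived it.

if __name__ == "__main__":
    run_agent()
```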

An obvious limitation is that the SCM would likely have a limited set of activities relative to the set of all possible malicious activities, and so system designers may simply tailor their security resilience to address the set of activities performed by the SCM.  Even so, this may be a significant improvement over the current state of affairs.

The net idea of the SCM is to impress upon architects, developers and administrators that the systems they build will be actively attacked on a continual basis, stripping away the hope that the system is protected by forces external to it.  The SCM would also force our IT staff to develop a deeper understanding of, and appreciation for, attack techniques and the methods to defend against them.