NCSAM Day 17: Inventory Your Components

An often-overlooked aspect of vulnerability management is the set of software components that exist on a system, such as PHP, Apache Struts, and Ghostscript.  These components are often dependencies of other applications.  If the packages are installed through a normal package manager, like yum or apt, updates should arrive as part of routine patching.  There are three things to be aware of, though:

  1. If a package goes end of life, as is about to happen with PHP 5, updates may simply and silently stop being applied, leaving a potentially vulnerable piece of software running on a system.
  2. If a component is custom compiled, a package manager will not apply updates.  Note: this is an argument in favor of using binaries provided by mainstream repositories.
  3. Vulnerability scans may not be able to detect vulnerabilities in such components, particularly if using unauthenticated scans.

As we move toward infrastructure-as-code, maintaining these inventories should be less taxing, since the configuration definition for systems should explicitly contain the packages installed.  If not, then you’re doing IaC wrong.

Create a list of all these components that exist in your environment, and determine what process is used to identify a vulnerability in them and ensure each is updated when necessary.  Many may be updated in the normal course of running operating system updates, while others may require manual tracking to identify when to download, compile, and install updated source code.
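
As a starting point, here is a minimal sketch of a per-host inventory script.  It assumes Linux hosts with dpkg or rpm available, and the output format is illustrative.  Note that it only sees package-manager-installed software; custom-compiled components, the exact gap described above, must be tracked separately.

```python
#!/usr/bin/env python3
"""Minimal sketch: inventory installed packages on a Linux host.

Assumes dpkg (Debian/Ubuntu) or rpm (RHEL/CentOS) is present; the
tab-separated output format is illustrative, not prescriptive."""
import shutil
import subprocess

def list_packages():
    """Return (name, version) tuples for packages installed via the system package manager."""
    if shutil.which("dpkg-query"):
        cmd = ["dpkg-query", "-W", "-f", r"${Package}\t${Version}\n"]
    elif shutil.which("rpm"):
        cmd = ["rpm", "-qa", "--qf", r"%{NAME}\t%{VERSION}-%{RELEASE}\n"]
    else:
        raise RuntimeError("No supported package manager found")
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return [tuple(line.split("\t", 1)) for line in out.splitlines() if "\t" in line]

if __name__ == "__main__":
    for name, version in sorted(list_packages()):
        print(f"{name}\t{version}")
```

Collecting this output centrally (or, better, generating it from your configuration definitions) gives you the list to compare against vulnerability advisories.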

It’s hard to manage what you don’t know you have.

NCSAM Day 16: Ransomware Happens

Sometimes, despite our best efforts, ransomware will successfully invade our systems.  The need for good backups should be well known by now, but here are a few recommendations:

  • Several organizations impacted by the likes of SAMSAM have opted to pay the ransom to recover their data, despite having good backups. This apparently happens because the time and cost to restore all the impacted systems and data from backup is substantially higher than the cost of the ransom.  I previously wrote about why it’s a bad idea to simply clean an infected or compromised system, and paying the ransom to get back to operations faster is basically doing just that.  I argue that an organization tempted to pay the ransom in such a case probably did not properly assess the RTO and RPO it needed when designing and implementing its recovery program.
  • Be aware that some “backup” schemes use real-time or near-real-time replication between systems in remote sites. Ransomware-encrypted files will be replicated to the backup location nearly instantaneously.  Remember to make point-in-time backups or take snapshots if you’re using such a recovery scheme.
  • This shouldn’t have to be said, but you (probably) don’t have a backup if you haven’t tested it. Test your backups; a rough sketch of a restore test follows this list.
  • Over time, we will likely see attackers shift toward techniques that render backups useless in order to force ransom payment.
  • One of the insidious aspects of some ransomware, like SAMSAM, is that it can effectively take out all systems on a network. Consider your ability to initiate recovery if all of your administrators have locked-up workstations and your (ugh) SharePoint repository of recovery plans is encrypted.
  • I previously wrote a longer post on how to prevent ransomware infections.
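
As promised above, here is a rough sketch of a restore test: pull a backup into a scratch directory and verify file hashes against a known-good manifest.  The restore command, paths, and manifest format are placeholders, not any particular backup product’s interface.

```python
#!/usr/bin/env python3
"""Sketch of a backup restore test: restore into a scratch directory and
compare file hashes against a known-good manifest.  Paths, the manifest
layout, and the restore command are illustrative placeholders."""
import hashlib
import json
import pathlib
import subprocess
import sys

RESTORE_DIR = pathlib.Path("/tmp/restore-test")        # scratch restore target (placeholder)
MANIFEST = pathlib.Path("/etc/backup/manifest.json")   # {"relative/path": "<sha256>", ...} (placeholder)

def sha256(path: pathlib.Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> int:
    # Placeholder restore step: substitute your backup tool's actual restore command.
    subprocess.run(["restore-tool", "--target", str(RESTORE_DIR)], check=True)
    expected = json.loads(MANIFEST.read_text())
    failures = [rel for rel, digest in expected.items()
                if not (RESTORE_DIR / rel).is_file() or sha256(RESTORE_DIR / rel) != digest]
    if failures:
        print(f"FAIL: {len(failures)} file(s) missing or corrupted after restore", file=sys.stderr)
        return 1
    print("OK: restored files match the manifest")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Even a crude check like this, run on a schedule, turns “we think the backups work” into something you can alert on.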


NCSAM Day 15: RDPmageddon

Remote Desktop Protocol (RDP) has become a common entry point for bad actors, implicated in many POS terminal breaches, and is the main delivery method for enterprise-grade ransomware like SAMSAM.  An underground economy has developed around finding and then selling credentials to access various organizations through RDP.

Though it is a legitimate tool for administering Windows systems remotely, RDP should never be exposed directly to the internet.  Firewalls should be configured to disallow RDP access from Internet sources, and systems that run RDP must not permit accounts with default or weak passwords.

Use a service like Shodan to scan your organization’s address ranges to identify RDP services exposed to the Internet.  Workstations should be configured via GPO to disable RDP, enabling it only by exception.
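
If you want to script that check, here is a rough sketch using the Shodan Python library.  It assumes the shodan package is installed, an API key is supplied via the environment, and your plan allows filtered searches; the network range shown is a placeholder for your own address space.

```python
#!/usr/bin/env python3
"""Rough sketch: query Shodan for RDP (3389/tcp) exposed in your own
address space.  Assumes the `shodan` package, an API key in the
environment, and a plan that permits filtered searches; the network
range is a placeholder."""
import os
import shodan

API_KEY = os.environ["SHODAN_API_KEY"]   # assumption: key supplied via environment variable
NETWORKS = ["198.51.100.0/24"]           # placeholder: your organization's ranges

api = shodan.Shodan(API_KEY)
for net in NETWORKS:
    results = api.search(f"port:3389 net:{net}")
    for match in results.get("matches", []):
        # Each hit is a candidate for investigation: why is RDP reachable from the Internet?
        print(f"{match['ip_str']}:{match['port']} exposed RDP")
```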

NCSAM Day 14: Understand the Limitations of Security Awareness Training

We alternately hear “people are the first line of defense” or “people are the last line of defense” in cyber security.  I haven’t figured out which one is true.  Regardless, we need to understand that there are limits to the effectiveness of awareness training and that our first line of defense or our last line of defense (whichever is correct) is quite fallible.

It comes as no surprise to anyone that training humans is not like defining a rule base in a firewall.  We tell a firewall what network traffic to permit and what to block based on attributes of the traffic.  Similarly, we train our employees on how to identify and resist various types of attacks.  A firewall will dutifully and predictably follow the rules it was programmed with.  Humans, however, are a different story.

Let’s imagine for a moment that we have developed a perfect security awareness program.  It clearly communicates dos and don’ts, how to spot attacks, how to report problems, and so on, in a way that is memorable and engaging.  I propose that the outcome will be significantly less than perfect, because of the following factors:

  • People act irrationally under stress from things such as health problems, family problems, medication, and lack of sleep.
  • Any given person will act on the same set of conditions differently based on the time of day, proximity to lunch, day of the week, and many other factors that affect his or her frame of mind at the time.
  • People in a business setting generally have incentives that may, at least some of the time, run contrary to the recommendations of awareness training, such as project deadlines, management expectations, and so on.

This should tell us that awareness training is, at best, a coarse screen that will catch some problems but allow many others to pass unimpeded.  As such, we should focus on the awareness education that delivers the biggest value in terms of outcomes, and then spend our remaining effort enhancing process and technical controls designed to provide more predictable and repeatable security outcomes, similar to the operation of a firewall.

On a related note, I personally think it’s irresponsible to pin the safety of an organization’s systems and data on an employee recognizing a potentially sophisticated attack.  For this reason, I think it is incumbent on us to develop and implement systems that are resilient to such attacks and allow employees to focus on their job duties.

NCSAM Day 13: Track Ownership Of Applications

Web applications are among the most common entry points in data breaches and network intrusions.  In the best of conditions, defending web applications can be challenging, but I’ve observed that they are often orphaned as priorities change, staff turns over, and organizations restructure.

This is not a problem in every organization, particularly smaller firms with a monolithic IT department that manages all technology, though I have seen the problem in small companies, too.

Similar to tracking servers and workstations, organizations should have a system in place to track the ownership of applications, and periodically revalidate ownership and force reassignment when the designated owner leaves or transfers.
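
A simple sketch of that revalidation step might look like the following.  The CSV registry layout and the hard-coded employee set are illustrative stand-ins for whatever application registry and HR or directory feed your organization actually has.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag applications whose registered owner is no longer
an active employee.  The CSV layout and the active-employee set are
illustrative assumptions, not a specific product's format."""
import csv

ACTIVE_EMPLOYEES = {"alice", "bob"}   # assumption: normally pulled from your HR/directory system

def stale_ownership(registry_csv: str):
    """Yield (application, owner) pairs that need reassignment."""
    with open(registry_csv, newline="") as f:
        for row in csv.DictReader(f):        # expects columns: application, owner
            if row["owner"].lower() not in ACTIVE_EMPLOYEES:
                yield row["application"], row["owner"]

if __name__ == "__main__":
    for app, owner in stale_ownership("app_owners.csv"):
        print(f"Reassign: {app} (owner '{owner}' not found in directory)")
```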

The application owner should be responsible for understanding all of the components that compose the application and ensure that each component is properly vulnerability scanned and patched, or shut down and deleted if no longer needed.

Organizations should use vulnerability scans and whatever other tools are available to look for applications that are not being properly maintained.  An unmaintained application should be treated as an incident to investigate.

NCSAM Day 12: Down With The Sickness

While I previously wrote that the cloud is not a magical place, I think it’s important to point out that there is a sickness in the IT world.  It’s insidious and seems to hang around Kanban boards like West Nile-laden mosquitoes hang around a pond.  Of course, I’m talking about exposed S3 buckets and NoSQL/MongoDB databases.

The fundamental issue appears to be that those who configure these environments do not know what they don’t know.  We need to take down this sickness.  Unfortunately, there is no blinky box that can fix this problem*.  Rather, employee awareness and support are needed.  For example, include a segment in your organization’s mandatory security training that tells people to engage the IT or IT security team for guidance on the proper use of such services.  Yes, this may give ideas to some people who would not otherwise have thought to copy the contact database into an S3 bucket, and it may drive up work for the IT team, but it’s better than the alternative.  If you offer help rather than harsh criticism, you may just get people to ask for that help.

I suppose it should go without saying that your organization’s IT and security teams should themselves know how to properly use these services as a start.
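
For AWS specifically, a coarse check for publicly readable buckets can be scripted.  The sketch below assumes boto3 and read-only S3 credentials; it is a first pass for finding obvious exposure, not a substitute for reviewing bucket policies and object ACLs properly.

```python
#!/usr/bin/env python3
"""Rough sketch: flag S3 buckets that may be publicly readable.
Assumes boto3 and credentials allowing ListAllMyBuckets, GetBucketAcl,
and GetPublicAccessBlock; this is a coarse check, not a full audit."""
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError:
        fully_blocked = False   # no public access block configured for this bucket
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [g for g in acl["Grants"] if g["Grantee"].get("URI") in PUBLIC_GRANTEES]
    if public_grants and not fully_blocked:
        print(f"Possible public bucket: {name}")
```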

*depends on your willingness to believe CASB vendor marketing pitches.  YMMV.

NCSAM Day 11: Test Cases for Security Infrastructure

Recently disclosed details about the Equifax data breach indicate that, in addition to the Apache Struts vulnerability that initially led to the breach, some security tools had stopped working for an extended period of time, and only after those tools were brought back online was the breach detected.

There are many potential reasons for security technology to fail, but quite often we don’t recognize that it has failed, because we are only monitoring for alerts, or simply assuming that the security “thing” is quietly doing its job.  When those technologies fail, a key element of our security program stops working, and if we’re not actively monitoring for those failures, we don’t know that we’re blind or unprotected.

For this reason, I recommend developing a set of ongoing test cases that are implemented along with new security technology to help ensure that the technology is operating as expected and raise an alert when it fails in some way.  For example, a SIEM should be configured to trigger an alert if a log source does not provide a log within a certain timeframe, which may indicate that the logging service died on the host, or some network issue is preventing logs from being sent to the SIEM.  Another example might be a periodic injection of a particular type of network “attack” (in a relatively safe manner, of course) designed to trigger an IPS block and alert, in a manner that tests both the blocking (did the “attack” make it to the destination?) and the alerting (did the “attack” result in a generated alert?).
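
The first example can be sketched fairly simply: track the last event time per expected log source and alert when a source has been silent longer than its allowed window.  The sample data below stands in for a real query against your SIEM or log store; source names and thresholds are illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch of a log-source heartbeat check: alert if any expected
source has not produced a log within its allowed window.  The sample
last-seen data stands in for a query against your SIEM or log store."""
from datetime import datetime, timedelta, timezone

# Expected sources and their maximum allowed silence (illustrative values).
EXPECTED_SOURCES = {
    "fw-edge-01": timedelta(minutes=5),
    "dc-auth-01": timedelta(minutes=15),
}

# Sample data standing in for a query against your SIEM (assumption).
SAMPLE_LAST_SEEN = {
    "fw-edge-01": datetime.now(timezone.utc) - timedelta(minutes=2),
    "dc-auth-01": datetime.now(timezone.utc) - timedelta(hours=3),
}

def last_event_time(source: str) -> datetime:
    """Placeholder lookup: replace with a query against your log platform."""
    return SAMPLE_LAST_SEEN[source]

def stale_sources(now=None):
    """Yield sources that have been silent longer than their allowed window."""
    now = now or datetime.now(timezone.utc)
    for source, max_gap in EXPECTED_SOURCES.items():
        if now - last_event_time(source) > max_gap:
            yield source

if __name__ == "__main__":
    for source in stale_sources():
        print(f"ALERT: no logs from {source} within its expected window")
```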

These test cases should be developed to measure the ongoing effectiveness of all the key functionality that the security technology provides.

NCSAM Day 10: Email Security

Here we are, after decades of security enhancements, blinky boxes, and hundreds of hours of security awareness training, and companies still get compromised through email.  My movement to drive everyone back to using pine, mutt, and elm for email has failed miserably, so here are my next-best recommendations:

  • Strongly consider not doing email, or at least email filtering, on your own. I don’t advocate for particular technology vendors, but most of the big names, like Proofpoint and others, have pretty good mail filtering capabilities that you’re just not going to match.  Save your efforts for security programs that are unique to your organization.  Email is a commodity service these days.
  • Prepend the tag “[external]” to the subject line of incoming email from the Internet to serve as a visual cue for employees (a toy sketch of this tagging follows the list). It’s not foolproof, particularly in the context of business email compromises where malicious emails can originate locally, but it can help and gives some fodder for awareness training.
  • If you do use a service, such as Proofpoint, that rewrites URLs in emails and/or adds the “[external]” tag, be wary of the way in which you run phishing simulation exercises. If the simulation emails appear to come from outside the organization, but do not have the “[external]” tag, or do not have URLs rewritten the way all other external emails do, employees will quickly learn to identify the simulation emails by those characteristics, rather than the characteristics you want them to observe.
  • Tailor awareness training by role. If someone has a job that requires them to open attachments from strangers, such as is the case with recruiters, don’t give them training that tells them not to open such attachments.  At best, it’s confusing.  Rather, provide guidance on the proper means for various roles in the organization to do their jobs in a safe manner.
  • Be aware that every hacker and her dog is trying to get into your organization’s email, and act accordingly.  Require two-factor authentication for mail access, particularly for any cloud-based mail that is accessible straight from the Internet.
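
As promised above, here is a toy sketch of the “[external]” subject tag.  In practice this happens at the mail gateway (a milter or your filtering service); the internal domain list and the sample message below are purely illustrative.

```python
#!/usr/bin/env python3
"""Toy sketch of the "[external]" subject tag: prefix the subject of any
message whose From domain is not one of ours.  The domain list and the
sample message are illustrative; real deployments do this in the gateway."""
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAINS = {"example.com"}   # assumption: your own mail domains

def tag_external(msg: EmailMessage) -> EmailMessage:
    """Prefix "[external]" to the subject if the sender domain is not internal."""
    _, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    subject = msg.get("Subject", "")
    if domain not in INTERNAL_DOMAINS and not subject.startswith("[external]"):
        del msg["Subject"]
        msg["Subject"] = f"[external] {subject}"
    return msg

if __name__ == "__main__":
    m = EmailMessage()
    m["From"] = "someone@not-example.org"
    m["Subject"] = "Invoice attached"
    print(tag_external(m)["Subject"])   # -> "[external] Invoice attached"
```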

NCSAM Day 9: The Cloud Isn’t A Magical Place

Traditional IT environments generally required the coordination of different people and different teams to turn on a new service.  There might have been a datacenter person, a network person, a server person, a firewall person, and an application person involved, each playing a part to install a new server, connect it to the network, install and configure the operating system, install and configure the application, and finally, expose the application through the firewall.  Some of those functions were consolidated into the same person or team, but in most cases, each function felt ownership for its role and generally had a set of guidelines and some level of competence, including knowing what questions to ask and when to push back if something seemed too risky with a planned deployment.

All of this necessarily added up to delays and inefficiencies.  Reducing or eliminating these delays is one of the many benefits that cloud computing offers: we no longer need to rack servers; installing operating systems is automated through orchestration tools; the provider offers an easy-to-configure software-defined network; and so on.  The move to cloud reduces or eliminates many of the IT specializations, like sysadmin, network engineer, or firewall engineer.  In the cloud, those functions no longer exist as specialties and, depending on the way in which cloud is used (for example, cloud-native versus rehoming server images to the cloud), may simply not be required at all.

The cloud isn’t magical, though: it still requires good security practices, and those must very likely happen without the watchful eye of the delay-inducing specialists.  The way many organizations successfully adopt the cloud, and related practices such as devops, is by using scripted processes designed to ensure environments are created, configured, and managed in a secure(ish) manner.
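
As one example of such a scripted guardrail, the sketch below flags AWS security groups that allow ingress from anywhere.  It assumes boto3 with read-only EC2 credentials and could run after each deployment or on a schedule; it is a sketch of the idea, not a complete policy check.

```python
#!/usr/bin/env python3
"""Rough sketch of a scripted guardrail: flag AWS security groups that
allow ingress from anywhere (0.0.0.0/0 or ::/0).  Assumes boto3 and
read-only EC2 credentials."""
import boto3

WORLD = {"0.0.0.0/0", "::/0"}   # "anywhere" CIDRs for IPv4 and IPv6

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_security_groups")
for page in paginator.paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            ranges = [r["CidrIp"] for r in rule.get("IpRanges", [])] + \
                     [r["CidrIpv6"] for r in rule.get("Ipv6Ranges", [])]
            if WORLD & set(ranges):
                # FromPort/ToPort are absent for all-protocol rules, hence the default.
                print(f"Open to the world: {sg['GroupId']} ({sg['GroupName']}) "
                      f"ports {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}")
```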

All this despite most cloud providers’ claims that their cloud is “secure”.  Hopefully it’s apparent what the providers mean, and what they don’t mean:  generally, their description as “secure” refers to the components of the cloud infrastructure that the provider is responsible for managing, and it is understood that the cloud consumer is responsible for managing and securing everything else, which is quite a lot.

Embracing cloud isn’t just about saving capital expenses and laying off administrators.  The agility and speed require even tighter processes than traditional IT, but those processes can hopefully be scripted, automated, and orchestrated.  An organization moving to the cloud needs to invest in the right skills and tools to keep the environment secure.  Unfortunately, those skills are in high demand right now, but that is the tradeoff.

NCSAM Day 8: Work on Your Policies

In many organizations, security policies and standards are unapproachably long and complex, or are so high-level that the reader must be a security expert to fill in missing details.  Security policies, standards, processes, and procedures must be written for the people who need to follow, implement, and interpret them, not for the people who write them.  These documents need to clearly define expectations and outcomes in a way that can be understood and implemented.

For example, a policy might state “You may not copy files containing company confidential information to USB drives.”

But, what about copying those files to other types of devices, like a home NAS drive that is exposed to the Internet?  Or someone’s clever home-brew cloud backup system using an unsecured S3 bucket? Or a cell phone via Bluetooth?  And how should employees legitimately back up their data?  What happens when they need to copy confidential files to a USB drive?  Do they get to figure out the proper controls to apply?

This extends to policies that apply to IT and infosec teams, too.  Define the desired outcomes and the guard rails that need to be applied, at the appropriate level of specificity for the type of documentation (policy, process, procedure, and so on); ensure employees are familiar with those documents; provide help interpreting the requirements for edge cases; and fold any lessons learned back into policy enhancements and FAQs.