Seven Critical Things To Protect Your Infrastructure and Data

Given some recent happenings in the world, I felt it important to get the word out on a few really key things we need to do/stop doing/do differently as we manage our infrastructure to help prevent data breaches.  This is probably more relevant to IT people, like sysadmins, so here goes…

  1. KEEP A FREAKING INVENTORY OF YOUR SYSTEMS, THEIR IP ADDRESSES, THEIR FUNCTIONS, AND WHO TO CONTACT.  Why is this so hard?  Keep it up to date.  By the way, we know this is hard, because it is the #1 control on the CIS Top 20 Critical Cyber Security Controls.  If you’re all cloud-y, I’m sure you can find a way to stick some inventory management into your Jenkins pipeline.  (There’s a rough sketch of an inventory sanity check after this list.)
  2. Monitor the antivirus running on your servers.  Unless the server is a file server, if your AV detects an infection, one that you’re reasonably confident is not a false positive, you should proceed immediately to freak out mode.  While workstations ideally wouldn’t be exposed to viruses, we intuitively know that the activities of employees, like browsing the internet, opening email attachments, connecting USB drives, and so on, will put a workstation in contact with a steady stream of malware.  And so, seeing AV detections and blocks on workstations gives us a bit of comfort that the controls are working.  You should not feel that level of comfort with AV hits on servers.  Move your servers to different group(s) in the AV console, create custom reports, integrate the console with an Arduino to make a light flash or to electrify Sam’s chair – I don’t really care how you are notified, but pay attention to those events and investigate what happened (one way to pick them out is sketched after this list).  It’s not normal, and something is very wrong when it does happen.
  3. If you have determined that a server is/was infected with malware, please do not simply install the latest DAT file into your AV scanner and/or run Malwarebytes on the server and put the system back into production.  I know we are measured by availability, but I promise you that, on average, handling the infection properly will cause you far, far, far less pain and downtime than the quick clean-and-return approach.  When a server is infected, isolate it from the network and try to figure out what happened, but do not put it back into production.  You might be able to clean the malware with some tool like Malwarebytes, but you have no idea if there is a dropper still present, what else was changed on the system, or what persistence mechanisms may have been implanted.  Build a new system, restore the data, and move on, while trying to figure out how this happened in the first place.  This is a great advantage of virtualized infrastructure, by the way.
  4. If you have an infected or compromised system in the environment, check other systems for evidence of similar activity.  If the environment uses Active Directory, quickly try to determine whether any administrative accounts are likely compromised, and if so, it’s time to start thinking about all those great ideas you’ve had… you know, the ones about how you would do things differently if you were able to start over?  This is probably the point at which you will want to pull in outside help for guidance, but there is little that can be done to assure the integrity of a compromised domain short of rebuilding it.  Backups, snapshots, and good logging on domain controllers can help you return to operations more quickly, but you will need to be wary of any domain-joined system that wasn’t rebuilt.
  5. Periodically validate that you are collecting logs from all the systems that you think you should be, and ensure you have the ability to access those logs quickly.  Major incidents rarely happen on a Tuesday morning in January.  They usually happen late on the Friday of a long weekend, and if Sally is the only person who has access to the log server and she just left for a 7-day cruise, you’re going to be hurting.  (A rough sketch of this kind of check is also below, after the list.)
  6. Know who to call when you are in over your head.  If you’re busy trying to figure out whether someone stole all your nuclear secrets, the last thing you want to be doing is interviewing incident response vendors, getting quotes, and then chasing approval for a purchase order.  Work that stuff out ahead of time.  Most 3rd party incident response companies offer retainer agreements.
  7. Know when you are in over your head.  The average IT person believes they have far above average knowledge of IT[1], but the tactics malware and attackers use may not make sense to someone who isn’t familiar with them.  This, by the way, is why I am a strong advocate for IT staff, and network/system admins in particular, spending some time learning about red team techniques.  Note, however, that this can have a significant downside[2].
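
A few of these lend themselves to small scripts, so here are some rough sketches.  For item 1, the inventory doesn’t have to be fancy to be useful.  The sketch below is a minimal sanity check; the file names and columns (inventory.csv with hostname, ip, function, owner; observed_ips.txt from whatever ground truth you have, like DHCP leases or a network scan) are made up for the example, so adjust to taste.

```python
#!/usr/bin/env python3
"""Rough inventory sanity check -- a sketch, not a product.

Assumes a CSV named inventory.csv with columns:
    hostname, ip, function, owner
and a plain-text observed_ips.txt with one IP per line, exported from
wherever you can get ground truth (DHCP leases, the AV console, a scan).
Both file names are made up for this example.
"""
import csv

REQUIRED = ("hostname", "ip", "function", "owner")

def load_inventory(path="inventory.csv"):
    with open(path, newline="") as fh:
        return list(csv.DictReader(fh))

def main():
    inventory = load_inventory()

    # An inventory entry with no owner or no stated function is only
    # half an inventory -- flag it.
    for row in inventory:
        missing = [f for f in REQUIRED if not (row.get(f) or "").strip()]
        if missing:
            name = row.get("hostname") or row.get("ip") or "<unnamed>"
            print(f"INCOMPLETE: {name} is missing {', '.join(missing)}")

    # Flag IPs seen on the network that the inventory knows nothing about.
    known_ips = {row["ip"].strip() for row in inventory if row.get("ip")}
    with open("observed_ips.txt") as fh:
        observed = {line.strip() for line in fh if line.strip()}
    for ip in sorted(observed - known_ips):
        print(f"UNKNOWN HOST: {ip} is on the network but not in the inventory")

if __name__ == "__main__":
    main()
```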
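
For item 2, the right answer depends entirely on your AV console, so treat this as a sketch against a hypothetical detections.csv export (timestamp, host, group, threat, action) rather than any vendor’s real API.  The group names, addresses, and local mail relay are all placeholders.

```python
#!/usr/bin/env python3
"""Sketch: make noise about AV detections on servers.

Assumes a periodic CSV export from the AV console named detections.csv
with columns: timestamp, host, group, threat, action.  The column names,
group names, addresses, and the local mail relay are placeholders --
wire the alert up to email, a ticket, or Sam's chair as you see fit.
"""
import csv
import smtplib
from email.message import EmailMessage

SERVER_GROUPS = {"Servers", "Domain Controllers"}   # your server groups in the AV console
ALERT_TO = "soc@example.com"                        # placeholder address

def alert(det):
    msg = EmailMessage()
    msg["Subject"] = f"AV detection on SERVER {det['host']}: {det['threat']}"
    msg["From"] = "av-monitor@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(
        f"{det['timestamp']}  {det['host']} ({det['group']})\n"
        f"Threat: {det['threat']}  Action: {det['action']}\n"
        "This is a server.  Investigate now; do not just re-scan and move on."
    )
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)

def main():
    with open("detections.csv", newline="") as fh:
        for det in csv.DictReader(fh):
            if det.get("group") in SERVER_GROUPS:
                alert(det)

if __name__ == "__main__":
    main()
```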
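
And for item 5, checking that logs are actually arriving can be as simple as comparing the hosts you expect (that inventory again) against what has shown up on the log server recently.  The directory layout (/var/log/remote/<hostname>/) and the expected_hosts.txt list are assumptions; adapt them to however your log server actually stores things.

```python
#!/usr/bin/env python3
"""Sketch: are all the systems we expect actually sending logs?

Assumes a syslog-style layout of /var/log/remote/<hostname>/... on the
log server, and an expected_hosts.txt list with one hostname per line
(ideally generated from the inventory).  Both are assumptions.
"""
import time
from pathlib import Path

LOG_ROOT = Path("/var/log/remote")
MAX_SILENCE = 24 * 3600   # complain if a host has sent nothing in 24 hours

def newest_mtime(host_dir: Path) -> float:
    """Most recent modification time of any file under this host's directory."""
    times = [p.stat().st_mtime for p in host_dir.rglob("*") if p.is_file()]
    return max(times, default=0.0)

def main():
    with open("expected_hosts.txt") as fh:
        expected = {line.strip() for line in fh if line.strip()}

    now = time.time()
    for host in sorted(expected):
        host_dir = LOG_ROOT / host
        if not host_dir.is_dir():
            print(f"MISSING: no logs have ever been received from {host}")
        elif now - newest_mtime(host_dir) > MAX_SILENCE:
            print(f"SILENT: {host} has not logged anything in over 24 hours")

if __name__ == "__main__":
    main()
```

All three are the kind of thing you would run from cron and ignore until they complain.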


1. Yes, I made that up, but Dunning-Kruger tells me I’m probably right.  Or maybe I am just overconfident in my knowledge of human behavior…

2. Red team is sexy, and exposing sysadmins to those tactics may cause a precipitous drop in the number of sysadmins and a sudden glut of penetration testers. Caveat Emptor.
