I already published my ground-breaking infosec predictions for 2019, but I also want to say thank you to all the great people I’ve had the privilege to work with and meet, even if only through social media. I appreciate every one of you.
One of the things that I’ve come to learn about human behavior is that we tend to more consistently accomplish our goals if we make a commitment to others. Here are the items I want to accomplish, or at least get started, in 2019:
Reinvigorate the Defensive Security Podcast. I’ve let myself become increasingly busy and the frequency of new episodes has suffered as a result.
Develop my interest and passion about the intersection of behavioral economics/behavioral psychology and information security into something meaningful: a blog, a podcast, a book. I don’t know what form this will take yet, but I believe there is a significant opportunity to advance the field of information security.
Grow my previous efforts to help people enter the information security field. I don’t know what form this will take yet, but in the past I’ve provided e-books and tried to match people looking for work with companies looking for employees, mostly via Twitter.
Learn much more about cloud computing, SDN, and how to incorporate security and resiliency into these environments, as well as how to capitalize on the new capabilities these environments provide for security and resiliency.
Develop an idea I have had around “componentizing” IT infrastructure designs, in a similar manner to “design patterns” for software development. I wrote a bit about this in the past. I don’t know what form this will take – maybe a wiki or something similar.
Deliver a presentation, without completely panicking, at a conference.
This is the time of year when all security vendors take stock of what they need to sell in the coming months and produce a litany of terrifying predictions that can be thwarted if you, the hapless victim, will start writing purchase orders. While I don’t have anything to sell you here at infosec.engineering, I have been working feverishly for the past several months on a really insightful and grand prediction for 2019. My hope is that this prediction will help organizations around the world better prioritize their security spending and resources. After all, what good is reading a prediction (and all the attendant bloviation) if it can’t help you in some way? Well, on with the show:
Jerry’s cyber security prediction for 2019:
2019 will pretty much be like 2018.
Now, I know that you are probably reeling here, mentally recounting how this knowledge impacts your organization’s capital and hiring plans for 2019 and what you would have done differently had you known this a few months ago, but there is always time for course corrections.
The controls and processes that were important way back in 2018 continue to be important in 2019, and possibly even more so.
Strive to design robust IT environments supplemented by appropriate controls and monitoring. Hire and/or develop people who know what that means. Stay abreast of trends and test the effectiveness of your controls considering those trends and respond accordingly. Or don’t: I can always use new material to discuss on the Defensive Security Podcast.
My wife and I drove from our home in Atlanta to Panama City, Florida yesterday. It’s been approximately 2 months since Hurricane Michael ripped through this part of Florida. We are here to deliver Christmas presents we and our friends, neighbors, and coworkers donated.
I’ve seen the aftermath of fires, floods, and tornadoes many times. What I saw here was beyond anything I have experienced. In one neighborhood we visited, nearly every house on the block had blue tarps on the roof. The homeowner we spoke with said she felt lucky, because all of the houses on the next block were gone. Simply gone. I saw houses torn in half and entire forests of trees snapped halfway up. Many buildings in the area have one or more exterior walls blown out, as if a bomb went off inside; this apparently happens when wind finds a way in on the other side of the building. The damage goes on for miles and miles. I’ve been told that the area I visited, while bad, was not the worst hit by a long shot, because it was on the western side of Michael’s eye, meaning the winds blew out to sea. The area to the east not only had roughly the same winds, but also massive storm surge from the wind pushing the Gulf of Mexico inland.
From what I saw, older structures and trees suffered most, which is not terribly surprising. I was struck by how this situation serves as a metaphor, albeit on a much different level of significance, for information technology. Buildings designed and constructed 30 or 40 years ago were not built to the same standards as those going up along Florida’s coast today. As storms pass through, the older structures can be destroyed, as many were in Hurricane Michael.
I see a similar story unfold with corporate IT. Older environments are often not designed to withstand the attacks leveled at them today. IT environments designed today will not withstand attacks in five or ten years. Upgrading these environments to withstand those attacks is often prohibitively expensive, at least as assessed prior to a devastating attack.
We seem to be in a situation where all but the most forward-looking organizations wait until a storm comes to force the investment needed to modernize their IT. The challenge, as we repeatedly see, is that the ultimate victims of such attacks are not so much the organization itself, but rather the people whose data the organization holds. Because of that, the calculus performed by organizations seems to favor waiting, either knowingly or unknowingly, for the storm that forces structural enhancements to their IT environments.
This morning, I read this story on ZDNet about a report from Carbon Black. The report indicates that 72% of Carbon Black’s incident response group reported working on cases where the adversary destroyed logs. Such stats generally aren’t particularly insightful, for a variety of reasons; however, it should be intuitive that an adversary has a vested interest in obscuring his or her illicit activities on a compromised system.
Control 6 of the CIS Top 20 Critical Security Controls touches on this point by recommending that systems send logs to a central log collector. The intent there is log aggregation and monitoring, such as with a SIEM, rather than tamper resistance, though tamper resistance is a likely side effect. Sending logs to a remote system is a good way to ensure proper logs exist to analyze in the wake of a breach. Also note that, in addition to deleting locally stored logs, many adversaries will disable a system’s logging service to prevent new logs from being stored locally or sent to a log collector.
Here are a few recommendations on logging:
Send system logs to a log collector that is not part of the same authentication domain as the systems generating the logs. For example, the SIEM system(s) collecting/monitoring logs should not be members of the same active directory domain as those that generate the logs.
Configure the SIEM to alert on events that indicate logging services were killed (if possible).
Configure the SIEM to generate an alert after a period of inactivity from any given log source.
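The last recommendation can be sketched in a few lines. This is a minimal, hypothetical example assuming your SIEM (or a cron job beside it) can export a per-source “last event seen” timestamp; the source names and silence thresholds are invented for illustration:

```python
"""Sketch: flag log sources that have gone quiet for too long.

A silent source may mean an adversary killed the logging service or
rerouted logs -- or simply that the host is down. Either way, alert.
"""
from datetime import datetime, timedelta

# Hypothetical inventory: log source -> maximum acceptable silence.
# A busy domain controller should never be quiet for long; a small
# branch firewall may legitimately go hours between events.
MAX_SILENCE = {
    "dc01.example.internal": timedelta(minutes=15),
    "fw-branch.example.internal": timedelta(hours=4),
}

def stale_sources(last_seen: dict, now: datetime) -> list:
    """Return the sources whose silence exceeds their threshold.

    A source missing from last_seen entirely is also flagged, since
    "never heard from" is at least as suspicious as "went quiet."
    """
    alerts = []
    for source, threshold in MAX_SILENCE.items():
        seen = last_seen.get(source)
        if seen is None or now - seen > threshold:
            alerts.append(source)
    return alerts
```

The per-source thresholds matter: a single global timeout either floods you with alerts from naturally quiet devices or misses a domain controller that has been silenced for hours.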
 I need to write a blog post on the problems with reports that are based on surveys of a population. For now, I’d encourage you to read up on these problems yourself. It’ll make you a better consumer and a better person.
As security professionals, we are often put into difficult positions and face difficult choices. Sometimes the ethical path is uncomfortable or unpleasant. We need to hold ourselves to a high standard of ethics and honesty. We often have access to sensitive business information that can benefit us personally. A recent example of how this can go wrong is the criminal cases involving Equifax IT staff who traded Equifax stock using non-public information about the company’s massive data breach before it was announced.
Yesterday, I wrote about the importance of continued learning to stay relevant. A specific area I want to highlight is cloud computing. For better or for worse, cloud computing is the future of IT for most organizations, and as I wrote earlier, the cloud is not magical and brings new security challenges for us to solve. At the same time, cloud computing brings fundamentally new security and recovery capabilities that simply weren’t practical or possible in traditional infrastructure. We have an opportunity to not only help our organizations embrace this transformational technology, but also to make some substantial security enhancements as well. To do this, though, we need to deeply understand cloud computing.
Corporate IT seems to be evolving at an ever-increasing pace. As security professionals who want to stay relevant and employed, it’s our duty to understand these changes. For example, most organizations are in the process of migrating IT to the cloud. However, as I’ve previously written, the cloud is not a magical place and requires an updated set of security skills and approaches.
I have no affiliation with O’Reilly, but my experience is that their Safari Books Online is about the most economical resource to help me stay current. It’s about $300 per year, but provides access to many thousands of technical books, video tutorials, and conference recordings. Videos of conferences such as Velocity, Strata, Fluent, The Artificial Intelligence Conference, OSCON, and others are a great way to get a view of the landscape, and you’ll find plenty of books to explore any particular area more deeply.
Another good place to get information is security conference videos. Adrian Crenshaw maintains an extensive list of recorded conference presentations on his website here: http://www.irongeek.com/
Anyone who has been in the IT security field for a while intuitively understands that there are concepts that apply regardless of technology. That is certainly true; however, understanding the technological and business shifts in the use of technology is necessary for us to stay relevant to our organizations. In this industry, we cannot afford to stand still. We must keep learning and growing; otherwise we will become marginalized and ignored in our organizations.
I spent the first nineteen days of National Cyber Security Awareness Month giving some hopefully useful ideas on improving security in your organization. I’m going to spend the remaining days writing about us as IT and security professionals.
To implement change, we need to be able to influence others, such as our managers, our executives, our CEO, or our board of directors. Without the ability to communicate and influence change, our good ideas for improving security remain just that: ideas.
The way we write is often the first thing people learn about us. It forms the first impression for those we need to communicate with. The ideas we are trying to advance are inextricably linked, in the minds of our readers, with the way the message is presented. Based on what I’ve seen throughout my career, writing is a challenge for many people, and I strongly suspect many good ideas have fallen victim to overly casual writing and a herd of punctuation and spelling mistakes.
Writing is a skill that takes practice. Not just practice, but “deliberate practice,” as Anders Ericsson describes in his book “Peak: Secrets from the New Science of Expertise”. Candidly, improving my ability to write is one of the main reasons I started the infosec.engineering blog and, in particular, why I challenged myself to write a post every day for NCSAM.
In summary, focus on developing your writing ability. You will benefit professionally, and your organization will benefit more from your ideas.
Over the years, I’ve worked on investigating and cleaning up many breaches, and in nearly every single instance, the IT team that designed and managed the environment had no concept that their system could be exploited in the manner it was. Another commonality is that nearly all of those breaches resulted from a chain of weaknesses, some of which were consciously “accepted”. I argue that it is difficult to design a system resilient to attack if one does not know the tactics adversaries use, and it is equally difficult to assess risks without understanding how controls help block adversarial techniques.
For National Cyber Security Awareness Month, my hope is that people responsible for designing and assessing IT environment take time to learn about adversarial tools and techniques to design more robust environments and processes. This is, unfortunately, not a one-time event, though: techniques change over time, and we need to keep up with the latest trends.
The downside, I suppose, to this advice is that red teaming can be quite addictive, and we’ll lose many competent IT people to the pen test puppy mill.
Zero trust networks are quickly becoming all the rage in the IT world. Building proper defenses into each endpoint and relying on strong authentication schemes seems intuitively right. I’ve had several recent discussions with smart people about how dated the old world of network-based firewalls, which grant implicit permissions based on the network location of a particular device (i.e., inside vs. outside the firewall), now seems. That is just another way of saying we should move toward zero trust.
But all this presumes that the “endpoints” can be secured. Many pieces of IT infrastructure, including switches, servers, firewalls, and many other devices contain administrative interfaces that have security that is on par with, or slightly worse than, the average home router. In the past few years, we’ve seen many, many problems with the lights-out management interfaces for various servers; we’ve seen a (so far) non-stop parade of authentication bypass/hardcoded passwords in Cisco devices; and we’ve seen many other devices using various badly configured or exploitable services running on these interfaces, like dropbear, libssh, and others.
These interfaces should NOT be exposed to untrusted networks. That, sadly, means we need to continue architecting, at least to some extent, well-thought-out networks.
The concept of a “management network” is not new. I was first introduced to the concept over 20 years ago, and I suspect the idea was already old by that point. Remember that a management network, by definition, is a concentration of sensitive interfaces and user sessions that have administrative privileges. A lot has been written about the design of management networks by people much smarter than I am, but I’ll give some ideas/observations here:
Ensure that only authorized people and devices are able to connect to the management network
Monitor activities on the management network for indications of unauthorized devices or users
Keep the number of devices on the management network as small as possible; a one-to-one relationship would be optimal, but is often impractical
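The first two ideas above reduce to an inventory problem: know which devices belong on the management network, and flag anything else that shows up. Here is a minimal sketch under the assumption that you can export observed MAC addresses from the management switch’s ARP/CAM tables; the authorized inventory shown is hypothetical:

```python
"""Sketch: flag unknown devices observed on the management network.

Compares MACs seen on the management VLAN (e.g., dumped from the
switch's ARP/CAM tables) against an authorized inventory.
"""

# Hypothetical authorized inventory: MAC address -> description.
AUTHORIZED = {
    "00:1a:2b:3c:4d:5e": "core-switch iLO",
    "00:1a:2b:3c:4d:5f": "SAN controller mgmt port",
}

def unknown_devices(observed_macs) -> set:
    """Return MACs seen on the management network that are not in
    the authorized inventory -- candidates for an alert."""
    return set(observed_macs) - set(AUTHORIZED)
```

Run on a schedule and alert on any non-empty result. MAC addresses can of course be spoofed, so treat this as one detection layer alongside 802.1X or port security, not a replacement for them.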