Self-hosting E-Mail

I recently read this post by a member of the infosec.exchange community about someone’s struggles with self-hosting email. I first started hosting my own email in 1997 and I will admit, it’s been a titanic pain in the ass.

I’ve had two main issues:

  1. filtering out spam while allowing legitimate mail through
  2. ensuring mail is delivered, which is the topic of the post linked above

E-mail has become a vital utility for many people, my family included. If legitimate incoming mail is rejected, or outgoing email is not delivered, it can be a nightmare. THEY JUST WANT EMAIL TO WORK. Like turning on the faucet or a light.

A number of years ago, I gave in, to an extent, and “wrapped” my email around third-party providers: MXGuardDog filtered incoming email, and MailGun delivered outgoing email. MailGun was indeed the only way I could reliably get email delivered to the likes of gmail.com from my own mail servers, hosted in various cloud and VPS providers over the years.

Recently, I had an issue with spammers fabricating* sender addresses on the “infosec.exchange” domain. This prompted me to set up SPF, DMARC, DKIM, and even DNSSEC for infosec.exchange.

At about the same time, I got a bill from MailGun – $15 for the most recent month due to the number of new accounts that had recently joined. This made me wonder how bad things would be without using MailGun. About 80% of signups on infosec.exchange use gmail.com addresses (protonmail is the next highest), so I removed MailGun from the mail flow and tested deliverability to gmail.com. And it worked! I removed the SPF/DMARC/DKIM/DNSSEC records and tried again, and found my mail was rejected.

I am sure that the large mail providers will blacklist my IP/domain at the drop of a hat should I become a source of spam, or even of what they perceive to be spam, but it appears that they’re using some fairly straightforward standards that we can adopt pretty easily.
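
For illustration, here is a minimal sketch of how those records can be checked from the outside. It assumes the third-party dnspython package; the domain and DKIM selector shown are placeholders, and this is only a sanity check of the published DNS records, not how Virtualmin or the receiving providers validate anything.

    import dns.resolver

    DOMAIN = "example.com"     # placeholder: substitute the domain you send mail from
    DKIM_SELECTOR = "mail"     # placeholder: whatever selector your DKIM signer uses

    def txt_records(name):
        # Return the TXT record strings for a DNS name, or an empty list if none exist.
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    checks = {
        "SPF":   [r for r in txt_records(DOMAIN) if "v=spf1" in r],
        "DMARC": txt_records("_dmarc." + DOMAIN),
        "DKIM":  txt_records(DKIM_SELECTOR + "._domainkey." + DOMAIN),
    }

    for label, records in checks.items():
        print(label, "OK" if records else "MISSING", records)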

One last note: I am using Virtualmin to self-host my email, and while there are aspects of Webmin/Virtualmin that make me crazy, setting up DMARC, DKIM, SPF, and DNSSEC is very simple with it.

*Well, they were not totally fabricated – usernames in the fediverse look like email addresses, but they are not, and it appears that spammers are scraping websites, collecting what appear to be legitimate email addresses to use as the “From:” address in their spam campaigns.

I Made a Spammy Mistake

There’s a password that I know and love, but I can’t use because it was stolen in the breach of some site long, long ago, and so it’s part of many dictionaries used for brute forcing passwords.

I run a bunch of cloud servers for various personal purposes and have things respectably locked down.  Ansible scripts perform initial hardening of new operating systems and keep those systems up-to-date with minimal (or no) intervention.  Root logins are disabled.  Logins require a yubikey and a password.

I recently set up a new server I rent through Hetzner.  It’s a beast for the price, by the way.  I installed the Webmin/Virtualmin combo, which makes managing multiple domains on the same system quite simple.

Yesterday, I started getting a flurry of delivery rejections and out of office notifications to one of my email addresses.  One of the rejections included a full, raw copy of the email that caused the rejection – sure enough, someone was sending spam through my shiny new server.

It took me a minute to realize what was happening.  Virtualmin uses the Postfix mail server, and Postfix is configured to use SASL for authenticating users.

Some enterprising and dedicated person had been brute-forcing SMTP AUTH sessions since about the time the server came online and hit on the combination of my local username and the previously mentioned bad password.  SASL doesn’t require YubiKey authentication, and I didn’t realize that Virtualmin would authenticate local Unix accounts, not just email accounts created through Virtualmin.  In hindsight, it’s obvious why it worked: even the Virtualmin email IDs are added as Unix users in the name@domain.tld format.
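
For anyone in a similar setup, here is a minimal sketch of the kind of check that would have caught this sooner: scan the Postfix log for repeated SASL authentication failures and see who is hammering the server. The log path and the exact wording of the failure line vary by distribution, so treat both as assumptions to adjust for your system (fail2ban’s postfix-sasl jail does this properly).

    import re
    from collections import Counter

    LOG_PATH = "/var/log/mail.log"   # common Debian/Ubuntu location; adjust as needed
    # Postfix smtpd failures typically look like:
    #   warning: unknown[203.0.113.5]: SASL LOGIN authentication failed: ...
    PATTERN = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\].*SASL .*authentication failed")

    failures = Counter()
    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                failures[match.group(1)] += 1

    # Print the noisiest source IPs first -- candidates for a firewall block or fail2ban jail.
    for ip, count in failures.most_common(20):
        print(f"{count:6d}  {ip}")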

This really highlights what makes securing environments challenging – there are many, many moving parts and subtle interactions that can lead to problems.

The Role of Cyber Insurance in Security Operations

Lucky for me, Twitter was showing re-runs a few days ago and I saw a link to an article I missed last fall:

Why are cyber insurers incentivizing clients to invest in specific vendors?

It’s a quick and worthwhile read about a program called the “Cyber Catalyst” by insurance broker Marsh.  The program maintains a roster of cyber security products and services endorsed by various cyber insurance providers.  The criteria used to evaluate candidate products are as follows:

Participating insurers evaluated the solutions along six criteria:

  • Reduction of cyber risk: demonstrated ability to address major enterprise cyber risk such as data breach, theft or corruption; business interruption; or cyber extortion.
  • Key performance metrics: demonstrated ability to quantitatively measure and report on factors that reduce the frequency or severity of cyber events.
  • Viability: client-use cases and successful implementation.
  • Efficiency: demonstrated ability of users to successfully implement and govern the use of the product to reduce cyber risk.
  • Flexibility: broad applicability to a range of companies/industries.
  • Differentiation: distinguishing features and characteristics.

In a world full of security snake oil, an objective list like this is certainly helpful.  I am, however, at least a little concerned about selection bias creeping into the list.  If mature organizations that manage security well tend to use a particular service, that service may be the unfair beneficiary of the good practices employed by the organizations that use it.

But never mind that.  I made a commitment to myself that I would stop being yet another poo tosser that simply flings dung at people who are trying to help advance the state of security, and instead actually offer constructive ideas.

The missing pieces are people and processes.  These are hard to objectify, but it seems within the realm of possibility to create a similarly endorsed set of processes, and even of the skills for IT and security staff, that are known to lead to good outcomes.  I can already hear people lining up to explain how I am wrong, but hear me out: I can all but guarantee two things about the Cyber Catalyst list:

      1. Any given organization can achieve good security outcomes without using any of the Cyber Catalyst services
      2. Any given organization that does use Cyber Catalyst services can still have a bad outcome

Much comes down to how any given organization manages risk, operates IT, and so on.  The Cyber Catalyst list provides a data point for organizations looking to invest in some new security tool or service.  It doesn’t guarantee success.  The situation with people and processes is similar.  Given an inventory of “endorsed processes”, an organization looking to, for example, replace its change management, vulnerability management, or threat hunting processes could draw on exemplars from the endorsed process list.  There are many frameworks out there already, from COBIT to NIST to ISO 27k, but my view is that those would, at best, serve as a framework to organize the endorsed processes, since they don’t themselves provide substantial information on how to actually operationalize them.

People could be handled similarly.  It seems possible, in rough terms, to identify the set of skills that organizations which defend themselves successfully have on staff.  If that effort were successful, and it were “open”, it could serve as a list of skills to develop for individuals looking to enter or advance in the IT security field.

 

Requiring Periodic Password Changes Is (Probably) Still A Good Idea

There is growing momentum behind dropping the periodic password expiration requirement – generally 90 days.  The idea first gained widespread credibility when NIST updated SP 800-63 way back in 2017, advising against requiring password expiration policies.  In recent times, most security thought leaders seem to consider password change policies an outdated, cyber horse-and-buggy remnant of times gone by.  This week, Microsoft released a blog post stating their intention to drop the password expiration requirement from the Windows 10 and Windows Server security baseline.

In concept, I agree with this guidance.  Troy Hunt gives a great enumeration of the many reasons here.  In practice, I have grave concerns with this approach.  I do want to make it clear that I think everyone should be using a password manager with strong, unique passwords for each service, and even better, using multi-factor authentication everywhere it’s supported.  If this were the default condition, I would have no objections to dropping password expiration requirements.  But alas, this is not the world we live in.

People are mentally lazy.  I don’t mean this in a derogatory way; I mean that our brains are hard-wired to minimize the amount of effort we apply to any given task.  The net impact of dropping password expiration requirements is that some number of staff will adopt a single long and complex password that they will use everywhere.  As we have seen repeatedly and consistently over many years, internet-based services are a) terrible at storing passwords in a secure manner; and b) terrible at keeping their authentication databases secure.  This inevitably sets up a situation where I create a complex password for work, with no intention or requirement to ever change it, then also use it on my favorite horoscope website (everyone needs a methodology for making security-related decisions, right?).  My horoscope website is compromised, and now my work password that never changes (probably along with my work email address) is out in the wild.

My main objection to dropping password expiration requirements is that it enables employees to use a “work” password everywhere, whereas it is generally infeasible to do so with a password expiration policy in place.  I have many other tangentially related concerns, many of which people who work in incident response will recognize: adversaries in a network are able to collect non-expiring credentials from obscure places, like old backups and documentation, and so on.  In my experience, these passwords are often already a problem because people will simply iterate a prepended or appended element of their passwords (Password1, Password2, etc.), which can often be easily guessed by a targeted intruder.

Like many good ideas (looking at you, Active Directory), the benefits arise from a certain ecosystem being in place.  Organizations often want to embrace the aspects of a new paradigm that they like, but not the parts that are inconvenient or expensive (see: my disdain for Active Directory).  There are ways to help mitigate this concern, such as periodically comparing recently breached passwords to those used by employees and immediately disabling or changing any matches found.  However, much like properly securing Active Directory, nearly no one does this, instead taking the “quick win” of disabling password expiration because that is now industry best practice.
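
As an illustration of the kind of check I mean, here is a minimal sketch using the Pwned Passwords k-anonymity API from Have I Been Pwned. Only the first five characters of the password’s SHA-1 hash leave the machine; matching happens locally. In practice an organization would run something like this against candidate passwords at the time they are set, rather than against stored plaintext (which it shouldn’t have).

    import hashlib
    import urllib.request

    def times_pwned(password):
        # Return how many times a password appears in the Pwned Passwords corpus.
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # Only the 5-character hash prefix is sent; the API returns all suffixes in that range.
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as response:
            body = response.read().decode("utf-8")
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(times_pwned("Password1"))   # a long-breached example; expect a very large count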

 

The Road To Hell Is Paved With Automation And Orchestration Tools

Automation and orchestration tools have helped IT focus more on creating value for customers and users, and less on keeping the lights on.  These tools, combined with cloud-based infrastructure, enable the streamlined workflows, scalability, and performance we have come to expect, but they also create new concentrations of risk in our infrastructures.

IT has long operated in a mode of prioritizing security activities: generally “production” systems are prioritized over “lab” systems, and for good reason.  Lab systems were generally not mission critical to an organization and were typically squirreled away in the bowels of an organization’s network.

I often see organizations employing DevOps continuing to focus on protecting the “production” environment: the business applications, customer-facing web applications, and so on.  In fact, automation, orchestration, and cloud infrastructure create new and innovative security capabilities, such as spinning up new servers that are patched and tested, then rotated in to replace the existing, unpatched servers, which are summarily destroyed upon being replaced.  Unfortunately, I also see these orchestration and automation tools treated like legacy “lab” systems.

Quite the contrary, these automation and orchestration systems should be treated with at least the same level of diligence as the platforms they manage, hopefully for obvious reasons.

A public example of what can happen is the recent breach of Matrix.org’s Jenkins server.  Note: the Matrix.org team should be commended for their transparency and speed in responding to this incident.  I do not claim to know the reasons that their Jenkins server had unpatched vulnerabilities or the details of how those vulnerabilities were exploited, only that this situation aligns with my observations from other organizations.
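
As a small, concrete example of pulling these systems into routine patch management, here is a minimal sketch that asks a Jenkins server what version it is running; Jenkins normally reports this in an X-Jenkins response header. The URL is a placeholder, and this is only a starting point for tracking the server against published advisories, not a substitute for proper vulnerability management.

    import urllib.error
    import urllib.request

    JENKINS_URL = "https://jenkins.example.internal/"   # placeholder for your own instance

    try:
        request = urllib.request.Request(JENKINS_URL, method="HEAD")
        headers = urllib.request.urlopen(request).headers
    except urllib.error.HTTPError as err:
        # Jenkins often answers 403 to anonymous requests but still includes the header.
        headers = err.headers

    print(JENKINS_URL, "reports Jenkins version:", headers.get("X-Jenkins", "unknown"))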

Orchestration and automation tools are a very attractive target for our adversaries since they a) are generally not well protected or monitored, and b) enable rapid attacks on an organization’s most important and sensitive systems.  I implore you to work with your respective IT teams to ensure that these tools are managed and protected appropriately.  And yes, Active Directory is one of these tools.

 

Survivorship Bias and Infosec

A great tweet in my feed recently tied survivorship bias to IT security:

Books on cognition and cognitive biases often reference this somewhat famous bit of thinking by a man named Abraham Wald.  If you are interested, you can read more details about Wald’s observation here.

In this case, planes with certain areas damaged never made it back, and so no (or few) planes analyzed had damage to those areas.  All of the damage observed happened on parts of the plane that could be damaged while allowing the plane to return.

Nassim Taleb called this “silent evidence” in his book “The Black Swan”.  Arthur Conan Doyle paid a bit of homage to this phenomenon in the plot of a Sherlock Holmes story in which someone was murdered, but upon questioning, the neighbors didn’t recall hearing the victim’s dog barking.  The absence of a barking dog in Holmes’ story indicated that the perpetrator was someone the dog knew well.

As Florian points out in his tweet, we often react to the data we have, rather than being aware that we are unaware of missing data when we form a hypothesis or make a recommendation – such as investing in a new WAF on the strength of logs showing attacks on a web server.  It’s a great point: the decision to buy a WAF is made without the benefit of knowing which of the myriad other attack vectors are being used, possibly successfully, against the organization, because there is no log information about them.

This raises a complex question: how do we know what we don’t know?  Ultimately, we as security people have to take action and don’t have the luxury of philosophizing on the nature of uncertainty.  We must make decisions under uncertainty, often quickly.  What to do?

Here is the way I would approach this: I may have logs indicating persistent attacks on our web site, but the question I would ask is whether we have evidence that any of those attacks are successful, or are likely to be successful.  There’s nothing surprising about a public web site being attacked – everything with an internet-routable IP address is constantly being scanned, probed, infected, and/or compromised.  Since I do not have unlimited money to spend on security, I have to assess whether the web server is the most pressing thing to address.  I need to consider what data I’m missing.  In this case, I’m missing the data about all the other types of attacks that might be happening.  Am I able to detect those attacks?  If not, would I know if any were successful?

When approaching a problem like this, it’s good to start with the basics.  If I am not able to detect various types of attacks, it’s tough to prioritize where to implement control enhancements; therefore, a good place to start is improving visibility using tools such as EDR, antivirus, IDS, and so on, depending on the situation.  It’s been my experience that many organizations are simply unable to detect successful attacks, and so live blissfully ignorant of their data walking out the door.  The act of enhancing visibility into such attacks often identifies serious problems that need to be addressed quickly.  It’s at this point that I can compare the importance of a WAF against some other control.

Enhancing visibility doesn’t (necessarily) lead to improved controls as quickly as, say, running out and buying a WAF, but the visibility enhancements will help with prioritization and with building a business case for funding additional security controls.  The investment in visibility is not throw-away: even after new preventive controls are in place, the ability to detect malicious activity is still vitally important, and can help refine other controls or identify shifts in adversarial tactics.

One problem I’ve experienced with improving visibility during my career is that, at least for a period of time, the perceived security of a company seems to take a serious turn for the worse, because I’m detecting things that were happening previously but which we simply didn’t know were happening.  Any time we engage in such a program, it’s important to set expectations appropriately.

For more reading on this, Taleb’s “The Black Swan”, mentioned above, is a good place to start.

 

Thank you and My #infosec Hopes For 2019

I already published my ground-breaking infosec predictions for 2019, but I also want to say thank you to all the great people that I’ve had the privilege to work with and have met, even if only through social media.  I appreciate every one of you.

One of the things that I’ve come to learn about human behavior is that we tend to more consistently accomplish our goals if we make a commitment to others.  Here are the items I want to accomplish, or at least get started, in 2019:

  1. Reinvigorate the Defensive Security Podcast.  I’ve let myself become increasingly busy and the frequency of new episodes has suffered as a result.
  2. Develop my interest and passion about the intersection of behavioral economics/behavioral psychology and information security into something meaningful: a blog, a podcast, a book.  I don’t know what form this will take yet, but I believe there is a significant opportunity to advance the field of information security.
  3. Grow my previous efforts to help people enter the information security field.  I don’t know what form this will take yet, but in the past I’ve provided e-books and tried to match people looking for work with companies looking for employees, mostly via Twitter.
  4. Learn much more about cloud computing, SDN, and how to incorporate security and resiliency into these environments, as well as how to capitalize on the new capabilities these environments provide for security and resiliency.
  5. Develop an idea I have had around “componentizing” IT infrastructure designs, in a similar manner to “design patterns” for software development.  I wrote a bit about this in the past.  I don’t know what form this will take – maybe a wiki or something similar.
  6. Deliver a presentation, without completely panicking, at a conference.

Happy New Year, my friends!

Predictions for 2019

This is the time of year when all security vendors take stock of what they need to sell in the coming months and produce a litany of terrifying predictions that can be thwarted if you, the hapless victim, will start writing purchase orders.  While I don’t have anything to sell you here at infosec.engineering, I have been working feverishly for the past several months on a really insightful and grand prediction for 2019.  My hope is that this prediction will help organizations around the world to better prioritize their security spending and resources.  After all, what good is reading a prediction (and all the attendant bloviation) if it can’t help you in some way?  Well, on with the show:

Jerry’s cyber security prediction for 2019:


2019 will pretty much be like 2018.

Now, I know that you are probably reeling here, mentally recounting how this knowledge impacts your organization’s capital and hiring plans for 2019 and what you would have done differently had you known this a few months ago, but there is always time for course corrections.

The controls and processes that were important way back in 2018 continue to be important in 2019, and possibly even more so.

Strive to design robust IT environments supplemented by appropriate controls and monitoring.  Hire and/or develop people who know what that means.  Stay abreast of trends and test the effectiveness of your controls considering those trends and respond accordingly.  Or don’t: I can always use new material to discuss on the Defensive Security Podcast.

Opportunity in Adversity

My wife and I drove from our home in Atlanta to Panama City, Florida yesterday.  It’s been approximately 2 months since Hurricane Michael ripped through this part of Florida.  We are here to deliver Christmas presents we and our friends, neighbors, and coworkers donated. 

I’ve seen the aftermath of fires, floods, and tornadoes many times.  What I saw here was beyond anything I have experienced.  In one neighborhood we visited, nearly every house on the block had blue tarps on the roof.  The homeowner we spoke with said she felt lucky, because all of the houses on the next block were gone.  Simply gone.  I saw houses torn in half and entire forests of trees snapped halfway up.  Many buildings in the area have one or more exterior walls blown out, as if a bomb went off inside; this apparently happens when wind finds a way in on the other side of the building.  The damage goes on for miles and miles.  I’ve been told that the area I visited, while bad, was not the worst hit by a long shot, because it was on the western side of Michael’s eye, meaning the winds blew out to sea.  The area to the east not only had roughly the same winds, but also massive storm surge from the wind blowing the Gulf of Mexico inland.

From what I saw, older structures and trees suffered most, which is not terribly surprising.  I was struck by the metaphor, albeit on a much different level of significance, that this situation offers for information technology.  Buildings constructed 30 or 40 years ago were not designed to the same standards as those built along Florida’s coast today.  As storms pass through, the older structures can be destroyed, as many were in Hurricane Michael.

I see a similar story unfold with corporate IT.  Older environments are often not designed to withstand the attacks leveled at them today.  IT environments designed today will not withstand attacks in five or ten years.  Upgrading these environments to withstand those attacks is often prohibitively expensive, at least as assessed prior to a devastating attack.

We seem to be in a situation where all but the most forward-looking organizations wait until a storm comes to force the investment needed to modernize their IT.  The challenge, as we repeatedly see, is that the ultimate victims of such attacks are not so much the organization itself, but rather the people whose data the organization holds.  Because of that, the calculus performed by organizations seems to favor waiting, either knowingly or unknowingly, for the storm that forces structural enhancements to their IT environments.


Thoughts About Counter-Forensics and Attacks on Logs

This morning, I read this story on ZDNet about a report from Carbon Black.  The report indicates that 72% of Carbon Black’s incident response group reported working on cases where the adversary destroyed logs.  Generally, such stats aren’t particularly insightful for a variety of reasons[1]; however, it should be intuitive that an adversary has a vested interest in obscuring his or her illicit activities on a compromised system.

Control number 6 of the CIS Top 20 Critical Security Controls touches on this point by recommending that systems send logs to a central log collector, but the intent there is more about aggregation and monitoring, such as with a SIEM, than about tamper resistance, though that is a likely side effect.  Sending logs to a remote system is a good way to ensure proper logs exist to analyze in the wake of a breach.  Also note that, in addition to deleting locally stored logs, many adversaries will disable a system’s logging service to prevent new logs from being stored locally or sent to a log collector.
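
As a small illustration at the application level, here is a minimal sketch of sending a program’s own logs to a remote collector in addition to a local file, using Python’s standard logging module. The collector hostname and application name are placeholders; fleet-wide forwarding of system logs would normally be handled in rsyslog, syslog-ng, or journald configuration rather than in application code.

    import logging
    import logging.handlers

    logger = logging.getLogger("myapp")          # "myapp" is a placeholder application name
    logger.setLevel(logging.INFO)

    # Local copy for day-to-day troubleshooting.
    logger.addHandler(logging.FileHandler("myapp.log"))

    # Remote copy, sent via syslog (UDP/514) to a collector the host itself does not control.
    remote = logging.handlers.SysLogHandler(address=("logcollector.example.internal", 514))
    logger.addHandler(remote)

    logger.info("application started")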

Here are a few recommendations on logging:

  1. Send system logs to a log collector that is not part of the same authentication domain as the systems generating the logs.  For example, the SIEM system(s) collecting and monitoring logs should not be members of the same Active Directory domain as the systems that generate the logs.
  2. Configure the SIEM to alert on events that indicate logging services were killed (if possible).
  3. Configure the SIEM to generate an alert after a period of inactivity from any given log source (a rough sketch of this check follows below).
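
Here is a rough sketch of that third recommendation. The last_seen map is hypothetical stand-in data; in a real deployment the SIEM or log collector would supply the per-source timestamps and raise the alert itself.

    from datetime import datetime, timedelta, timezone

    MAX_SILENCE = timedelta(minutes=30)   # tune per log source

    # Hypothetical stand-in: timestamp of the last event received from each source.
    now = datetime.now(timezone.utc)
    last_seen = {
        "web-01":  now - timedelta(minutes=5),
        "mail-01": now - timedelta(hours=2),   # suspiciously quiet
    }

    def quiet_sources(last_seen, as_of):
        # Return the sources that have not sent a log within MAX_SILENCE.
        return [name for name, seen in last_seen.items() if as_of - seen > MAX_SILENCE]

    for source in quiet_sources(last_seen, now):
        # In practice this would raise a SIEM alert or page the on-call analyst.
        print("ALERT: no logs from", source, "in over", MAX_SILENCE)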

 

[1] I need to write a blog post on the problems with reports that are based on surveys of a population.  For now, I’d encourage you to read up on these problems yourself.  It’ll make you a better consumer and a better person.