Lessons From The Neiman Marcus Breach

Bloomberg released a story about a forensic report from Protiviti detailing the findings of their investigation into the Neiman Marcus breach. There are very few details in the story, but what is included is quite useful.

First, Protiviti asserts that the attackers who breached Neiman do not appear to be the same as those who breached Target. While this isn’t a major revelation to many of us, it does point out that numerous criminal groups are capable of successfully pulling off such attacks. From this, we should conclude that these “sophisticated” attacks are not all that hard to perpetrate given the relative reward.

Next, Protiviti was apparently unable to determine the method of entry used by the attackers. While that is unfortunate, it reinforces that we should not focus solely on hardening our systems against initial attack vectors; we also need to put significant effort into protecting our important data and the systems that process and store it. Criminals have a number of options for initial entry, such as spear phishing and watering hole attacks. We need to plan for failure when we design our systems and processes.

The activities of the attackers operating on Neiman systems apparently created nearly 60,000 “alerts” over the course of the intrusion. It is very hard to draw specific conclusions because we don’t actually know what kind of alerts are being referenced. I am going to speculate, based on other comments in the article, that the alerts were from anti-virus or application whitelisting:

…their card-stealing software was deleted automatically each day from the Dallas-based retailer’s payment registers and had to be constantly reloaded.

…the hackers were sophisticated, giving their software a name nearly identical to the company’s payment software so that any alerts would go unnoticed amid the deluge of data routinely reviewed by the company’s security team.

The company’s centralized security system, which logged activity on its network, flagged anomalous behavior of a malicious software program though it didn’t recognize the code itself as malicious or expunge it, according to the report. The system’s ability to automatically block the suspicious activity it flagged was turned off because it would have hampered maintenance, such as patching security holes, the investigators noted.

The 59,746 alerts set off by the malware indicated “suspicious behavior” and may have been interpreted as false positives associated with legitimate software.

However, some of these comments are a bit contradictory. For instance:

payment registers and had to be constantly reloaded

And

it didn’t recognize the code itself as malicious or expunge it

In any event, a key takeaway is that we often already have the data we need to detect that an attack is underway.
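To make that concrete, here is a minimal sketch of the kind of aggregation that turns a daily flood of “false positive” alerts into an obvious signal. This is my own illustration, not anything from the report; the field names, hosts, and threshold are all hypothetical:

```python
from datetime import date

# Hypothetical alert records pulled from a SIEM or log store. The field
# names and values are illustrative only, not taken from the report.
alerts = [
    {"day": date(2013, 7, 15), "host": "register-041", "rule": "unknown-binary-created"},
    {"day": date(2013, 7, 16), "host": "register-041", "rule": "unknown-binary-created"},
    # ... tens of thousands more ...
]

# Count how many distinct days each (host, rule) pair fired. A "false
# positive" that recurs on a payment register every day for months looks
# very different from a one-off noisy alert.
days_seen = {}
for a in alerts:
    days_seen.setdefault((a["host"], a["rule"]), set()).add(a["day"])

for (host, rule), days in sorted(days_seen.items(), key=lambda kv: -len(kv[1])):
    if len(days) >= 14:  # arbitrary threshold: recurs across two weeks or more
        print(f"{host}: '{rule}' fired on {len(days)} distinct days -- review")
```

The point is not the specific query but that recurrence over time is itself a signal, and it is cheap to compute from data most organizations already collect.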

Next is a comment that highlights a common weakness I covered in a previous post:

The server connected both to the company’s secure payment system and out to the Internet via its general purpose network.

Servers that bridge network “zones”, as this Neiman server apparently did, are quite dangerous, and their exploitation is a common trait of many breaches. Such systems should be eliminated.
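One way to find these bridge systems before an attacker does is to audit your inventory for hosts with a foothold in more than one zone. The sketch below is purely illustrative: the zone ranges, host names, and addresses are made up, and in practice they would come from your IPAM, firewall policy, or asset inventory:

```python
import ipaddress

# Hypothetical zone definitions; the address ranges here are invented.
ZONES = {
    "payment": ipaddress.ip_network("10.10.0.0/16"),
    "general": ipaddress.ip_network("10.20.0.0/16"),
}

# Hypothetical inventory of servers and the addresses bound to them.
inventory = {
    "pos-gateway-01": ["10.10.4.20", "10.20.8.5"],   # spans both zones
    "db-cardholder-02": ["10.10.4.30"],
    "web-proxy-01": ["10.20.8.10"],
}

def zones_for(addr):
    ip = ipaddress.ip_address(addr)
    return {name for name, net in ZONES.items() if ip in net}

# Flag any host whose interfaces span the payment zone and the general
# purpose network -- the kind of bridge the Neiman server appears to have been.
for host, addrs in inventory.items():
    spanned = set().union(*(zones_for(a) for a in addrs))
    if {"payment", "general"} <= spanned:
        print(f"{host} bridges zones {sorted(spanned)} -- candidate for elimination")
```

Whether the fix is removing the host, splitting its roles onto separate systems, or forcing its Internet traffic through a controlled egress point will vary, but the first step is knowing which hosts span zones at all.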

Finally, a very important point from the story to consider is this:

The hackers had actually broken in four months earlier, on March 5, and spent the additional time scouting out the network and preparing the heist…

This should highlight for us the importance of a robust ability to detect malicious activity on our networks and systems. While some attacks will start and complete before anyone can react, many of these larger, more severe breaches tend to play out over a period of weeks or months. This has been highlighted in a number of industry reports, such as the Verizon DBIR.
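As a rough illustration of why that matters, long-dwell intrusions leave a long trail. The sketch below, again using entirely hypothetical data, simply measures the span of suspicious activity per host and treats anything that persists for weeks as a priority:

```python
from datetime import datetime

# Hypothetical suspicious-event timestamps keyed by host; in practice these
# would come from the same alert store discussed above.
events = {
    "register-041": [datetime(2013, 3, 5), datetime(2013, 5, 2), datetime(2013, 7, 20)],
    "web-proxy-01": [datetime(2013, 6, 1)],
}

DWELL_THRESHOLD_DAYS = 30  # arbitrary illustrative cutoff

# Flag hosts whose suspicious activity spans weeks or months -- the pattern
# seen in many large breaches, including the months-long window reported here.
for host, stamps in events.items():
    span = (max(stamps) - min(stamps)).days
    if span >= DWELL_THRESHOLD_DAYS:
        print(f"{host}: suspicious activity spans {span} days -- likely long-dwell intrusion")
```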
