I was reading an article earlier today called “Why Hackers Attack Healthcare Data, and How to Protect It” and I realized that this may well be the one-thousandth such story I’ve read on how to protect PHI. I also realized that I can’t recall any of the posts I’ve read being particularly helpful: most contain a few basic security recommendations, usually aligned with the security offerings of the author’s employer. It’s not that the authors of these posts, such as the one I linked to above, are wrong, but if we think of defending PHI as a metaphorical house, each author is describing the view through one particular window. I am sure this is driven by the need for security companies to publish think pieces to establish credibility with clients. I’m not sure how well that works in practice, but it leaves the rest of us swimming in a rising tide of fluffy advice posts, each proclaiming to have the simple answer to our PHI protection woes.
I’m guessing you have figured out by now that this is bunk. Securing PHI is hard, and there is no short checklist that will get it done. First off, you have to follow the law, which prescribes a healthy number of mandatory, as well as some addressable, security controls. But we all know that compliance isn’t security, right? If following HIPAA were sufficient to prevent leaking PHI, we probably wouldn’t need all those thought-leadership posts, would we?
One of the requirements in HIPAA is to perform risk assessments, and the Department of Health and Human Services has a page dedicated to HIPAA risk analysis. I suspect this is where a lot of organizations go wrong, and it is probably the step that all the aforementioned authors are trying to influence in some small way.
Most of the posts I read talk about the epidemic of PHI theft and the underground market for stolen records, then focus on a few countermeasures to prevent PHI from being hacked. But let’s take a step back for a minute and think about the situation here.
HIPAA is a somewhat special case in the world of security controls: its requirements are pretty prescriptive and apply uniformly. Yet we know that companies continue to leak PHI. We keep reading about these incidents in the news, alongside blog posts about how to ensure our firm’s PHI doesn’t leak. We should be thinking about why these incidents are happening, because that tells us where to apply focus, particularly in the required risk assessments.
HHS has a great tool to help us out with this, lovingly referred to as the “wall of shame”. This site contains a downloadable database of all known PHI breaches affecting 500 or more individuals. Since there is a legal requirement to report any such breach, the 1800+ entries give us a lot of data to work with, even if there are undoubtedly yet-to-be-discovered breaches.
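If you want to poke at the data yourself, a short script is all it takes. Here’s a minimal sketch, assuming you’ve saved the portal’s CSV export as breach_report.csv and that it includes a “Type of Breach” column; the export format has changed over the years, so check the headers in your copy.

```python
# Tally breach categories from the HHS breach portal's CSV export.
# Assumes a local file named breach_report.csv with a "Type of Breach"
# column; verify the column names against your actual download.
import csv
from collections import Counter

counts = Counter()
with open("breach_report.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        # A single incident can list several types, e.g. "Theft, Loss".
        for breach_type in row["Type of Breach"].split(","):
            counts[breach_type.strip()] += 1

for breach_type, count in counts.most_common():
    print(f"{count:5d}  {breach_type}")
```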
Looking through the data, it quickly becomes apparent that hacking isn’t the most significant avenue of loss. Over half of the incidents arise from lost or stolen devices, or paper/film documents. This should cause us to consider whether we encrypt every device PHI can be copied onto: server drives, desktop drives, laptop drives, USB drives, backup drives, and so on. Encryption is an addressable control in the HIPAA regulations, and one that many firms seemingly decide to dance around. How do I know this? It’s right there in the breach data. There are tools, though expensive and onerous, that can help ensure data is encrypted wherever it goes.
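As one rough illustration (and emphatically not a replacement for a real endpoint-management product): on a Linux fleet you could flag mounted filesystems that don’t sit on top of a LUKS container. This sketch assumes lsblk with JSON output is available and that LUKS is your encryption layer; BitLocker or FileVault shops would need equivalent checks of their own.

```python
# Rough check for unencrypted Linux volumes: walk lsblk's JSON output and
# flag mounted filesystems with no crypto_LUKS ancestor.
# Assumes lsblk supports --json and that LUKS is the encryption layer.
import json
import subprocess

def walk(device, under_luks=False, findings=None):
    """Collect (name, mountpoint) pairs for mounted, unencrypted volumes."""
    if findings is None:
        findings = []
    under_luks = under_luks or device.get("fstype") == "crypto_LUKS"
    if device.get("mountpoint") and not under_luks:
        findings.append((device["name"], device["mountpoint"]))
    for child in device.get("children", []):
        walk(child, under_luks, findings)
    return findings

out = subprocess.run(
    ["lsblk", "--json", "-o", "NAME,FSTYPE,MOUNTPOINT"],
    capture_output=True, text=True, check=True,
)
for dev in json.loads(out.stdout)["blockdevices"]:
    for name, mount in walk(dev):
        print(f"WARNING: {name} mounted at {mount} appears unencrypted")
```

Expect /boot and swap to show up in the output; swap in particular can hold fragments of whatever was in memory, so a warning there is not necessarily a false positive.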
The next most common loss vector is unauthorized access, which includes misdirected email, misdirected physical mail, computers left logged in, excessive permissions, and so on. No hacking here*, just mistakes and some poor operational practices. Notably, at least 100 incidents involved email, presumably misdirected. There are many subtle and common failure modes that can lead to this, some as basic as email address auto-completion. There likely is not a single best method to handle it: anything from an email DLP system that quarantines detected PHI transmissions for secondary review, to disabling email address auto-completion, may be appropriate, depending on how the organization operates. This is an incredibly easy way to make a big mistake, and it deserves some air time in your risk assessments.
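To make the DLP idea concrete, here’s a toy sketch of the kind of check such a system performs on outbound mail. Real products are far more sophisticated; the SSN and MRN patterns below are illustrative assumptions, not a production ruleset.

```python
# Toy DLP-style outbound check: hold any message whose body matches
# crude PHI patterns for secondary review. Patterns are illustrative only.
import re

PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def review_outbound(body):
    """Return the names of the PHI patterns found in an outgoing message."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(body)]

message = "Patient follow-up. MRN: 00482913. Please call to schedule."
hits = review_outbound(message)
print(f"Quarantined for review ({', '.join(hits)})" if hits else "Delivered")
```

Even this crude version would catch some auto-complete mishaps; the trade-off is the review queue it creates, which is exactly the kind of cost/benefit question a risk assessment should weigh.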
The above loss types make up roughly 1500 of the 1800 reported breaches.
Now we get into hacking. HHS’ data doesn’t offer much detail, but “network server” accounts for 170 incidents, which likely make up the majority of the situations we read about in the news. There are 42 incidents each involving email and PCs. Without more detail we don’t really know what happened, but we can infer that most PC-related PHI leaks came from malware of some form, and most network server incidents from some form of actual “hacking”. The Anthem incident, for example, was categorized as hacking on a network server, though the CHS breach was categorized as “theft”.
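If you want to reproduce this kind of breakdown, a quick cross-tab of hacking incidents by location works. As before, the column names here are assumptions based on the portal’s export, so verify them against your own download.

```python
# Cross-tab sketch: for hacking incidents, tally where the breached data lived.
# Column names assumed from the HHS portal's CSV export; verify before trusting.
import csv
from collections import Counter

locations = Counter()
with open("breach_report.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if "Hacking" in row["Type of Breach"]:
            for loc in row["Location of Breached Information"].split(","):
                locations[loc.strip()] += 1

for loc, count in locations.most_common():
    print(f"{count:5d}  {loc}")
```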
Dealing with the hacking category falls squarely into the “work is hard” bucket, but we don’t need new frameworks or new blog posts to figure out how to solve it. There’s a great document that already does this, and I’m sure you’re familiar with it: the CIS Top 20 Critical Security Controls.
But which of the controls are really important? They all are. To defend our systems, we need to know which systems contain PHI. We need to understand what applications are running, and prevent unauthorized code from running on devices that store or access PHI. We need to make sure people accessing systems and data are who they say they are. We need to make sure our applications are appropriately secured, our employees are trained, access is properly limited, and all of it is tested for weaknesses periodically. It’s the cost of doing business and keeping our name off the wall of shame.
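None of that requires exotic tooling to get started. As a trivially small sketch of the first control (knowing what you have), here’s a TCP sweep that finds hosts answering on a single port. The subnet and port are made-up examples, and a real inventory program would use proper discovery tooling rather than a script like this.

```python
# Trivial host-discovery sketch for a first-pass asset inventory:
# attempt a TCP connection to one port on each address in a /24.
# The subnet and port are illustrative examples, not recommendations.
import socket

def sweep(prefix="10.0.0", port=443, timeout=0.3):
    """Return the addresses in prefix.1-254 that accept a TCP connection."""
    live = []
    for host in range(1, 255):
        addr = f"{prefix}.{host}"
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                live.append(addr)
        except OSError:
            pass
    return live

for addr in sweep():
    print(f"responds on 443: {addr}")
```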
* Well, there appear to be a small number of miscategorized records in the “theft” category, including CHS and a few others involving malware on servers.