Cyber Security Awareness Month 2017

Cyber Security Awareness Month is a time when many organizations run their internal security awareness programs for employees, and a time when those of us in the security industry are encouraged to help raise awareness of cyber threats with friends and family using tools like SANS’ most excellent OUCH! newsletter.  While I think those are great things to do, I propose we consider some new traditions for CSAM.  To maximize security improvements, we should put effort into raising awareness in populations that are more significant points of leverage than rank-and-file employees.  These are communities that we typically do not consider primary targets for CSAM, such as:

  • IT staff, including developers, architects, engineers, network, database, and systems admins
  • Infosec staff
  • Internal audit staff

In my experience, the most common root causes and significant contributing factors of security incidents are poorly designed IT environments, built, implemented, and operated by people who don’t understand how technology can be, and indeed is being, abused. Let’s consider phishing for a moment. While training employees to recognize phishing emails is beneficial, it should be intuitive that, over time, people WILL fall for phishes occasionally, and blaming an ensuing breach on an employee’s failure to recognize a phish is not helpful. In addition to training employees to recognize phishing emails, we should also provide ongoing training to the IT staff who design and operate the mail, workstation, and network environments, so they understand how these attacks work, new techniques, prevention mechanisms, and detection strategies. Phishing, of course, is just an example; there is much to learn and stay on top of across the infosec spectrum.

I’m not aware of any such training that is readily available in the format I’m describing, so this is an aspirational idea, not necessarily something we can run out and implement tomorrow.  A good, and usually free, source for this material is security conference videos.  As much as I like them, though, they are not the most efficient means of getting an overview across a broad set of topics.  I do suspect we can tailor the content to roles.  For instance, developers and network administrators likely won’t benefit from the same types of information on attacks.

Training needs to be ongoing, too. Tactics and threat actors evolve. The continuing education model of certifications seems like a good avenue for keeping people accountable; however, the things counted as “continuing education” can be more than a bit dubious. Another trap to watch out for is “training” provided by infosec vendors, such as webinars, that are effectively just marketing vehicles for the vendors’ offerings. Remember that vendors are in the business of selling, and part of doing that is convincing us that a) we have a problem they can solve and b) they can solve our problem better/faster/cheaper than anyone else.

I am not proposing that we make these groups of people experts in offensive security tactics, but rather that we provide a periodic, up-to-date overview of how adversaries use those tactics so that our employees will be able to make more informed decisions when performing their jobs, in the same way that we expect regular awareness training to help an employee identify a phishing email.

Actually Preventing Ransomware

This post is in response to the abundance of unhelpful advice circulating on the Internet under click-baity titles such as “Top X ways to prevent ransomware in YOUR business!”.  This advice is intended for organizations rather than home users.

A common factor that leads to ransomware problems is failing to implement appropriate controls because of some perceived trade-off or ignorance of the risks.  I contend that organizations that make such trade-offs are not actually interested in preventing ransomware.  Many organizations and “thought leadership” articles approach ransomware prevention using a combination of awareness training, patching, up-to-date antivirus, and data backups.  But those of us who have been around the ransomware block a few times know that none of these things prevent modern ransomware attacks.

This list of controls is narrowly focused on ransomware – there are many other good controls I don’t mention below.

I ran a most unscientific poll on Twitter, and the results are a bit disturbing.  I did not expect the majority of security people responding to state that training can stop most or almost all ransomware.  In retrospect, the question may have been badly worded.  Measured purely by volume of ransomware, training may help, but I don’t expect that to hold if we consider the full spectrum of attack techniques and malware samples.

At a high level, we need to implement controls that:

  1. Block ransomware from getting to workstations.
  2. Block the execution of ransomware once it gets onto a workstation.
  3. Block propagation of the ransomware once it gets onto a workstation, assuming it executes.
  4. Mitigate the damage caused by ransomware once it gets onto a workstation, assuming it executes.

Controls to prevent ransomware from getting to workstations:

  • Block MS Office file attachments that contain macros (e.g., .docm, .xlsm) and other common executable and script file types in email (a filtering sketch follows this list).
  • Implement web filtering and block access to known malicious sites, sites for which there is little value for the business, and uncategorized sites.
  • Block non-company email and webmail using web filtering and blocking of common email ports at the firewall.
  • Implement ad blockers on web browsers.
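
To make the attachment-blocking control concrete, here is a minimal sketch in Python of the kind of filtering logic a mail gateway applies.  The extension blocklist and the `should_quarantine` helper are my own illustrative choices; in practice this policy belongs in your mail gateway or email security product (e.g., an Exchange transport rule), not in a homegrown script.

```python
# Sketch: flag inbound mail carrying macro-enabled Office documents or other
# executable/script attachments. Illustrative only; a production control
# belongs in the mail gateway, not a standalone script.
from email import message_from_bytes
from email.message import Message

# Hypothetical blocklist of risky attachment extensions.
BLOCKED_EXTENSIONS = {
    ".docm", ".xlsm", ".pptm",        # macro-enabled Office documents
    ".exe", ".scr", ".com", ".pif",   # executables
    ".js", ".jse", ".vbs", ".wsf",    # scripts
    ".hta", ".jar",
}

def should_quarantine(raw_message: bytes) -> bool:
    """Return True if any attachment filename ends with a blocked extension."""
    msg: Message = message_from_bytes(raw_message)
    for part in msg.walk():
        filename = part.get_filename()
        if not filename:
            continue
        lowered = filename.lower()
        if any(lowered.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False
```

A real gateway would also look inside archive attachments, since a .zip wrapping a .js file is a common dodge, and would quarantine rather than silently drop.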

Controls to prevent ransomware from executing on workstations:

  • Provide quality ongoing phishing training to employees, paired with phishing simulations.
  • Flag incoming email from the Internet as [external] or similar in the subject line, and incorporate this into the security training program.
  • Apply operating system and application patches promptly after ensuring the patch will not do more harm than good.
  • Ensure that all applications used on workstations are being managed (i.e., someone is responsible for deploying patches).
  • Disable MS Office macros on workstations.
  • Associate undesirable file types, such as .js and other script extensions, with notepad.exe (see the sketch after this list).
  • Implement application whitelisting to block execution of unknown applications.
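
One common way to implement the notepad.exe association above is to point the Windows Script Host ProgIDs (JSFile, VBSFile, and friends) at Notepad so a double-clicked script opens harmlessly instead of executing.  Below is a minimal sketch using Python’s winreg module; the ProgID list is an assumption based on common Windows defaults, and in a domain you would push this via Group Policy rather than a script.

```python
# Sketch: make double-clicked script files open in Notepad instead of the
# Windows Script Host. Windows-only, requires administrative rights; in a
# domain this would normally be done with Group Policy, not a script.
import winreg

# ProgIDs commonly associated with .js, .jse, .vbs, .vbe, .wsf, and .wsh files.
SCRIPT_PROGIDS = ["JSFile", "JSEFile", "VBSFile", "VBEFile", "WSFFile", "WSHFile"]

def neuter_script_handlers() -> None:
    for progid in SCRIPT_PROGIDS:
        key_path = rf"{progid}\Shell\Open\Command"
        try:
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, key_path, 0,
                                winreg.KEY_SET_VALUE) as key:
                # The default value of the command key controls what runs on double-click.
                winreg.SetValueEx(key, "", 0, winreg.REG_SZ,
                                  r'"C:\Windows\System32\notepad.exe" "%1"')
        except FileNotFoundError:
            # ProgID not present on this system; nothing to change.
            continue

if __name__ == "__main__":
    neuter_script_handlers()
```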

Controls to limit propagation of ransomware:

  • Disable SMBv1.
  • Block inbound SMB and RDP connections to workstations (a firewall sketch follows this list).
  • Block accounts with elevated privileges from logging into workstations.
  • Remove local administrator rights from general user accounts.
  • Implement port isolation on the network such that workstations cannot communicate with each other.
  • Use LAPS to configure a unique local administrator account password on each workstation.
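
For the inbound SMB/RDP item above, the sketch below adds Windows Firewall rules blocking inbound TCP 445 and 3389 on a workstation using the built-in `netsh advfirewall` command.  The rule names are made up for illustration; at scale this belongs in Group Policy or your endpoint management tooling, and servers that legitimately serve SMB or RDP obviously need different rules.

```python
# Sketch: block inbound SMB (445) and RDP (3389) on a workstation using the
# built-in Windows Firewall. Requires an elevated prompt; in a domain this is
# normally distributed via Group Policy rather than run per machine.
import subprocess

RULES = [
    ("Block inbound SMB (workstations)", "445"),
    ("Block inbound RDP (workstations)", "3389"),
]

def add_block_rules() -> None:
    for name, port in RULES:
        subprocess.run(
            [
                "netsh", "advfirewall", "firewall", "add", "rule",
                f"name={name}", "dir=in", "action=block",
                "protocol=TCP", f"localport={port}",
            ],
            check=True,  # raise if netsh reports an error
        )

if __name__ == "__main__":
    add_block_rules()
```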

Mitigating the damage caused by ransomware:

  • Monitor file servers and automatically deny permissions to accounts that create file types associated with ransomware-encrypted files.
  • Enable volume shadow copies.
  • Use a backup solution that supports file versioning on workstations and servers.  Periodically test backups with a restore.  Script an alert to detect if backups stop running for some reason (see the sketch after this list).
  • Minimize the number of mapped drives on workstations.
  • Restrict write access on network drives to those people who need to be able to save new versions of files.
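
For the backup alerting item, here is a minimal sketch of a scheduled freshness check: it finds the newest file under a backup directory and emails an alert if it is older than a threshold.  The directory path, SMTP relay, and addresses are placeholders; a real monitor would ideally query the backup product’s own API or job logs rather than inferring health from file timestamps.

```python
# Sketch: alert when backups appear to have stopped. Run from cron or Task
# Scheduler. Paths, addresses, and the SMTP relay are placeholders.
import smtplib
import time
from email.message import EmailMessage
from pathlib import Path

BACKUP_DIR = Path(r"D:\backups")          # placeholder backup destination
MAX_AGE_SECONDS = 26 * 60 * 60            # alert if no new backup in ~26 hours
SMTP_HOST = "smtp.example.internal"       # placeholder mail relay
ALERT_FROM = "backup-monitor@example.com"
ALERT_TO = "it-alerts@example.com"

def newest_backup_age() -> float:
    """Return the age in seconds of the most recently modified backup file."""
    files = [p for p in BACKUP_DIR.rglob("*") if p.is_file()]
    if not files:
        return float("inf")
    newest = max(p.stat().st_mtime for p in files)
    return time.time() - newest

def send_alert(age_seconds: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Backup freshness alert"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(
        f"Newest file in {BACKUP_DIR} is {age_seconds / 3600:.1f} hours old; "
        "backups may have stopped running."
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    age = newest_backup_age()
    if age > MAX_AGE_SECONDS:
        send_alert(age)
```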


I know there are many other good ideas that should be added to this list.  I would be grateful if you can add them below as a comment.  I am thinking about putting this on a publicly editable wiki, with the intention of providing an objective set of controls, and some explanation on the how and why for each.


Bloodletting and Ransomware

I just read this post on “How to protect your network from ransomware.” The post doesn’t contain advice that will prevent modern ransomware attacks, though. I do not intend to pick on the author or Network World; I know they are trying to help, and the advice is certainly sound general security hygiene.

Until about a hundred years ago, bloodletting was a pretty common medical treatment for many kinds of diseases. Looking back at it now, the practice is pretty disturbing and counterproductive. But at the time, the treatment appeared to work great. People were treated and either the bloodletting worked (i.e., they recovered) or it didn’t work (they died). Patients that recovered were held as evidence the treatment worked, and patients that died were simply considered to have been too far gone for anything to have helped.

I see a lot of the same faulty logic in security advice. No ransomware outbreak means the advice worked, and an outbreak is attributed to some issue so extraordinary that no advice could have helped. Attacks that successfully trick our fully phishing-awareness-trained staff and evade our antivirus applications are so cutting-edge that nothing we could have done would have prevented them anyway. Right?

Why don’t we write guides that contain advice on actually preventing ransomware attacks?

Major Cyber Attacks As A Source of Renewal

It is pretty well accepted that, while devastating, some types of natural disasters, such as forest fires, have the effect of allowing new life to take root and flourish.

I’ve often lamented how difficult it can be, particularly in larger organizations, to make significant security enhancements because of the costs involved and the requisite interruption of business operations.  We’ve now witnessed a number of pretty high-profile cases where the IT environments of organizations were all but destroyed and had to be rebuilt, such as with Saudi Aramco and Sony, and most recently with NotPetya’s effect on companies around the world.  I do not intend to minimize the devastation to these companies; however, these types of events seem similar to the forest fire analogy, providing an opportunity in the midst of disaster to make strategic improvements.

I wonder, though: can an organization take meaningful advantage of this bad situation?  In the aftermath of such an event, the priority is almost certainly on restoring functionality as quickly as possible, and the straightest line to get there is likely to implement things as they previously worked, perhaps with some slight adjustments to account for the perceived cause of the problems.  Many organizations have disaster recovery and business continuity plans, and some of those plans are starting to incorporate the concept of recovering from a “cyber disaster”; however, those plans all deal with getting back to operations quickly by recreating the existing functionality.  I think such plans may benefit from keeping a punch list of “things we would do differently if we could start over”.  We all have those lists, if only in our heads, and the utility of documenting such a list isn’t limited to these mega-bad recovery scenarios – it is also useful in normal planning cycles, technology refreshes, and so on.

What do you think?

The Trouble With Applying Kinetic Metaphors to Cyber

I was having a good debate with some of my twitter friends that started off with this tweet:

At one point in the discussion, @JeffWilsontech brought up safes:

That got me thinking.  In the info/cyber security world, we draw a lot of comparisons to the physical world.  Who hasn’t seen this image in a security presentation?

As humans, we learn by building associations with things we already know.  Kinetic-space security concepts are relatively easy to grasp and there are intuitive relationships between security in cyber space and security in kinetic space.  For example, a firewall is like the walls of a building, and openings through the firewall are like doors and windows.

My observation is that the intuitiveness of this analogy can lead us astray when we think about IT security defenses, though.  For example, consider safes, as Jeff mentioned above.  Commercial safes have a rating that denotes the amount of time they will resist attacks from picks and mechanical and electrical tools – usually less than an hour.  Attacks on safes generally require the adversary, even a knowledgeable one, to run through a time-consuming process to break into the safe.  The logical equivalent to think of here is some encrypted data, not network or system security.  To consider an equivalent attack on a safe, we would need to imagine an attacker, located anywhere in the world, almost instantly teleporting the safe to the business end of a 1,000-ton hydraulic press in an evil lair’s machine shop.  The safe is immediately popped open by the press, without regard for its security rating.

In the case of the walls being like a firewall, the building’s doors could all be locked from the outside, i.e., nothing is allowed in via the firewall.  However, people from all over the world are able to watch people coming and going from the building, and to hide in the briefcase of one of the building’s inhabitants while she is out for a walk.  Once the unsuspecting person is back inside the building, the intruders surreptitiously exit the briefcase and are now able to come and go from the building as they please.

These are pretty dull examples that I suspect are intuitive to most of you.  However, I see many people in the industry drawing parallels to kinetic-space constructs such as insurance, building codes, fire codes, and electrical codes as a means to improve security through regulation.  I am in the camp that believes security will not generally improve unless there is a regulatory obligation to do so.  The free market simply does not provide an incentive for organizations to either produce secure products or design and operate secure applications, systems, and networks.  The challenge with this approach is that it’s fundamentally incompatible with the current philosophy of IT systems design and operation and the threats to them.  Fire codes, for example, define a pretty objective set of requirements that address a broad swath of risks: sprinklers work pretty well at putting down a fire (almost) regardless of the cause.  Electrical codes seem conceptually similar to IT: anyone with some amount of electrical knowledge can wire up a building, similar to how anyone with some IT abilities can create an IT system.  From here, though, the two diverge.  There is a pretty rigid set of electrical standards, typically based on the National Electrical Code.  Municipal fire and electrical codes do not allow for much “innovation” in the way that is practiced in the IT world.

The “Underwriters Laboratories” approach to cyber security seems intuitively sensible, but we have to remember that it will necessarily have a negative impact on innovation in the IT product market, which I know many will not see as a bad thing.  It also does not address the consumer/integrator side of the equation, which I argue is where much of the problem comes from.  Then there are complicated questions about things like open source software, the Raspberry Pi, and so on.

Pairing an “Underwriters Laboratories” approach with a “National Cyber Code” would seem to provide a more secure world, but it would come with a pretty steep cost.

A significant headwind against this approach is, well, the whole economy of existing producers and consumers of IT products and services, consulting companies, integrators, and so on.  We can’t discount the influence these entities have on the regulatory process, even if to varying degrees in different countries.  Even in countries with very progressive data protection laws, we can see the desire for regulations to provide latitude in IT.  The GDPR, which in my view is the only regulation with the *potential* to drive major changes in security, is quite abstract in its data security obligations:

  1. Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

    1. the pseudonymisation and encryption of personal data;
    2. the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
    3. the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
    4. a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
  2. In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.

  3. Adherence to an approved code of conduct as referred to in Article 40 or an approved certification mechanism as referred to in Article 42 may be used as an element by which to demonstrate compliance with the requirements set out in paragraph 1 of this Article.

  4. The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law.

The problem with wording such as this is that it doesn’t define what is actually needed to protect data – that is left to the data controller and data processor to decide, presumably so they can each continue to “innovate” in their respective IT programs.  Now, I also know one of the objectives of this particular wording is to help the regulation remain relevant over time.  A regulation that is too specific may quickly become out of date due to the emergence of new threats.

Finally, another challenge I see with the “UL + NEC” approach is that the prescriptive IT regulations that do exist, such as HIPAA and PCI DSS*, haven’t proven to be very effective at protecting data, since organizations that have been certified as compliant do end up being breached occasionally.  But then again, we see structures with NEC-compliant electrical systems and UL-reviewed appliances burn down periodically, too.

It seems to me that another component of the solution, at least for the question of protecting consumer data, is to limit the use of such data, as the GDPR also does, and to reduce the value of that data – in economic terms, hitting both the supply side (data-hoarding companies) and the demand side (data-thieving criminals).  For example, credit card data is valuable because it can be used to steal money.  What if we changed the way credit worked so that it did not rely on a static number that can be stolen?  Clearly that can’t work for everything, but sucking the value out of stealing personal data would go a long way.

* Yes, I know PCI DSS is not a regulation.


Non-traditional Sources of Vendor Risk

NotPetya seemed to be a pretty rude awakening for some organizations – realizing that vendors and business partners previously thought to be “benign” can be the source of significant risk.  This should not be surprising after the Target and Home Depot breaches.  The initial distribution mechanism of NotPetya was through auto-updates to business software.  We know that NotPetya propagated on a network using a few different tactics.  A number of organizations were infected with no apparent connection to the original distribution mechanism, meaning that the infection very likely propagated through network connections between organizations.

One of the fundamental challenges we seem to have in cyber security is a lack of imagination – imagination for how attacks can happen.  As is the case in so many things, we seem to be stuck fighting yesterday’s problem.  After Target and Home Depot, we started interrogating our HVAC vendors pretty hard, presumably DOUBLING or TRIPLING the number of security-related questions on our vendor management questionnaires.  Possibly the issue here is that each organization needs to learn the lesson for itself and the situation really is improving in aggregate, but I am growing cynical in my old age.  It seems that we are not hitting the problem head on, instead choosing to “accept risks” that we choose not to understand.

Certainly a big headwind is the extreme complexity of IT environments, though I am not sure that means we should just default to “well, we followed ISO 27k” (a.k.a. sticking our heads in the sand).  It seems that a better solution would be to break the problem up into “trust-able components” with reliable, predictable demarcation, and to limit the trust between systems and networks.

Is there any reason – any at all – that a malicious update to some Ukrainian tax software should end up infecting unrelated subsidiaries and parent companies in other countries, or hospitals in the US?

One of the issues I see with such a strategy is that it necessarily causes IT to cost more, almost regardless of how it’s implemented.  But without it, we end up with interconnections that criminals, nation-states, and others can leverage for mass destruction.  Particularly interesting to me is that the risk decisions of one organization can impact many, many organizations downstream – both from a “cyber contagion” perspective and from a simple economic perspective, if we consider the effects NotPetya had on global shipping and WannaCry had on the delivery of health care.


Reflecting on the Need For Cyber Resilience

The recent NotPetya attacks disrupted the IT operations of many firms worldwide, and as I write this, the Internet is littered with media reports of companies still struggling to return to operations nearly two weeks after the outbreak.  This style of attack is not new: we saw it in the Shamoon attack on Saudi Aramco, in the Dark Seoul attack in South Korea, in the attack on Sony, and most recently in WannaCry and now NotPetya.  I will readily admit that if you become the target of a government operation, the outcome is nearly assured no matter what you do to prepare.  NotPetya, though, should highlight to us that we don’t necessarily have to become collateral damage in a “cyber war” between countries if we design and operate our systems appropriately.

IT security, at its core, is a set of trade-offs, though.  Some are good, some bad, the implications of some are understood, but often they are not.  I guess the best way to think about it is “we don’t go to cyber war with the network we want; we go to cyber war with the network we have”.  Recognizing that and the rapidly evolving techniques used by the more advanced adversaries, as well as the terrible track record those advanced adversaries have of keeping those tools and techniques secret, we need to recognize the need for cyber resilience.   I recognize the term “cyber resilience” is not going over well with many of you, but I don’t currently have a better term for the concept.  I believe it is important to distinguish cyber resilience from traditional disaster recovery programs.  I work a lot with US-based banks and US banking regulators that are part of the FFIEC.  A lot of my thinking has been shaped by the programs and guidelines the FFIEC has established in recent years regarding cyber resilience, and reflecting on that, I see many linkages between their guidance and these recent destructive attacks.

Many organizations view disasters as purely geographical events, and their programs are designed to address that: off-site backups; hot, warm, or cold systems in remote data centers; and so on.  These make good sense when the threat being mitigated is a fire, flood, tornado, or volcano.  But cyber attacks happen orthogonally to location – along technology boundaries rather than geographic boundaries.  A great example was Code Spaces, which operated its environment in AWS.  Code Spaces promised a highly fault-tolerant environment, replicating data across multiple AWS data centers on multiple continents.  Sadly, when their AWS keys were stolen, attackers deleted all traces of Code Spaces’ data from all those redundant data centers.  In the recent NotPetya attacks, a number of victims had their entire server environments wiped out, including replicated systems, geographically distributed failover systems, and so on.  Consider what would happen to an organization’s geographically distributed Active Directory infrastructure during a NotPetya outbreak.  There is no restoration; only starting over.

Maybe starting over is good, but I’m guessing most victims impacted in that way would rather plan the project out a bit better.

That takes me back to cyber resilience.  I recognize that most of us don’t see our organizations as being in the line of fire in a cyber war between Russia and the Ukraine.  I am sure the three US hospitals hit by NotPetya certainly didn’t consider themselves in that firing line.  It is hard to predict the future, but it seems like a safe bet that we are going to see more attacks of the Dark Seoul and NotPetya variety, not fewer.  And as time goes on, we are all becoming interconnected in ways that we may not really understand.  If your organization is sensitive to IT outages in its operations, I would recommend putting some focus on the concept of cyber resilience in your strategic plans.  At the risk of offending my friends in the banking industry, I’d recommend taking a look at the FFIEC’s business continuity planning material.  It has some good ideas that may be helpful.

NotPetya, Complex Attacks, and the Fog of War

I cannot recall a previous widespread incident that created confusion and misdirection the way NotPetya did.  I want to use this post to examine a bit of what happened and what we can learn.

On the morning of June 27, Twitter was abuzz with discussions about a new variant of the Petya ransomware spreading everywhere.  Early reports indicated that Petya was being introduced into networks via infected email attachments.

I strongly suspect that at least some of the organizations affected by the outbreak were making a connection that likely turned out to be coincidental rather than causal.  If I see evidence that someone received a suspicious email attachment – something that happens all day, every day in a large company – and then suddenly that computer reboots and begins locking itself up, I suspect most of us would draw that same conclusion, and because it fits so neatly into our daily experience in defending the network, convincing us otherwise can be difficult.  I do not know what, if any, net effect this misdirection may have had on the overall NotPetya story, but it seems likely that there were at least some security teams spending time locking down email to prevent becoming a victim.

As it turns out, NotPetya was introduced to victim networks via the update process of the ME Doc tax software in widespread use in the Ukraine, leveraging the compromised infrastructure of Intellect Service, which makes ME Doc.  There are, however, some outliers, such as the three hospitals in the US that were infected.  There is no word on how hospitals in the US came to be infected with seemingly no tie to the ME Doc software.  My best guess is that the malware propagated via connections to other entities that did use ME Doc.  Merck, for example, was one of the companies infected.  I can envision a number of possible scenarios where an infection at a vendor propagates to a hospital in the US.  For example, a Merck salesperson may have been visiting a hospital and VPN’d back to the mothership when her computer was infected and began spreading locally within the hospital network.  Or maybe a VPN or other remote access connection that Merck uses to monitor equipment or inventory, or something else.  I want to emphasize, by the way, that I use Merck here for the sake of argument – I have no idea if they were in any way involved in spreading to these hospitals, and even if they were, they were also a victim.

Discussions throughout the day on June 27 focused on the new Petya variant’s use of the ETERNALBLUE vulnerability/exploit to propagate within an organization.  That turned out to be true, but the focus on this aspect of the malware likely detracted from the bigger picture.  Many organizations, no doubt including those that were, or would soon be affected, were likely scrambling to track down systems missing the MS17-010 patch, and grilling sysadmins on why they neglected to patch.  Reports by that afternoon, however, indicated that fully patched systems were being infected.  We now know that ETERNALBLUE was just one of the mechanisms used to propagate, and that NotPetya included code from mimikatz to pull credentials from memory on infected systems, and a copy of psexec to run commands on other systems on the local network using the gathered credentials.  At the time, however, security advice being thrown around was essentially that which helped prevent WannaCry.  We were fighting the last war, not the current one.  Rather than address the crux of the problem, which included password reuse across systems, excessive privileges, and so on, we saw, and continue to see, advice that includes blocking ports 139 and 445 at the firewall, among other unhelpful nuggets.  Those recommendations are not wrong generally, but were not helpful for this case.  I tried to round up the things that do help here.

Days later, security companies started proclaiming that the Petya outbreak was definitely not really Petya, only loosely based on Petya, and not intended as a ransomware attack at all, but rather a nation-state attack against the Ukraine.

We focused heavily on the ransomware/system wiping aspect of this outbreak.  Many organizations rebuilt and restored many systems wiped by NotPetya.  Some victims, including one of the hospitals mentioned, decided to start over and buy all new systems.  Finally, and possibly most significantly, the latest news is that the adversary behind the NotPetya outbreak had compromised the update server of Intellect Service and likely had the ability to remotely control and collect information from the systems of many thousands of ME Doc users.

This episode highlights, to me at least, the need to keep a clear head during an incident and to be open to revising our understanding of what is happening and what our response should be.

Designing a Defensible Network

2017 has been an eventful year so far in the information security world, with the return of the network worm used in apparent nation-state attacks.  With the most recent attack, alternatively known as Petya and NotPetya, among other names, the focus among many in the industry, particularly early on, was on the fact that it spread via EternalBlue, and on whether or not an infection in a company indicated bad patch hygiene.  Much debate continues to rage over the initial method of infection, with some reliable sources indicating that the malware was seeded into Ukrainian companies through an update to the ME Doc tax application, and others indicating that it was delivered via a malicious email attachment.

These debates seem to miss the larger point.  While it’s interesting to know how a particular threat like WannaCry or [Not]Petya initially entered a network and the means by which it propagated between systems, there are many possible ways for the next threat to enter, and many ways for it to propagate.  Implementing tactical changes to defend against yesterday’s threat may not be the best plan.

On the Defensive Security Podcast, we poke fun at the blinky box market place, and I particularly rail on Active Directory.  I believe the WannaCry and [Not]Petya outbreaks exemplify the core concerns that we are trying to convey: as an industry, we seem to have gotten away from core principles, like least privilege, and are looking to stack supplementary security technology on top of ill-designed IT systems.  Our security technology mainstays, like antivirus and IPS, are constantly chasing the threats.   Those technologies are wholly ineffective against broadly and rapidly propagating worms, though.  Hopefully, these recent events cause a rethink in fundamental security strategies, rather than a search for the next technology that promises to deliver us from the perils of worms.

Having said that, here are a few fundamental, completely unsexy, things we can do to mitigate these types of attacks in the future:

  1. Use unique local administrator passwords on every endpoint.
  2. Disable network logins for local administrators on every endpoint.
  3. Implement properly designed, limited, and segmented Active Directory permissions.
  4. Implement the secure administrator workstation concept.
  5. Implement network port isolation.
  6. Block connections to Windows services for all systems to and from the Internet.
  7. Disable unused protocols, including SMBv1, and continue to monitor for newly deprecated protocols (an audit sketch follows this list).
  8. Apply patches quickly.
  9. Remove local administrator permissions.
  10. Implement application whitelisting.

All recommendations like these have a shelf life.  That is why we need smart people who monitor the threat landscape.  If we do a good job of preventing the basic tactics, the adversary will inevitably move to more complex methods.

Nightmare on Petya Street

Just some notes for myself that others may also find useful:

Initial propagation allegedly via ME Doc auto-updates, though the vendor denies it

Image posted on twitter, attribution intentionally missing:

https://twitter.com/thedefensedude/status/879764193913716737

Good write up by Brian Krebs indicating how the malware obtained credentials to propagate
Mitigations:

Create C:\Windows\perfc.dat and make it read-only (sketch follows the link below):

https://twitter.com/hackingdave/status/879779361364357121
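
Here is a minimal sketch of that “vaccine” step in Python; the additional perfc and perfc.dll filenames reflect some write-ups and may not be strictly necessary, and the whole thing is a stopgap specific to this malware family rather than a general control.

```python
# Sketch: create the read-only "vaccine" file NotPetya reportedly checks for
# before running. Requires administrative rights; specific to this malware,
# not a general control.
import os
import stat
from pathlib import Path

# Some write-ups also recommend "perfc" and "perfc.dll" alongside perfc.dat.
VACCINE_FILES = ["perfc.dat", "perfc", "perfc.dll"]

def create_vaccine(windows_dir: str = r"C:\Windows") -> None:
    for name in VACCINE_FILES:
        path = Path(windows_dir) / name
        path.touch(exist_ok=True)
        # Set the file read-only (on Windows, os.chmod toggles the read-only attribute).
        os.chmod(path, stat.S_IREAD)

if __name__ == "__main__":
    create_vaccine()
```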

Apply MS17-010 and disable admin$ shares via GPO

After reboot, the system appears to be running chkdsk, but this is actually files being encrypted.  Shut the system down immediately if that happens, to enable file recovery using a boot disk.