Behavioral Economics and Information Security

I recently finished reading Dan Ariely’s “Predictably Irrational” and the other books in his series on behavioral economics, which examine the impact of cognitive biases on behavior and decision making.  The lessons from behavioral economics seem, to me at least, to have significant implications for information security, and I was a bit surprised at the apparent lack of study around this linkage.  Maybe it shouldn’t be all that surprising.  One paper I did find, “Information Security: Lessons from Behavioural Economics” by Michelle Baddeley, focuses on the impact of cognitive biases on decisions involving privacy, social media security, and so on.  The point of the paper is to illustrate the need to factor lessons from behavioral economics into the design of security policies and regulations: policies and regulations should recognize the influence of cognitive biases, emotions, and limited information, rather than assuming that people have equal access to facts and make economically rational decisions.

There seems to be another important angle to consider: the impact of limited information, cognitive biases, and the associated psychological factors on the decisions made by those of us working to defend organizations.  This is an uncomfortable area to tread.  As security people, we are apt to talk about the foibles of the common user, debating whether we can train them to avoid security pitfalls or whether it’s a lost cause and our only real hope is building systems that don’t rely on people recognizing and avoiding threats.

I spend a lot of time thinking about the causes of breaches: both those that I’m involved in investigating and those that are documented in the media.  I can see indicators that at least some breaches likely stem from the same kinds of cognitive problems described by behavioral economics.

For instance, a common error which has resulted in a number of significant breaches is a very basic network architecture mistake: not recognizing that a particular configuration enables a relatively straightforward and quite common method of moving about a network.

The reasons why this happens are fascinating to me.  Clearly, I don’t know with certainty why these errors were made in most cases, but the possible reasons are interesting in their own right.

At the end of the day, we need to be efficient and effective with our information security programs.  I can look at strategic information security decisions I have made and see the influence of some biases which are plainly described in Mr. Ariely’s research.  I expect this will be the beginning of a series of posts as I start to delve more deeply into the topic.  In the meantime, I am very curious to hear whether others have already thought about this and what conclusions might have been drawn.

Some recommended reading:

Dan Ariely’s Irrational bundle

Douglas Hubbard’s How To Measure Anything and The Failure of Risk Management

The Elephant In The Room With Preventing DDOS Attacks

DDOS attacks have been a regular fixture in infosec news for some time now. Primarily, those attacks have used open DNS resolvers, though recently NTP flared up as a service of choice. The community dramatically reduced the number of NTP servers susceptible to being used in DDOS attacks in a pretty short amount of time. However, both open resolvers and NTP continue to be a problem, and other services, such as SNMP, are likely to be targeted in the future.

One common theme is that these services are UDP-based, and so it’s trivial to spoof a source IP address and get significant traffic amplification directed toward the victims of these DDOS attacks.

While I think it’s necessary to focus on addressing the open resolver problem, NTP and similar issues, I’m very surprised that we, as a community, are not pushing to have ISPs implement a very basic control that would dramatically restrict these kinds of attacks: simple source address egress filtering.

Yes, this likely puts additional load on routers or firewalls, but it’s pretty basic hygiene to not allow packets out of a network with a source address in a range that the ISP does not announce. I am sure there are some edge case exceptions, such as with asymmetric routing, but those should be manageable between the customer and the ISP.
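To make the idea concrete, here is a minimal sketch in Python (using the standard ipaddress module) of the check an ISP edge device would conceptually perform: forward an outbound packet only if its source address falls within a prefix the ISP actually announces. The prefixes and addresses below are invented for illustration; in practice this check is done by the router itself (for example via strict uRPF or ACLs), not in application code.

```python
import ipaddress

# Hypothetical prefixes this ISP announces (originates) for its customers.
ANNOUNCED_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def should_forward(src_ip: str) -> bool:
    """Conceptual egress filter: only forward packets whose source
    address belongs to a prefix we actually announce."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in ANNOUNCED_PREFIXES)

if __name__ == "__main__":
    print(should_forward("198.51.100.25"))  # True: legitimate customer source
    print(should_forward("192.0.2.77"))     # False: spoofed source, drop it
```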

So, each time we hear about a DDOS attack and ponder the pool of poorly configured DNS servers, I propose that we should also be pondering the ISPs who allow traffic out of their networks with a source address that is clearly spoofed.

Lessons From The Neiman Marcus Breach

Bloomberg released a story about a forensic report from Protiviti detailing the findings of their investigation into the Neiman Marcus breach. There are very few details in the story, but what is included is quite useful.

First, Protiviti asserts that the attackers who breached Neiman do not appear to be the same as those who breached Target. While this doesn’t seem like a major revelation to many of us, it does point out that there are numerous criminals with the ability to successfully pull off such attacks. And from this, we should consider that these “sophisticated” attacks are not all that hard to perpetrate given the relative reward.

Next, Protiviti was apparently unable to determine the method of entry used by the attackers. While that is unfortunate, we should not solely focus on hardening our systems against initial attack vectors, but also apply significant focus to protecting our important data and the systems that process and store that data. Criminals have a number of options to pick from for initial entry, such as spear phishing and watering hole attacks. We need to plan for failure when we design our systems and processes.

The activities of the attackers operating on Neiman systems apparently created nearly 60,000 “alerts” during the time of the intrusion. It is very hard to draw specific conclusions because we don’t actually know what kind of alerts are being referenced. I am going to speculate, based on other comments in the article, that the alerts came from anti-virus or application whitelisting:

…their card-stealing software was deleted automatically each day from the Dallas-based retailer’s payment registers and had to be constantly reloaded.

…the hackers were sophisticated, giving their software a name nearly identical to the company’s payment software so that any alerts would go unnoticed amid the deluge of data routinely reviewed by the company’s security team.

The company’s centralized security system, which logged activity on its network, flagged anomalous behavior of a malicious software program though it didn’t recognize the code itself as malicious or expunge it, according to the report. The system’s ability to automatically block the suspicious activity it flagged was turned off because it would have hampered maintenance, such as patching security holes, the investigators noted.

The 59,746 alerts set off by the malware indicated “suspicious behavior” and may have been interpreted as false positives associated with legitimate software.

However, some of these comments are a bit contradictory. For instance:

payment registers and had to be constantly reloaded

And

it didn’t recognize the code itself as malicious or expunge it

In any event, a key takeaway is that we often have the data we need to detect that an attack is underway.
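To illustrate one way the reported naming trick could be surfaced, here is a small, hypothetical sketch using Python’s difflib to flag process names that closely resemble, but do not match, a known-good payment binary. The names and the similarity threshold are assumptions for the example, not anything taken from the actual investigation.

```python
from difflib import SequenceMatcher

# Hypothetical names: one legitimate payment binary and some observed process names.
KNOWN_GOOD = {"paymentsvc.exe"}
OBSERVED = ["paymentsvc.exe", "paymentsvcs.exe", "explorer.exe"]

def near_miss(name: str, known_good: set, threshold: float = 0.85) -> bool:
    """Flag names that closely resemble a known-good name without matching it."""
    if name in known_good:
        return False
    return any(SequenceMatcher(None, name, good).ratio() >= threshold
               for good in known_good)

for proc in OBSERVED:
    if near_miss(proc, KNOWN_GOOD):
        print(f"Suspicious look-alike process name: {proc}")
```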

Next is a comment that highlights a common weakness I covered in a previous post:

The server connected both to the company’s secure payment system and out to the Internet via its general purpose network.

Servers that bridge network “zones”, as this Neiman server apparently did, are quite dangerous, and their exploitation is a common trait of many breaches. Such systems should be eliminated.

Finally, a very important point from the story to consider is this:

The hackers had actually broken in four months earlier, on March 5, and spent the additional time scouting out the network and preparing the heist…

This should highlight for us the importance of a robust ability to detect malicious activity on our networks and systems. While some attacks will start and complete before anyone could react, many of these larger, more severe breaches tend to play out over a period of weeks or months. This has been highlighted in a number of industry reports, such as the Verizon DBIR.

One Weird Trick To Secure Your PCs

Avecto released a report which analyzed recent Microsoft vulnerabilities and found that 92% of all critical vulnerabilities reported by Microsoft were mitigated when the exploit attempt happened on an account without local administrator permissions. Subsequently, there has been a lot of renewed discussion about removing admin rights as a mitigation for these kinds of vulnerabilities.

Generally, I think it’s a good idea to remove admin rights if possible, but there are a number of items to think about which I discuss below.

First, when a user does not have local administrator rights, a help desk person will generally need to remotely perform software installs or other administrative activities on the user’s behalf. This typically involves a support person logging on to the PC using some manner of privileged domain account which has been configured to have local administrator rights on the PCs. Once this happens, a cached copy of the login credentials used by the support staff is saved to the PC, albeit in hashed form. Should an attacker be able to obtain access to a PC using some form of malware, she may be able to recover the password from the hash by brute force or use a pass-the-hash attack, either of which would grant the attacker far broader permissions on the victim organization’s network than a standard user ID would. Additionally, an attacker who already has a presence on a PC may use a tool such as mimikatz to directly obtain the plain text password of the administrative account.

You might be thinking: “But if I remove administrator rights, attackers would be very unlikely to gain access to the PC in a manner that lets them steal hashes or run mimikatz, both of which require at least administrator-level access. What gives?”

That is a good question which dovetails into my second point. The Avecto report covers vulnerabilities which Microsoft rates as critical. However, most local privilege escalation vulnerabilities I could find are rated only Important by Microsoft. This means that even if you don’t have administrator rights, if I can trick you into running a piece of code of my choosing, such as one delivered through an email attachment or through a vulnerability in another piece of code like Java, Flash Player or a PDF reader, my code will initially run with your restricted permissions, but it can then leverage a privilege escalation flaw to obtain administrator or system privileges. From there, I can steal hashes or run mimikatz. Chaining exploits in attacks is not all that uncommon any longer, and we shouldn’t consider this scenario to be so unlikely that it isn’t worth our attention.

I’ll also point out that many organizations don’t quickly patch local privilege escalation flaws, because they tend to carry a lower severity rating and they intuitively seem less important to focus on, as compared to other vulnerabilities which are rated critical.

Lastly, many of the high profile, pervasive breaches in recent history heavily leveraged Active Directory by means of credential theft and subsequent lateral movement using those stolen credentials. This means that the know-how for navigating Active Directory environments through credential stealing is out there.

Removing administrator rights is generally a prudent thing to do from a security standpoint. A spirited debate has been raging for years about the economics: does removing administrator rights cost money, in the form of additional help desk staff who now have to perform activities users used to do themselves and the related productivity loss for users who now have to call the help desk? Or is it a net savings because there are fewer malware infections, fewer misconfigurations by users, lower incident response costs, and associated higher user productivity? Or do those two factors simply cancel each other out? I can’t add a lot to that debate, as the economics are going to be very specific to each organization considering removing administrator rights.

My recommendations for security precautions to take when implementing a program to remove admin rights are:
1. Prevent Domain Administrator or other accounts with high privileges from logging into PCs. Help desk technicians should be using a purpose-created account which only has local admin rights on PCs, and systems administrators should not be logging in to their own PCs with domain admin rights.
2. Do not disable UAC.
3. Patch local privilege escalation bugs promptly.
4. Use tools like EMET to prevent exploitation of some 0-day privilege escalation vulnerabilities.
5. Disable cached passwords if possible, noting that this isn’t practical in many environments (see the sketch after this list).
6. Use application whitelisting to block tools like mimikatz from running.
7. Follow a security configuration standard like the USGCB.
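For item 5, here is a minimal, Windows-only sketch, assuming Python with the standard winreg module, that reports how many domain logons the OS is configured to cache locally via the CachedLogonsCount value under the Winlogon registry key. Setting that value to 0 disables caching, but it breaks offline logons for laptop users, which is why this often isn’t practical.

```python
import winreg  # Windows-only standard library module

WINLOGON = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

def cached_logons_count() -> int:
    """Read how many domain logons Windows will cache locally."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON) as key:
        value, _type = winreg.QueryValueEx(key, "CachedLogonsCount")
        return int(value)  # stored as a REG_SZ string; the default is "10"

if __name__ == "__main__":
    count = cached_logons_count()
    if count > 0:
        print(f"This PC caches the last {count} domain logons locally.")
    else:
        print("Cached domain logons are disabled on this PC.")
```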

Please leave a comment below if you disagree or have any other thoughts on what can be done.

H/T @lerg for sending me the story and giving me the idea for this post.

Excellent Paper Prioritizing Security Controls To Mitigate Intrusions

The Australian Defence Signals Directorate released a paper that prioritizes mitigation techniques by effectiveness. Even better, they provide subjective assessments of user resistance and of the upfront and ongoing costs for each mitigation strategy.

I think it is quite telling that the most effective control is application whitelisting.

H/T to @Lerg for finding this.

What The Target Breach Can Teach Us About Vendor Management

A recent report by Brian Krebs identified that Fazio Mechanical, the HVAC company that was compromised and used to attack Target, was breached through an “email attack” which allegedly stole Fazio’s credentials.

In my weekly security podcast, I rail pretty hard on workstation security, particularly for those systems which have access to sensitive systems or data, since attacking the workstation has become a common method for our adversaries. And hygiene on workstations is generally pretty terrible.

However, I am not going to pick on that aspect in this post. I want to explore this comment in Krebs’ story:
“But investigators close to the case took issue with Fazio’s claim that it was in full compliance with industry practices, and offered another explanation of why it took the Fazio so long to detect the email malware infection: The company’s primary method of detecting malicious software on its internal systems was the free version of Malwarebytes Anti-Malware.”

This assertion seems to be hearsay from some unknown sources rather than established fact, although Krebs’ sources have tended to be quite reliable and accurate in the past. I am going to focus on a common problem I’ve seen which is potentially demonstrated by this case.

I am not here to throw rocks at Target or Fazio. Both are victims of a crime, and this post is intended to be informative rather than making accusations, so I will go back to the fictitious retailer, MaliciousCo, for this discussion. We know that MaliciousCo was compromised through a vendor who was itself compromised. As I described in the last post, MaliciousCo has a robust security program, which includes vendor management. Part of the vendor management program includes a detailed questionnaire which is completed annually by vendors. A fictitious cleaning company, JanitorTech, was compromised and led to the breach of MaliciousCo.

Like Fazio, JanitorTech installed the free version of Malwarebytes (MBAM) on its workstations, and an IT person would run it manually on a system if a user complained about slowness, pop-ups or other issues. When MaliciousCo would send out its annual survey, the JanitorTech client manager would come to a question that read: “Vendor systems use anti-virus software to detect and clean malicious code?” and answer “yes” without hesitation, because she sees the MBAM icon on her computer desktop every day. MaliciousCo saw nothing particularly concerning in the response; all of JanitorTech’s practices seemed to align well with MaliciousCo policies. However, there is clearly a disconnect.

What’s worse is that MaliciousCo’s vendor management program seems to be oblivious to the current state of attack techniques. The reliance on anti-virus for preventing malicious code is a good example of that.

So, what should MaliciousCo ask instead? I offer this suggestion:
– Describe the technology and process controls used by the vendor to prevent, block or mitigate malicious code.

I have personally been on both sides of the vendor management questionnaire over the years. I know well how a vendor will work hard to ‘stretch’ the truth in order to provide the expected answers. I also know that vendor management organizations are quick to accept the answers given, without much evidence or inspection. Finally, I have seen that vendor management questionnaires, and the programs behind them, tend not to get updated to incorporate the latest threats.

This should serve as an opportunity for us to think about our own vendor management programs: how up-to-date they are, and whether there is room for the kind of confusion demonstrated in the JanitorTech example above.

What The Target Breach Should Tell Us

Important new details have been emerging about the Target breach. First came news that Fazio Mechanical, an HVAC company, was the avenue of entry into the Target network, as reported by Brian Krebs.

This started a firestorm of speculation and criticism: that Fazio was remotely monitoring or otherwise accessing the HVAC units at Target stores, that Target connected those HVAC units to the same networks as POS terminals, and that, by extension, Target was not complying with the PCI requirement for two-factor authentication for access to the environment containing card data, as evidenced by Fazio’s stolen credentials leading to the attackers having access to the POS networks.

Fazio Mechanical later issued a statement indicating that they do not perform remote monitoring of Target HVAC systems and that “Our data connection with Target was exclusively for electronic billing, contract submission and project management.”

In a previous post on this story, I hypothesized that the method of entry was a compromised vendor with access to a partner portal, with the attacker leveraging this access to gain a foothold in the network. Based on the description of access in Fazio Mechanical’s statement, this indeed appears to be exactly what happened.

We still do not know how the attacker used Fazio’s access to Target’s partner systems to gain deeper access into Target’s network. Since the point of this post is not to speculate on what Target did wrong, but rather what lessons we can draw from current events, I will go back to my own hypothetical retail chain, MaliciousCo (don’t let the name fool you, MaliciousCo is a reputable retailer of fine merchandise). As described in my previous post, MaliciousCo has an extranet which includes a partner portal for vendors to interact with MaliciousCo, such as submitting invoices, processing payments, refunds and work orders. The applications on this extranet are not accessible from the Internet and require authenticated VPN access for entry. MaliciousCo’s IT operation has customized a number of applications used for conducting business with its vendors. Applications such as this are generally not intended to be accessible from the Internet and often don’t get much security testing to identify common flaws, and where security vulnerabilities are identified, patches can take considerable time for vendors to develop and even longer for customers to apply. In MaliciousCo’s case, the extranet applications are considered “legacy”, meaning there is little appetite and no budget to invest in them, and because they were highly customized, applying security patches for the applications would take a considerable development effort.

Now, MaliciousCo has a robust security program which includes requirements for applying security patches in a timely manner. MaliciousCo’s IT team assessed the risk posed by not patching these applications and determined the risk to be minimal because of the following factors:

1. The applications are not accessible from the Internet.
2. Access to the extranet is limited to a set of vendors who MaliciousCo’s vendor management program screens for proper security processes.
3. There are a number of key financial controls outside of these applications that would limit the opportunity for financial fraud. An attacker couldn’t simply gain access to the application and start submitting invoices without tripping a reconciliation control point.
4. The applications are important for business, but down time can be managed using normal disaster recovery processes should some really bad security incident happen.

Given the desire to divert IT investment to strategic projects and the apparently small potential for impact, MaliciousCo decides against patching these extranet applications with the same rigor its other Internet-accessible applications receive. Subsequently, MaliciousCo experiences a significant compromise when an attacker hijacks the extranet VPN account of a vendor. The attacker identifies an application vulnerability which allows a web shell to be uploaded to the server. The attacker then exploits an unpatched local privilege escalation vulnerability in the Windows OS which hosts the extranet application and uses these privileges to collect cached Active Directory credentials for logged-in administrators using a combination of mimikatz and JtR. While the extranet is largely isolated from other parts of the MaliciousCo network, certain network ports are open to internal systems to support functionality like Active Directory. From the compromised extranet application server, the attacker moves laterally, first to an extranet domain controller, then to other servers in the internal network environment. From here, the attacker is able to access nearly any system in the MaliciousCo environment: creating new Active Directory user IDs, establishing alternative methods of access into the MaliciousCo network using reverse shell remote access trojans, distributing malware en masse to MaliciousCo endpoints, collecting and exfiltrating data, and so on.

MaliciousCo didn’t fully understand the potential impacts resulting from a compromise of its extranet applications when evaluating the security risks associated with those applications.

We don’t know what happened yet in the case of Target, and MaliciousCo is just a story. But this scenario has apparently played out at organizations like DigiNotar, the State of South Carolina and many others.

Why does this happen?

In my view, the problem is largely a failure to understand the capabilities and common tactics of our adversaries, along with an incomplete understanding of the interplay within and between complex IT systems, Active Directory in particular. I intently follow the gory details of publicly disclosed breaches, and it is clear to me that attackers are following a relatively common methodology which often involves:
– gaining initial entry through some mechanism (phishing, web app vulnerability, watering hole)
– stealing credentials
– lateral movement via systems which have connectivity with each other using stolen credentials
– establishing a ‘support infrastructure’ inside the victim network
– establishing persistence on victim systems
– identifying and compromising targets using stolen or maliciously created credentials, or by hijacking standard management tools employed by the victim
– exfiltration (or other malicious action)

While we don’t know the details of what happened in the case of Target, it seems quite clear that the attacker was able to laterally move from a partner application server onto networks where POS terminals reside. The specific means by which that happened are not clear and indeed we may never know for sure.

I believe that we, as defenders, need to better understand the risks posed by situations like this. I am not proposing that such security risks must always require action. Rather, based on my experience in IT, I believe these risks often go unidentified, and so are implicitly accepted due to lack of awareness rather than consciously evaluated.

In the next post, I cover what we can learn regarding the security of vendors based on what has been disclosed about the Target breach so far.

Thoughts On Avoiding The Complex POS Attacks

I have been watching the Target breach story unfold with great interest. In full disclosure, I have no insight into what has happened at Target, beyond the reports that are publicly available. What follows is purely hypothesis and speculation for the purposes of identifying potential mitigations for what may have happened.

Clearly, we don’t know the precise details of how the attack was carried out; however, there has been a lot of analysis of various aspects of what is known, including this report from SecureWorks. Malcovery has also released a report speculating on the method of entry based on the file hashes provided in an earlier report by iSight. I am most interested in identifying how the attack happened and what can be done to defend against such an attack. The SecureWorks report provides a good high level list of activities, but not a lot of specificity. For example:

– Firewall ACLs — Access control lists (ACLs) at network borders can be an effective short-term mitigation technique against specific hosts during an active incident when response policies dictate that network traffic to a hostile host be terminated.
– Network segmentation — Organizations should segment PCI networks to restrict access to only authorized users and services.

These, among many others, are great concepts. However, my observation is that they are not deterministic states; rather, they are subjective. What is an “authorized user or service”?

Malcovery believes that the Target attack likely began with a web server being compromised with an SQL injection attack. Let’s assume this is true for a moment in my hypothetical retailer MaliciousCo (oddly, the victim). My web server is on a dedicated network segment. But my site is, of course, a web app connected to a database server. My web site needs to connect to my SQL server, but I don’t want my SQL server hanging out on a network that is accessible to the Internet, even if I don’t allow Internet-originated traffic to the SQL server itself, so I put it on an internal network, because I have other business applications and processes that need to access that same database server. Now, I have a legitimate case where my web site and SQL server are authorized to talk to one another. Because I am a diligent architect, I even route the traffic between the web and SQL servers through an IPS. However, I have created a path into my organization from my web server to my internal systems.

One of the first lessons here, assuming this is the case, is that there should not be ANY connectivity between external server networks and internal networks. The one caveat I would extend is to allow INBOUND traffic from a limited set of internal hosts into the external networks. Outbound traffic into internal networks is not permitted: not for SQL, not for Active Directory, not for anything. Additionally, outbound traffic to the Internet should be blocked from hosts on Internet-accessible networks too, allowing only inbound connections from the Internet. The exception might be a very specific mechanism for accessing a payment gateway.
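As a rough illustration of the zone policy described above, here is a sketch that encodes the allowed flows as data and checks a source/destination pair against them. The networks, the designated internal hosts, and the payment gateway address are all invented for the example; in practice this logic lives in firewall rule sets, not in Python.

```python
import ipaddress

# Hypothetical networks for the MaliciousCo example.
DMZ = ipaddress.ip_network("203.0.113.0/24")          # Internet-facing web/extranet servers
INTERNAL = ipaddress.ip_network("10.10.0.0/16")        # internal corporate network
MGMT_HOSTS = {ipaddress.ip_address("10.10.5.10")}      # the few internal hosts allowed into the DMZ
PAYMENT_GATEWAY = ipaddress.ip_address("192.0.2.50")   # the single permitted external destination

def allowed(src: str, dst: str) -> bool:
    """Conceptual zone policy: internal hosts may initiate connections into
    the DMZ, but nothing in the DMZ may initiate connections inward, and DMZ
    hosts may only reach out to the payment gateway."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if s in DMZ and d in INTERNAL:
        return False                    # no DMZ-to-internal traffic, ever
    if s in DMZ and d not in DMZ:
        return d == PAYMENT_GATEWAY     # DMZ egress only to the payment gateway
    if s in INTERNAL and d in DMZ:
        return s in MGMT_HOSTS          # only designated internal hosts may reach the DMZ
    return True                         # other flows (e.g. Internet inbound) are handled elsewhere

print(allowed("203.0.113.10", "10.10.1.5"))    # False: web server cannot reach the internal network
print(allowed("10.10.5.10", "203.0.113.10"))   # True: a designated internal host into the DMZ
```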

Having done this, any intrusion into my web server is contained on the server itself, along with any other systems that might be on that same network, and there is no practical avenue of lateral movement into the innards of my MaliciousCo network.

Interestingly, in the case of Target, I doubt very highly that the problem involved the main web environment, which includes their online retail operation. We know that the breach didn’t involve the online part of Target’s business. We have also heard Target make reference to a vendor’s credentials being used to commit the breach. At this point, it’s not at all clear exactly what they meant, but I theorize Target is referring to the BMC Patrol user ID and password seen hard coded in the POS malware. However, this opens up another line of consideration: extranets or vendor portals. I have no insight into whether Target actually has such a thing, but my hypothetical mega retailer MaliciousCo does. This vendor portal is used by vendors to receive orders, submit invoices, communicate shipment information and so on. This portal is wholly separate from my main web presence. Access to the vendor portal is obtained via an authenticated VPN and isn’t accessible to the Internet at large.

If one of my suppliers becomes compromised, an attacker might have access to my vendor portal. Since I don’t have any direct control, or even indirect control, over my vendor’s security posture (yes, I have them complete a checklist once per year, but we both know this is a Kabuki dance), I opt to treat the vendor portal exactly as I do my Internet sites, by isolating it. This effectively restricts the ability of an attacker controlling my vendor portal to move laterally into my network.

Having said all of this, we don’t actually know how Target was breached. We know that a number of major breaches in the past have happened as a result of SQL injection on web servers. But it’s also possible that the initial attack looked like a Syrian Electronic Army attack, relying on iteratively more sophisticated and deeper spear phishing attacks. Or maybe it was perpetrated using a watering hole attack using a site of interest to the retail industry – after all, we are hearing that there are many retailers involved. Or maybe it was an attack on ColdFusion running somewhere in their environment. My point is that there are many windows of opportunity. If MaliciousCo does a stellar job of isolating the web environment, determined attackers are going to try another approach to get at my juicy POS terminals.

My POS terminals should be on a strictly isolated network with all required supporting infrastructure contained on that network. The only exception is specific access to a payment gateway.

Planning for failure of other controls, my POS terminals themselves should be well locked down, using application whitelisting to block execution of any unknown software.
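To show the whitelisting idea in miniature, here is a hedged sketch that computes a binary’s SHA-256 hash and permits execution only if the hash appears on an approved list. The file name and hash value are placeholders; real products such as AppLocker or Bit9 enforce this in the operating system, with signed publishers and centrally managed rule sets.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 hashes of the only binaries approved to run
# on a POS terminal (this value is a placeholder, not a real POS binary hash).
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # pos_app.exe
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't have to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def execution_allowed(path: Path) -> bool:
    """Allow execution only if the binary's hash is on the approved list."""
    return sha256_of(path) in APPROVED_HASHES

# Example: a dropped, unknown binary would not be on the list and would be blocked.
# print(execution_allowed(Path(r"C:\pos\unknown_dropper.exe")))
```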

Configuring and isolating environments like this is inefficient, cumbersome and expensive. Our adversaries are clever and highly motivated. I am not proposing that we have to take these drastic and costly precautions; we can continue to optimize the design and operation of our IT environments around the axis of efficiency, but we should not feign surprise when major breaches occur. Breaches are inevitable where we have the intersection of means, opportunity and incentives. We can’t do a lot about the means or incentives variables. But we do control the opportunity variable.

I’ll be following up with a few more posts about other aspects we can learn from, such as monitoring.

By the way, I am not proposing that I have the only answer to this. This is a thought experiment and I encourage you to post your views, ideas or criticisms in the comments.

Clever New Phishing Lure

I thought I had seen every permutation of phishing lure imaginable.  Then, I looked at my email today and saw this:

[Image: screenshot of the funeral notification phishing email]

Pretty clever.  As usual, the trick is to overwhelm whatever skepticism a person might have with enough curiosity to click the link.  Nothing says reputable funeral home like justaskfreddie.com.

*** edit January 17, 2014
This tactic is now being reported in more mainstream news sources, like this one.

A Different Perspective On The PTV Website Vulnerability Debacle

The story of Australian teenager Joshua Rogers, who identified a vulnerability in the website of Public Transport Victoria (PTV) and subsequently reported it to PTV, and of PTV’s now infamous response of reporting Joshua to the police, has been well covered.

Critics of PTV’s reaction point out that its behavior is a continuation of the terrible practice of demonizing and criminalizing security research, similar to the story of First State Superannuation and, of course, Weev’s famous dust-up with AT&T, among other similar stories.

The common thinking, as is pointed out in a CSO article on the PTV situation, is that those people with the ability to hack are bad or dangerous, ignoring the fact that many are indeed good people who are trying to help out and earn an honest living.

I suspect that the security community expected PTV to not only fix the vulnerability, which they did, but also to thank Joshua, or even reward him; certainly not report him to the police. This seems sensible, but I’d like to offer a different perspective on how this may have come about.

First, though, I’d like to point out that I am not defending PTV’s actions. I generally find the behavior of companies who respond to above-board reports of vulnerabilities in their products with lawsuits and threats of legal action to be reprehensible and dangerous to society. However, in this case, I see some logic.

In this particular case, PTV was notified that someone had found a vulnerability that allowed access to a database containing customer information, including credit card numbers. This is a serious problem for PTV. I am not aware of the nature of the conversation or communication between PTV and Joshua, but I would bet PTV asked whether Joshua had accessed any records, made any copies of the data or communicated the vulnerability to anyone else. If that did happen, I expect that Joshua, who appears to be an upstanding person, said no to those questions, assuming the answer really was no.

Would PTV be performing proper due diligence by accepting Joshua’s word? If you were a PTV customer whose information was exposed, or a bank who would have to eat the cost of the resulting credit card fraud, or even PTV itself, which might be sued for damages, would you find this acceptable? Or would you rather demonstrate to those stakeholders that you took all reasonable actions by fixing the problem promptly and asking the police to investigate whether data may have been stolen? Wouldn’t it seem more acceptable to let Joshua convince the police that he did not do anything nefarious with the information he had access to?

There is not enough information to know if this is really what motivated PTV, or if it was indeed the normal knee-jerk reaction to a dangerous hacker-type who defiled their reputation with his supposed misdeeds of observing and reporting a security flaw. Time will likely tell.

***Update Jan 13, 2014
Dave Lewis posted the transcript of his interview with Joshua Rogers. The interview sheds more light on the situation, and it’s still hard to tell what PTV’s motivation for involving the police is. What is interesting, though, is that PTV never acknowledged Joshua’s report.

Photo by Brian Searle