Inside the Legislative and Regulatory Minefield Confronting Cybersecurity Researchers
19 June 2018 | SecurityWeek | Privacy

Legislation – especially complex legislation – often comes with unintended consequences. The EU’s General Data Protection Regulation (GDPR), which came into force on May 25, 2018, is a case in point.

GDPR and other cybersecurity laws are designed to protect privacy and property in the cyber domain. There is, however, concern that many of these laws share a common unintended consequence: in protecting people from cybercriminals, they also protect cybercriminals from security researchers.

The question is whether security research is an unintended but inevitable collateral casualty of cybersecurity legislation. While focusing on GDPR, this examination will also consider other legislation, such as the CLOUD Act, the Computer Fraud and Abuse Act (CFAA) and the Computer Misuse Act (CMA).

The WHOIS issue

One immediate example involves GDPR, the Internet Corporation for Assigned Names and Numbers (ICANN) and the WHOIS database/protocol. ICANN maintains a global database of internet domain registrations that has been readily available to security vendors and security researchers.

Starting from a single known malicious domain, researchers have been able to cross-reference its WHOIS details to locate, at speed, potentially thousands of other malicious domains registered at the same time, by the same person, or with the same contact details.
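The mechanics of such a lookup are simple. Below is a minimal sketch of a raw WHOIS query (RFC 3912) in Python, using only the standard library; the choice of server and the pivoting comment are illustrative assumptions, not any particular vendor’s tooling.

# Minimal WHOIS lookup sketch (RFC 3912): connect to a registry WHOIS
# server on TCP port 43, send the domain name, and read the full reply.
# whois.verisign-grs.com serves .com/.net; other TLDs use other servers.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# A researcher would scan the reply for registrant fields (name, email)
# and pivot: search for other domains sharing those same details.
print(whois_query("example.com"))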

However, the registrant details of EU residents are now protected data under GDPR. ICANN can no longer share that data with third parties – effectively meaning that researchers can no longer access WHOIS data to discover potentially malicious domains and protect the public from spam or phishing campaigns involving those domains.

“Many critical cybersecurity activities, like spam mitigation and botnet takedowns, depend on publicly-available records to identify those responsible for cyber-attacks, botnets, or spam email campaigns,” explains Sanjay Kalra, co-founder and chief product officer at Lacework. “The Internet’s domain management system (ICANN) identifies a domain’s owner, and researchers use that information to identify the culprits responsible for some of the Internet’s most damaging attacks.”

In this example of an unintended consequence there is a desire on all sides to solve the problem. ICANN and the European regulators have been discussing the issue for many months, and on 17 May 2018 ICANN adopted a Temporary Specification for gTLD Registration Data.

Cherine Chalaby, chair of the ICANN board of directors, blogged, “The adoption of this Temporary Specification is a major and historic milestone as we work toward identifying a permanent solution. The ICANN org and community have been closely working together to develop this interim model ahead of the GDPR’s 25 May 2018 enforcement deadline, and the Board believes that this is the right path to take.”

This is one unintended consequence that might find a solution. “There is still hope to see a compromise where private data would stay protected while allowing the ‘good guys’ to research and fight cybercrime,” comments ESET security intelligence team lead Alexis Dorais-Joncas. “Some proposed solutions would allow, for example, certified third parties such as law enforcement or researchers to access the redacted part of the WHOIS data.”

Laws Impacting Cybersecurity and Research

Nevertheless, the wider issue of unintended consequences for security researchers remains. Can researchers download stolen credentials for analysis? Can they probe a potential C&C server and evaluate the personal details it holds? Can they take over the email accounts of scammers for research purposes?

Laws, Regulations and Researchers
Privacy regulations, including GDPR, generally make some concessions – for example, for national security issues and for law enforcement investigations under certain circumstances. Independent researchers do not fall directly within either category.

Whether security research is permitted or precluded by GDPR and other regulations is a complex issue with many different interpretations – and SecurityWeek spoke to several researchers for their understanding of the difficulties.

Erka Koivunen, CISO at F-Secure, suggests that potential problems are neither new nor restricted to GDPR. “Security research on stolen datasets has been problematic even under earlier laws,” he said, adding that other regulations, such as the EU’s export control regulations and the Wassenaar Arrangement, “are almost hostile to security research and testing.”

He noted, however, that European regulators have not so far been too concerned about the ‘notification’ requirement of existing regulations (except for the telecoms-specific regulation).

His colleague, F-Secure’s principal security consultant Jarosław Kamiński, widens the researchers’ problem from privacy laws (such as GDPR) to property laws (such as CFAA and CMA): “Obtaining data from C2 servers and cache hives may constitute computer break-in if authorization and access control mechanisms were circumvented in the process.”

Immediately, three important issues have been raised: adequate authorization; whether the regulators are strict in interpretation and enforcement; and the effect of other regulations (such as the U.S. Computer Fraud and Abuse Act and the UK’s Computer Misuse Act).

Authorization and other regulations
The ‘authorization’ issue includes a common belief that researchers can ignore regulations if they have been authorized to do so by the FBI. Luis Corrons, security evangelist at Avast, comments, “Researchers cannot legally hack into a potential C&C. The only way to do that is in cooperation with law enforcement.”

Josu Franco, strategy and technology advisor at Panda Security, has a similar viewpoint. “Private individuals or companies,” he told SecurityWeek, “do not have the right to hack back at servers, unless it is done by (or in collaboration with) law enforcement as part of an investigation. So, it would be illegal for a researcher to hack into a server by himself/herself and download its contents.”

Corrons believes this process could be made easier and clearer if governments were to more actively encourage collaboration between public agencies and private researchers.

However, it isn’t clear that the FBI in the U.S. and other agencies elsewhere can authorize otherwise illegal cyber activity. “I do not believe this is ever OK under any law, or at least not in the U.S.,” comments Brian Bartholomew, principal security researcher at Kaspersky Lab. “‘Hacking’ into any system implies unauthorized access, which is illegal under the Computer Fraud and Abuse Act.”

He continued, “While there have been proposals to allow network defenders to ‘hack back’ under certain circumstances, such as the Active Cyber Defense Certainty (ACDC) Act, none have been enacted into law and they have raised significant concerns among various stakeholders in the cyber ecosystem. In my opinion, the legal implications of such an approach alone make such proposals problematic all around.”

Scott Petry, CEO and co-founder of Authentic8, has a similar view. “I’m not aware of instances where a government agency has legitimized hacking activity,” he said. “The FBI would not have jurisdiction in non-US regions, so the EU agencies would no more honor the FBI’s approval of the activity than U.S. law enforcement would if Interpol or another EU agency legitimized an attack against US resources. So, I don’t think there’s a free pass that either organization can offer to parties in the other region.”

The recent CLOUD Act adds a further complication. Technically, it allows the FBI to authorize an otherwise illegal action – indeed, the FBI can insist upon it. The FBI can now demand access to the PII of EU residents held by a U.S. company anywhere in the world – which could place that company in conflict with GDPR.

Furthermore, some researchers believe that CLOUD will have a chilling effect on future U.S. research. “Given that pen-testing and research require either formal agreements or navigating challenging questions around legal-to-do research, the CLOUD Act is incredibly problematic,” comments Robert Burney, technical instructor at SecureSet. “Legally, a security researcher cannot test cloud infrastructure without breaking laws, and this increases that risk.” He fears that researchers will avoid testing cloud infrastructure at the same time as more and more companies are adopting it.

“This increase in policy and politics,” he suggests, “will prevent high quality research in the United States and reduce our overall security… This is an inherent risk in both the GDPR and CLOUD Act. Security researchers will need to know almost as much about legal policy as they do the computers they research – and this will slow our ability to improve overall security.”

Adam McNeil, senior malware intelligence analyst at Malwarebytes, is less concerned. “The Computer Fraud and Abuse Act (U.S.) and the Computer Misuse Act (UK) prohibit unauthorized access to computer systems, but both offer protections for academic and private sector research. Taken together, security researchers have a somewhat clear responsibility: don’t access systems without authorization; and if vulnerabilities are found, submit the information via responsible disclosure practices.”

But still there are problems and difficulties. Joseph Savirimuthu, senior lecturer in law at the Liverpool Law School, points out that there is no formal definition distinguishing the researcher from the hacker. In the UK, “The Computer Misuse Act 1990 and new data breach notification rules do not distinguish between White Hat and Black Hat. Neither does the Fraud Act 2006.”

Interpretation and enforcement of GDPR
In the final analysis, what a law or regulation says is not as important as how the enforcers of the law (law enforcement or official regulators) respond to that law; and ultimately how judges interpret that law. GDPR is a particularly difficult example. Firstly, despite its ‘unifying’ intention, there is a degree of flexibility that allows different EU nations to implement the law according to national preferences.

Secondly, it is still ‘enforced’ by national regulators who may vary in the severity of their interpretation. The UK, for example, is traditionally more business-friendly in its application of privacy laws than some of its European partners – such as France and Germany.

Thirdly, there is inevitably a degree of ambiguity in words that must be translated into multiple languages. For example, when Julian Assange attempted to overturn his Sweden-issued arrest warrant, UK judges chose to use the French language version of the relevant EU law to assert its validity rather than the English language version (it was potentially not valid under a strict interpretation of the English language version of the same European law).

Such vagaries leave it far from clear whether security researchers will be allowed to continue their work or discouraged from doing so – and the security researchers themselves hold widely different views.

Joel Wallenstrom, CEO at Wickr, is cautiously hopeful. “We don’t anticipate that GDPR will make security research more difficult than it already is for many infosec researchers. GDPR-specific implications aren’t yet clear. Enforcement actions after May 25 will certainly provide signal to the industry on where the EU regulation is steering. Downloading a database to perform forensic analysis does not necessarily translate into becoming a ‘data operator’ or ‘processor’ under GDPR.”

“The notion that legitimate security researchers would be held responsible for GDPR is silly,” comments Kathie Miley, COO of Cybrary. “As long as they are not downloading or processing personal data there is no applicability to GDPR. Also, there is no logical reason a white hat security researcher would need to download or process an EU citizen’s personal data.”

This is not a universal view. Researchers sometimes download stolen PII from the dark web or paste sites for their own analysis. And although it is commonly held that researchers who treat PII responsibly and in accordance with the principles of GDPR will not be held to account, there is no such guarantee within GDPR or any other cyber legislation.

Robin Wood (aka DigiNinja) is an independent penetration tester who has done much work on passwords. When asked if GDPR would make him think twice about downloading PII (in the form of stolen and dumped user IDs and passwords), he replied, “I don’t know whether technically it would be a breach to hold the data, but I tend to hold these types of things for fairly short periods then get rid of them. When I publish things, it is always aggregated enough to be anonymized, so the publishing shouldn’t be an issue.”

He doesn’t know the answer to the problem, but does not intend to change his current behavior.
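As a purely illustrative sketch of the kind of aggregation he describes (not his actual workflow), a researcher might reduce a leaked password dump to anonymized summary statistics and then delete the raw file; the file name here is hypothetical.

import os
from collections import Counter

def summarize_passwords(path):
    # Count password lengths only; no identifying data is retained.
    lengths = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            pw = line.rstrip("\n")
            if pw:
                lengths[len(pw)] += 1
    return {"total": sum(lengths.values()),
            "length_distribution": dict(sorted(lengths.items()))}

stats = summarize_passwords("dump.txt")  # hypothetical local dump file
os.remove("dump.txt")                    # discard the raw data once aggregated
print(stats)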

Precedent
A common problem with many laws is that they tend to be abused by law enforcement agencies seeking to find legal justification for a preferred course of action. ‘Overreach’ is a common criticism.

Examples could include the U.S. government’s use of the Stored Communications Act of 1986 to issue a search warrant on Microsoft for data held in Ireland – an action since rendered moot by the passing of the CLOUD Act.

In 2012, one-time self-styled Anonymous spokesperson Barrett Brown was indicted on charges that included posting a link to stolen Stratfor data that was already available on the internet. Around the same time, Jeremy Hammond was charged with carrying out the actual Stratfor hack, apparently having been urged to do so by FBI informant Hector Xavier Monsegur (aka Sabu) of the LulzSec hacking crew.

While Hammond eventually pled guilty and received the maximum 10 years in jail for ‘doing’ the hack, Brown faced 45 years for posting a link to some of its proceeds.

However, perhaps the most iconic example of alleged overreach involved Aaron Swartz, co-founder of Reddit. Swartz was accused of breaking into MIT and illegally downloading millions of academic articles from the subscription-only JSTOR. It is claimed he simply felt that these academic articles should be freely available to everyone.

Swartz faced charges of computer fraud, wire fraud and other crimes carrying a maximum sentence of 35 years in prison and a $1 million fine. He committed suicide – and the charges were posthumously dropped.

These are rare occurrences and do not directly relate to security research; but they demonstrate that laws can potentially be misused or used for purposes not originally intended by the lawmakers.

A view from the UK GDPR regulator
Given the scope for confusion over the effect of GDPR on security researchers, SecurityWeek approached the UK Information Commissioner’s Office for comment.

An ICO spokesperson told SecurityWeek: “The ICO recognizes the valuable work undertaken by security researchers. Data protection law neither prevents nor prohibits such activity. As the GDPR states, processing personal data for the purposes of network and information security (and the security of related services) constitutes a legitimate interest of the data controller concerned. Organizations that do this must nevertheless ensure that the processing is essential for these purposes, is proportionate to what they are trying to achieve, and that they follow the GDPR’s requirements.

“Further,” he added, “the Data Protection Bill [the UK-specific law being readied to replace GDPR following Brexit] also includes a specific defense relating to the re-identification of de-identified personal data; e.g., where a security researcher may seek to test the effectiveness of the de-identification process.”

(It is worth noting that there is no guarantee that the UK’s GDPR replacement will be considered ‘adequate’ by the EU post-Brexit. This could introduce further complications for UK researchers. The same activity could be considered legal by the ICO, but illegal by, for example, the French CNIL regulator.)

The GDPR statement on the legitimate interest of security researchers is found in recital 49. In full, it states:

“The processing of personal data to the extent strictly necessary and proportionate for the purposes of ensuring network and information security, i.e. the ability of a network or an information system to resist, at a given level of confidence, accidental events or unlawful or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted personal data, and the security of the related services offered by, or accessible via, those networks and systems, by public authorities, by computer emergency response teams (CERTs), computer security incident response teams (CSIRTs), by providers of electronic communications networks and services and by providers of security technologies and services, constitutes a legitimate interest of the data controller concerned.”

Notably, it does not state that ‘security research’ is allowed – only that some forms of research, strictly limited to what is ‘necessary and proportionate’, can be defined as a ‘legitimate interest’.

Summary
Cyber laws are good faith attempts by lawmakers to protect the privacy and computer-related property of internet users. The difficulty with all laws is that once they are enacted, interpretive control passes to the law enforcers and judiciary.

Different jurisdictions may treat the same law differently, while different judges may interpret the letter of the law differently.

The UK ICO has made it clear that it does not believe that genuine research is precluded by the GDPR – indeed, recital 49 specifically allows it. Nevertheless, even the allowance of security research hinges on the value judgment of what is meant by ‘strictly necessary and proportionate for the purposes of ensuring network and information security’.

It is likely that security researchers who treat personal data in the way companies are required to treat it, and who delete it after use, will not be held to be in breach of GDPR.

“Researchers need to have a legitimate reason to have to work with the data,” explains Michael Aminzade, VP global compliance and risk service at Trustwave. “If they have reason because they are looking into the impacts of a breach then for the time of the research they will be OK to work with this data, but they need to work with it within the bounds of the regulations. Once the research is finished this data can’t be kept ‘in case’ of future research, as there is no legitimate need at that point – so the data should be deleted within the requirements of the regulation.”

However, where research requires breaking into and analyzing data found on a suspected criminal C&C server, the researcher is on less solid ground. The consensus among security firms is that this should only ever be done in conjunction with, for, or on behalf of, law enforcement in an ongoing investigation. However, it is unlikely that law enforcement approval has any actual weight in law.

The reality is that all forms of active security research must tread a very fine line between legal and illegal activity – and it will ultimately be up to the courts to decide on the legality of any specific piece of research challenged by a regulator.