The Malicious Use of Artificial Intelligence in Cybersecurity
28.3.2018 securityweek Cyber

Artificial Intelligence Risks

Criminals and Nation-state Actors Will Use Machine Learning Capabilities to Increase the Speed and Accuracy of Attacks

Scientists from leading universities, including Stanford and Yale in the U.S. and Oxford and Cambridge in the UK, together with civil society organizations and representatives from the cybersecurity industry, last month published an important paper titled, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

While the paper (PDF) looks at a range of potential malicious misuses of artificial intelligence (which includes and focuses on machine learning), our purpose here is to largely exclude the military and concentrate on the cybersecurity aspects. It is, however, impossible to completely exclude the potential political misuse given the interaction between political surveillance and regulatory privacy issues.

Artificial intelligence (AI) is the use of computers to perform the analytical functions normally only available to humans – but at machine speed. ‘Machine speed’ is described by Corvil’s David Murray as, “millions of instructions and calculations across multiple software programs, in 20 microseconds or even faster.” AI simply makes the unrealistic, real.

The problem discussed in the paper is that this function has no ethical bias. It can be used as easily for malicious purposes as it can for beneficial purposes. AI is largely dual-purpose; and the basic threat is that zero-day malware will appear more frequently and be targeted more precisely, while existing defenses are neutralized – all because of AI systems in the hands of malicious actors.

Current Machine Learning and Endpoint Protection
Today, the most common use of the machine learning (ML) type of AI is found in next-gen endpoint protection systems; that is, the latest anti-malware software. It is called ‘machine learning’ because the AI algorithms within the system ‘learn’ from many millions of samples and behavioral patterns of real malware – a corpus that keeps growing.

Detection of a new pattern can be compared with known bad patterns to generate a probability level for potential maliciousness at a speed and accuracy not possible for human analysts within any meaningful timeframe.
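
Purely as an illustration of that scoring step, here is a toy sketch in Python using scikit-learn. The features and labels are invented stand-ins for the static and behavioral attributes real products extract from samples; this is not any vendor's actual engine:

```python
# Illustrative only: a toy next-gen-AV classifier, not any vendor's engine.
# "Features" here are invented stand-ins for attributes such as entropy,
# imported-API counts, or section sizes; 1 = malware, 0 = benign.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

X_train = rng.random((1000, 8))                      # hypothetical samples
y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)  # synthetic labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A never-seen sample is scored in milliseconds; the output is a
# probability of maliciousness, not a yes/no signature hit.
new_sample = rng.random((1, 8))
p_malicious = clf.predict_proba(new_sample)[0, 1]
print(f"P(malicious) = {p_malicious:.2f}")
```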

It works – but with two provisos: it depends upon the quality of the ‘learning’ algorithm, and the integrity of the data set from which it learns.

Potential abuse can come in both areas: manipulation or even alteration of the algorithm; and poisoning the data set from which the machine learns.

The report warns, “It has been shown time and again that ML algorithms also have vulnerabilities. These include ML-specific vulnerabilities, such as inducing misclassification via adversarial examples or via poisoning the training data… ML algorithms also remain open to traditional vulnerabilities, such as memory overflow. There is currently a great deal of interest among cyber-security researchers in understanding the security of ML systems, though at present there seem to be more questions than answers.”

The danger is that while these threats to ML already exist, criminals and nation-state actors will begin to use their own ML capabilities to increase the speed and accuracy of attacks against ML defenses.

On data set poisoning, Andy Patel, security advisor at F-Secure, warns, “Diagnosing that a model has been incorrectly trained and is exhibiting bias or performing incorrect classification can be difficult.” The problem is that even the scientists who develop the AI algorithms don’t necessarily understand how they work in the field.

He also notes that malicious actors aren’t waiting for their own ML to do this. “Automated content generation can be used to poison data sets. This is already happening, but the techniques to generate the content don't necessarily use machine learning. For instance, in 2017, millions of auto-generated comments regarding net neutrality were submitted to the FCC.”
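
To see why poisoned training data is so damaging, consider a minimal sketch (invented data, scikit-learn assumed): flipping even a fraction of training labels measurably degrades a classifier's accuracy on clean test data, and real poisoning attacks are far subtler than this.

```python
# Sketch of training-data poisoning: flip a fraction of labels and watch
# accuracy on clean test data degrade. Toy data; real attacks are subtler.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # ground truth
X_train, y_train = X[:1500], y[:1500].copy()
X_test, y_test = X[1500:], y[1500:]

def accuracy(labels):
    model = LogisticRegression().fit(X_train, labels)
    return model.score(X_test, y_test)

print("clean training data:   ", accuracy(y_train))

# Attacker alters 30% of the training labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] ^= 1
print("poisoned training data:", accuracy(poisoned))
```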

The basic conflict between attackers and defenders will not change with machine learning – each side seeks to stay ahead of the other; and each side briefly succeeds. “We need to recognize that new defenses that utilize technology such as AI may be most effective when initially released before bad actors are building countermeasures and evasion tactics intended to circumvent them,” comments Steve Grobman, CTO at McAfee.

Put simply, the cybersecurity industry is aware of the potential malicious use of AI, and is already considering how best to react to it. “Security companies are in a three-way race between themselves and these actors, to innovate and stay ahead, and up until now have been fairly successful,” observes Hal Lonas, CTO at Webroot. “Just as biological infections evolve to more resistant strains when antibiotics are used against them, so we will see malware attacks change as AI defense tactics are used over time.”

Hyrum Anderson, one of the authors of the report, and technical director of data science at Endgame, accepts the industry understands ML can be abused or evaded, but not necessarily the methods that could be employed. “Probably fewer data scientists in infosec are thinking how products might be misused,” he told SecurityWeek; “for example, exploiting a hallucinating model to overwhelm a security analyst with false positives, or a similar attack to make AI-based prevention DoS the system.”

Indeed, even this report failed to mention one type of attack (although there will undoubtedly be others). “The report doesn’t address the dangerous implications of machine learning based de-anonymization attacks,” explains Joshua Saxe, chief data scientist at Sophos. Data anonymization is a key requirement of many regulations. AI-based de-anonymization is likely to be trivial and rapid.
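
As a rough illustration of the principle (simpler than the ML-based attacks Saxe has in mind), the most basic de-anonymization is a linkage attack: joining 'anonymized' records to public data on quasi-identifiers. ML-based attacks apply the same idea to fuzzier signals such as writing style, photos, or timing. All data below is invented:

```python
# Minimal linkage-attack sketch: "anonymized" records re-identified by
# joining on quasi-identifiers (ZIP code + birth year). Invented data.
import pandas as pd

anonymized = pd.DataFrame({
    "zip": ["60601", "60601", "94105"],
    "birth_year": [1985, 1991, 1985],
    "diagnosis": ["A", "B", "C"],          # the "protected" attribute
})
public = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "zip": ["60601", "60601", "94105"],
    "birth_year": [1985, 1991, 1985],
})

# The join restores the identities the anonymization was meant to remove.
print(public.merge(anonymized, on=["zip", "birth_year"]))
```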

Anderson describes three guidelines that Endgame uses to protect the integrity and secure use of its own ML algorithms. The first is to understand and appropriately limit the AI interaction with the system or endpoint. The second is to understand and limit the data ingestion; for example, anomaly detection that ingests all events everywhere versus anomaly detection that ingests only a subset of ‘security-interesting’ events. In order to protect the integrity of the data set, he suggests, “Trust but verify data providers, such as the malware feeds used for training next generation anti-virus.”

The third: “After a model is built, and before and after deployment, proactively probe it for blind spots. There are fancy ways to do this (including my own research), but at a minimum, doing this manually is still a really good idea.”
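
A crude, hypothetical version of such manual probing might look like the sketch below: perturb known-malicious samples slightly and flag any that the model now classifies as benign. Toy model and features; this is not Endgame's methodology:

```python
# Crude blind-spot probing: nudge known-malicious samples and count how
# many flip to "benign". Real probing perturbs actual file attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.random((500, 8))
y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

malicious = X[y == 1][:50]
for eps in (0.01, 0.05, 0.1):
    probes = np.clip(malicious + rng.normal(0, eps, malicious.shape), 0, 1)
    flips = (model.predict(probes) == 0).sum()
    print(f"eps={eps}: {flips}/{len(probes)} malicious probes now pass as benign")
```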

Identity
A second area of potential malicious use of AI revolves around ‘identity’. AI’s ability to both recognize and generate manufactured images is advancing rapidly. This can have both positive and negative effects. Facial recognition for the detection of criminal acts and terrorists would generally be considered beneficial – but it can go too far.

“Note, for example,” comments Sophos’ Saxe, “the recent episode in which Stanford researchers released a controversial algorithm that could be used to tell if someone is gay or straight, with high accuracy, based on their social media profile photos.”

“The accuracy of the algorithm,” states the research paper, “increased to 91% [for men] and 83% [for women], respectively, given five facial images per person.” Human judges achieved much lower accuracy: 61% for men and 54% for women. The result is typical: AI can improve human performance at a scale that cannot be contemplated manually.

“Critics pointed out that this research could empower authoritarian regimes to oppress homosexuals,” adds Saxe, “but these critiques were not heard prior to the release of the research.”

This example of the potential misuse of AI in certain circumstances touches on one of the primary themes of the paper: the dual-use nature of, and the role of ‘ethics’ in, the development of artificial intelligence. We look at ethics in more detail below.

A more positive use of AI-based recognition can be found in recent advances in speech recognition and language comprehension. These advances could be used for better biometric authentication – were it not for the dual-use nature of AI. Along with facial and speech recognition there has been a rapid advance in the generation of synthetic images, text, and audio; which, says the report, “could be used to impersonate others online, or to sway public opinion by distributing AI-generated content through social media channels.”

[Image: Synthetic image generation in 2014 and 2017]

For authentication, Webroot’s Lonas believes we will need to adapt our current authentication approach. “As the lines between machines and humans become less discernible, we will see a shift in what we currently see in authentication systems, for instance logging in to a computer or system. Today, authentication is used to differentiate between various humans and prevent impersonation of one person by another. In the future, we will also need to differentiate between humans and machines, as the latter, with help from AI, are able to mimic humans with ever greater fidelity.”

The future potential for AI-generated fake news is a completely different problem, but one that could make Russian interference in the 2016 presidential election look somewhat pedestrian by comparison.

Just last month, the U.S. indicted thirteen Russians and three companies “for committing federal crimes while seeking to interfere in the United States political system.” A campaign allegedly involving hundreds of people working in shifts and with a budget of millions of dollars spread misinformation and propaganda through social networks. Such campaigns could increase in scope with fewer people and far less cost with the use of AI.

In short, AI could be used to make fake news more common and more realistic; or make targeted spear-phishing more compelling at the scale of current mass phishing through the misuse or abuse of identity. This will affect both business cybersecurity (business email compromise, BEC, could become even more effective than it already is), and national security.

The Ethical Problem
The increasing use of AI in cyber will inevitably draw governments into the equation. They will be concerned about more efficient cyber attacks against the critical infrastructure, but will also become embroiled in civil society concerns over their own use of AI in mass surveillance. Since machine learning algorithms become more efficient with the size of the data set from which they learn, the ‘own it all’ mentality exposed by Edward Snowden will become increasingly compelling to law enforcement and intelligence agencies.

The result is that governments will be drawn into the ethical debate about AI and the algorithms it uses. In fact, this process has already started, with the UK’s financial regulator warning that it will be monitoring the use of AI in financial trading.

Governments will seek to assure people that their own use of citizens’ big data is ethical (relying on judicial oversight, court orders, minimal intrusion, and so on). They will also seek to reassure people that business makes ethical use of artificial intelligence – GDPR has already made a start by placing controls over automated user profiling.

While governments often like the idea of ‘self-regulation’ (it absolves them from appearing to be overly prescriptive), ethics in research is rarely adequately addressed by scientists themselves. The report states the problem: “Appropriate responses to these issues may be hampered by two self-reinforcing factors: first, a lack of deep technical understanding on the part of policymakers, potentially leading to poorly-designed or ill-informed regulatory, legislative, or other policy responses; second, reluctance on the part of technical researchers to engage with these topics, out of concern that association with malicious use would tarnish the reputation of the field and perhaps lead to reduced funding or premature regulation.”

There is a widespread belief among technologists that politicians simply don’t understand technology. Chris Roberts, chief security architect at Acalvio, is an example. “God help us if policy makers get involved,” he told SecurityWeek. “Having just read the last thing they dabbled in, I’m dreading what they’d come up with, and would assume it’ll be too late, too wordy, too much crap and red tape. They’re basically five years behind the curve.”

The private sector is little better. Businesses are duty bound, in a capitalist society, to maximize profits for their shareholders. New ideas are frequently rushed to market with little thought for security; and new algorithms will probably be treated likewise.

Oliver Tavakoli, CTO at Vectra, believes that the security industry is obligated to help. “We must adopt defensive methodologies which are far more flexible and resilient rather than fixed and (supposedly) impermeable,” he told SecurityWeek. “This is particularly difficult for legacy security vendors who are more apt to layer on a bit of AI to their existing workflow rather than rethinking everything they do in light of the possibilities that AI brings to the table.”

“The security industry has the opportunity to show leadership with AI and focus on what will really make a difference for customers and organizations currently being pummeled by cyberattacks,” agrees Vikram Kapoor, co-founder and CTO at Lacework. His view is that there are many areas where the advantages of AI will outweigh the potential threats.

“For example,” he continued, “auditing the configuration of your system daily for security best practices should be automated – AI can help. Continuously checking for any anomalies in your cloud should be automated – AI can help there too.”

It would probably be wrong, however, to demand that researchers limit their research: the value lies in the research itself rather than in ethical speculation about its potential subsequent use or misuse. The example of Stanford’s sexual orientation algorithm is a case in point.

Google mathematician Thomas Dullien (aka Halvar Flake on Twitter) puts a common researcher view. Commenting on the report, he tweeted, “Dual-use-ness of research cannot be established a-priori; as a researcher, one usually has only the choice to work on ‘useful’ and ‘useless’ things.” In other words, you cannot – or at least should not – restrict research through imposed policy because at this stage, its value (or lack of it) is unknown.

McAfee’s Grobman believes that concentrating on the ethics of AI research is the wrong focus for defending against AI. “We need to place greater emphasis on understanding the ability for bad actors to use AI,” he told SecurityWeek; “as opposed to attempting to limit progress in the field in order to prevent it.”

Summary
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation makes four high-level recommendations “to better forecast, prevent, and mitigate” the evolving threats from unconstrained artificial intelligence. They are: greater collaboration between policymakers and researchers (that is, government and industry); the adoption of ethical best practices by AI researchers; a methodology for handling dual-use concerns; and an expansion of the stakeholders and domain experts involved in discussing the issues.

Although the detail of the report makes many more finely-grained comments, these high-level recommendations indicate there is no immediately obvious solution to the threat posed by AI in the hands of cybercriminals and nation-state actors.

Indeed, it could be argued that there is no solution. Just as there is no solution to the criminal use of encryption – merely mitigation – perhaps there is no solution to the criminal use of AI – just mitigation. If this is true, defense against the criminal use of AI will be down to the very security vendors that have proliferated the use of AI in their own products.

It is possible, however, that the whole threat of unbridled artificial intelligence in the cyber world is being over-hyped.

F-Secure’s Patel comments, “Social engineering and disinformation campaigns will become easier with the ability to generate ‘fake’ content (text, voice, and video). There are plenty of people on the Internet who can very quickly figure out whether an image has been photoshopped, and I’d expect that, for now, it might be fairly easy to determine whether something was automatically generated or altered by a machine learning algorithm.

“In the future,” he added, “if it becomes impossible to determine if a piece of content was generated by ML, researchers will need to look at metadata surrounding the content to determine its validity (for instance, timestamps, IP addresses, etc.).”

In short, Patel’s suggestion is that AI will simply scale, in quality and quantity, the same threats that are faced today. But AI can also scale and improve the current defenses against those threats.

“The fear is that super powerful machine-learning-based fuzzers will allow adversaries to easily and quickly find countless zero-day vulnerabilities. Remember, though, that these fuzzers will also be in the hands of the white hats… In the end, things will probably look the same as they do now.”


Microsoft Patches for Meltdown Introduced Severe Flaw: Researcher
28.3.2018 securityweek Vulnerability

Some of the Windows updates released by Microsoft to mitigate the Meltdown vulnerability introduce an even more severe security hole, a researcher has warned.

Microsoft has released patches for the Meltdown and Spectre vulnerabilities every month since their disclosure in January. While at this point the updates should prevent these attacks, a researcher claims some of the fixes create a bigger problem.

According to Ulf Frisk, the updates released by Microsoft in January and February for Windows 7 and Windows Server 2008 R2 patch Meltdown, but they allow an attacker to easily read from and write to memory.

He noted that while Meltdown allows an attacker to read megabytes of data per second, the new vulnerability can be exploited to read gigabytes of data per second – in one of the tests he conducted, the expert managed to access the memory at speeds of over 4 GB/s. Moreover, the flaw also makes it possible to write to memory.

Frisk says exploitation does not require any sophisticated exploit code – standard read and write instructions will get the job done – as Windows 7 has already mapped the memory for each active process.

“In short - the User/Supervisor permission bit was set to User in the PML4 self-referencing entry. This made the page tables available to user mode code in every process. The page tables should normally only be accessible by the kernel itself,” the researcher explained. “The PML4 is the base of the 4-level in-memory page table hierarchy that the CPU Memory Management Unit (MMU) uses to translate the virtual addresses of a process into physical memory addresses in RAM.”

“Once read/write access has been gained to the page tables it will be trivially easy to gain access to the complete physical memory, unless it is additionally protected by Extended Page Tables (EPTs) used for Virtualization. All one have to do is to write their own Page Table Entries (PTEs) into the page tables to access arbitrary physical memory,” he said.
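
Some context on why this is “trivially easy” on Windows 7: the PML4 self-reference entry there sits at a fixed, non-randomized index (0x1ED), so the page tables live at a predictable virtual address. The sketch below reconstructs that address from the index; it illustrates the addressing arithmetic only, not Frisk's attack:

```python
# Windows 7 uses a fixed PML4 self-reference index (0x1ED, not randomized
# as on newer Windows), so the page tables sit at a predictable address.
SELF_REF = 0x1ED

def canonical(addr: int) -> int:
    """Sign-extend bit 47 to form a canonical 64-bit virtual address."""
    return addr | 0xFFFF000000000000 if addr & (1 << 47) else addr

# Following the self-reference at every level of the 4-level page-table
# hierarchy lands back on the PML4 page itself.
pml4_va = canonical((SELF_REF << 39) | (SELF_REF << 30) |
                    (SELF_REF << 21) | (SELF_REF << 12))
print(hex(pml4_va))  # 0xfffff6fb7dbed000
```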

The researcher says anyone can reproduce the vulnerability using a direct memory access (DMA) attack tool he developed a few years ago. The attack works against devices running Windows 7 x64 or Windows Server 2008 R2 with the Microsoft patches from January or February installed. The issue did not exist before January and it appears to have been addressed by Microsoft with the March updates. Windows 10 and Windows 8.1 are not affected, Frisk said.

SecurityWeek has reached out to Microsoft for comment and will update this article if the company responds.

Frisk previously discovered a macOS vulnerability that could have been exploited to obtain FileVault passwords, and demonstrated some UEFI attacks.


Kaspersky Open Sources Internal Distributed YARA Scanner
28.3.2018 securityweek Security

Kaspersky Lab has released the source code of an internally-developed distributed YARA scanner as a way of giving back to the infosec community.

Originally developed by VirusTotal software engineer Victor Alvarez, YARA is a tool that allows researchers to analyze and detect malware by creating rules that describe threats based on textual or binary patterns.

Kaspersky Lab has developed its own version of the YARA tool. Named KLara, the Python-based application relies on a distributed architecture to allow researchers to quickly scan large collections of malware samples.

Looking for potential threats in the wild requires a significant amount of resources, which can be provided by cloud systems. Using a distributed architecture, KLara allows researchers to efficiently scan one or more YARA rules over large data collections – Kaspersky says it can scan 10TB of files in roughly 30 minutes.

“The project uses the dispatcher/worker model, with the usual architecture of one dispatcher and multiple workers. Worker and dispatcher agents are written in Python. Because the worker agents are written in Python, they can be deployed in any compatible ecosystem (Windows or UNIX). The same logic applies to the YARA scanner (used by KLara): it can be compiled on both platforms,” Kaspersky explained.
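
Purely as an illustration of a worker's core job (this is not KLara's code, and the dispatcher wiring sketched in the comments is hypothetical), a single-machine scan using the yara-python bindings might look like this:

```python
# Not KLara itself: a minimal sketch of what a scanning worker does,
# using the yara-python bindings. KLara fans such scans out across
# many workers via its dispatcher.
import yara

RULE = r"""
rule demo_suspicious_string
{
    strings:
        $a = "cmd.exe /c" nocase
    condition:
        $a
}
"""

rules = yara.compile(source=RULE)

def scan(path: str):
    """Return the names of rules matching a single sample file."""
    return [m.rule for m in rules.match(path)]

# A worker would pull batches of sample paths from the dispatcher's
# queue and report matches back, e.g. (hypothetical wiring):
# for path in job.sample_paths:
#     report(path, scan(path))
```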

KLara provides a web-based interface where users can submit jobs, check their status, and view results. Results can also be sent to a specified email address.

The tool also provides an API that can be used to submit new jobs, get job results and details, and retrieve the matched MD5 hashes.

Kaspersky Lab has relied on YARA in many of its investigations, but one of the most notable cases involved the 2015 Hacking Team breach. The security firm wrote a YARA rule based on information from the leaked Hacking Team files, and several months later it led to the discovery of a Silverlight zero-day vulnerability.

The KLara source code is available on GitHub under a GNU General Public License v3.0. Kaspersky says it welcomes contributions to the project.

This is not the first time Kaspersky has made available the source code of one of its internal tools. Last year, it released the source code of Bitscout, a compact and customizable tool designed for remote digital forensics operations.


Facebook Announces New Steps to Protect Users' Privacy
28.3.2018 securityweek Social

Facebook Revamps Privacy Settings Amid Data Breach Outcry

Facebook on Wednesday unveiled new privacy settings aiming to give its users more control over how their data is shared, following an outcry over hijacking of personal information at the giant social network.

The updates include easier access to Facebook's user settings and tools to easily search for, download and delete personal data stored by Facebook.

Facebook said a new privacy shortcuts menu will allow users to quickly increase account security, manage who can see their information and activity on the site and control advertisements they see.

"We've heard loud and clear that privacy settings and other important tools are too hard to find and that we must do more to keep people informed," chief privacy officer Erin Egan and deputy general counsel Ashlie Beringer said in a blog post.

"We're taking additional steps in the coming weeks to put people more in control of their privacy."

The new features follow fierce criticism after it was revealed millions of Facebook users' personal data was harvested by a British firm linked to Donald Trump's 2016 presidential campaign -- although Facebook said the changes have been "in the works for some time."

Earlier this month, whistleblower Christopher Wylie revealed political consulting company Cambridge Analytica obtained profiles on 50 million Facebook users via an academic researcher's personality prediction app.

The app was downloaded by 270,000 people, but also scooped up their friends' data without consent -- as was possible under Facebook's rules at the time.

Egan and Beringer also announced updates to Facebook's terms of service and data policy to improve transparency about how the site collects and uses data.

Deepening tech crisis

Facebook's move comes as authorities around the globe investigate how Facebook handles and shares private data, and with its shares having tumbled more than 15 percent, wiping out tens of billions in market value.

The crisis also threatens the Silicon Valley tech industry whose business model revolves around data collected on internet users.

On Tuesday, tech shares led a broad slump on Wall Street, with an index of key tech stocks losing nearly six percent.

The US Federal Trade Commission this week said it had launched a probe into whether the social network violated consumer protection laws or a 2011 court-approved agreement on protecting private user data.

US lawmakers were seeking to haul Facebook CEO Mark Zuckerberg to Washington to testify on the matter.

Authorities in Britain have seized data from Cambridge Analytica in their investigation, and EU officials have warned of consequences for Facebook.

Facebook has apologized for the misappropriation of data and vowed to fix the problem. Facebook took out full-page ads in nine major British and US newspapers on Sunday to apologize to users.

"We have a responsibility to protect your information. If we can't we don't deserve it," Zuckerberg said in the ads.


Critical Flaws Found in Siemens Telecontrol, Building Automation Products
28.3.2018 securityweek Vulnerability

Siemens informed customers this week that critical vulnerabilities have been found in some of its telecontrol and building automation products, and revealed that some SIMATIC systems are affected by a high severity flaw.

One advisory published by the company describes several critical and high severity flaws affecting Siveillance and Desigo building automation products. The security holes exist due to the use of a vulnerable version of a Gemalto license management system (LMS).

The bugs affect Gemalto Sentinel LDK and they can be exploited for remote code execution and denial-of-service (DoS) attacks.

The vulnerabilities were discovered by researchers at Kaspersky Lab and disclosed in January. The security firm warned at the time that millions of industrial and corporate systems may be exposed to remote attacks due to their use of the vulnerable Gemalto product.

Siemens warned at the time that more than a dozen versions of the SIMATIC WinCC Add-On were affected. The company has now informed customers that some of its building automation products are impacted as well, including Siveillance Identity and SiteIQ Analytics, and Desigo XWP, CC, ABT, Configuration Manager, and Annual Shading.

The German industrial giant has advised customers to update the LMS to version 2.1 SP4 (2.1.681) or newer in order to address the vulnerabilities.

A separate advisory published by Siemens this week informs customers of a critical vulnerability affecting TIM 1531 IRC, a communication module launched by the company nearly a year ago. The module connects remote stations based on SIMATIC controllers to a telecontrol control center through the Sinaut ST7 protocol.

“A remote attacker with network access to port 80/tcp or port 443/tcp could perform administrative operations on the device without prior authentication. Successful exploitation could allow to cause a denial-of-service, or read and manipulate data as well as configuration settings of the affected device,” Siemens explained.

The company said there had been no evidence of exploitation when it published its advisory on Tuesday.

A third advisory published by Siemens this week describes a high severity flaw discovered by external researchers in SIMATIC PCS 7, SIMATIC WinCC, SIMATIC WinCC Runtime Professional, and SIMATIC NET PC products.

The vulnerability allows an attacker to cause a DoS condition on the impacted products by sending specially crafted messages to their RPC service. Patches or mitigations have been made available by Siemens for each of the affected systems.


jRAT Leverages Crypter Service to Stay Undetected
28.3.2018 securityweek Virus

In recently observed attacks, the jRAT backdoor was using crypter services hosted on the dark web to evade detection, Trustwave security researchers have discovered.

Also known as Adwind, AlienSpy, Frutas, Unrecom, and Sockrat, the jRAT malware is a cross-platform, Java-based Remote Access Trojan (RAT) discovered several years ago that had already infected nearly half a million users between 2013 and 2016. The threat has been hitting organizations all around the world and was recently spotted as part of an ongoing campaign.

jRAT gives its operators complete remote control of the infected system. With the help of this backdoor, attackers can capture keystrokes, exfiltrate credentials, take screenshots, and access the computer’s webcam, in addition to executing binaries on the victim’s system.

“It is highly configurable to whatever the attacker's motive may be. jRAT has been commercially available to the public as a RAT-as-a-service business model for as little as $20 for a one-month use,” Trustwave notes.

Starting early this year, Trustwave security researchers observed a spike in spam messages delivering the malware and also noticed that security reports tend to misclassify the Java-based RAT due to the use of said crypter service.

The malware was being distributed through malicious emails carrying either an attachment or a link. The emails would pose as invoices, quotation requests, remittance notices, shipment notifications, and payment notices.

The recently analyzed samples, the researchers say, revealed that the same tool or service was used to obfuscate all of them. Furthermore, all of them attempted to download a JAR file from a Tor domain that turned out to be a service hosted by QUAverse.

QUAverse (QUA) is linked to QRAT, a RAT-as-a-service platform developed in 2015 which is seen as one of jRAT's competitors. The presence of these artifacts could set investigators on the wrong path, but the de-obfuscated and decrypted samples were found to be indeed jRAT samples.

What Trustwave discovered was that jRAT uses a service from QUAverse called Qrypter. This is a Crypter-as-a-Service platform that makes Java JAR applications fully undetectable by morphing variants of the same file. For a certain fee, the service morphs a client's JAR file periodically to avoid being detected by antivirus products.

“We believe that the service monitors multiple AV products pro-actively and once it determines that the malware variant is being detected, it then re-encrypts the file thus producing a new mutant variant that is undetectable for a certain time period,” Trustwave notes.

When executed, jRAT downloads a new, undetectable copy of itself from the service and drops it on the infected machine's %temp% directory. The malware then executes and installs the newly crypted jar file.
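
A conceptual sketch of why this defeats hash- and signature-based matching: re-encrypting an unchanged payload with a fresh key yields a different file hash every time, while a stub that knows the key recovers identical behavior. Qrypter's actual scheme is not public; the XOR below is a stand-in:

```python
# Toy illustration of crypter-style morphing: the same payload XOR-ed
# with a fresh random key produces a new file hash on every pass, so
# hash/signature matching fails. Qrypter's real scheme is not public.
import hashlib
import os

payload = b"...the unchanged malicious logic..."

def morph(data: bytes) -> bytes:
    key = os.urandom(16)
    body = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return key + body  # a real crypter prepends a decryption stub instead

for _ in range(3):
    variant = morph(payload)
    print(hashlib.sha256(variant).hexdigest()[:16], "<- new signature")
```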

By using the Qrypter service, the backdoor leverages a third-party crypter feature that should allow it to become fully undetectable, the security researchers point out.

“While jRAT actors have been actively spamming malicious JAR files for several months, one of the hurdles in infecting their target is how easily they are being detected. Perhaps using the Qrypter service makes it easier for them to evade email gateways and antivirus engines,” Trustwave notes.


Pink-haired Whistleblower at Heart of Facebook Scandal
28.3.2018 securityweek Social

Instantly recognizable with his pink hair and nose ring, Christopher Wylie claims to have helped create data analysis company Cambridge Analytica before turning whistleblower and becoming "the face" of the crisis engulfing Facebook.

Carole Cadwalladr, the Guardian journalist who worked with Wylie for a year on the story, described him as "clever, funny, bitchy, profound, intellectually ravenous, compelling. A master storyteller. A politicker. A data science nerd."

The bespectacled 28-year-old describes himself as "the gay Canadian vegan who somehow ended up creating Steve Bannon's psychological warfare tool," referring to Trump's former adviser, who reportedly had deep links with Cambridge Analytica (CA).

With Wylie's help, Cadwalladr revealed how CA scooped up data from millions of Facebook users in the US.

They then used the information to build political and psychological profiles, in order to create targeted messages for voters.

Facebook insists it did not know the data taken from its site were being used, but the revelations have raised urgent questions over how data of 50 million users ended up in CA's hands.

Shares of the tech giant have since tumbled, with $70 billion (56 billion euros) wiped off in 10 days.

- 'Walter Mitty' -

Wylie studied law and then fashion, before entering the British political sphere when he landed a job working for the Liberal Democrats.

Former Lib Dem colleague Ben Rathe had a less complimentary description of Wylie, tweeting that he "thinks he's Edward Snowden, when he's actually Walter Mitty" -- a reference to a fictional character with a vivid fantasy life.

Wylie became a research director for Strategic Communication Laboratories (SCL), the parent company of CA, in 2014.

"I helped create that company," he said of CA in an interview with several European newspapers.

"I got caught up in my own curiosity, in the work I was doing. It's not an excuse, but I found myself doing the research work I wanted to do, with a budget of several million, it was really very tempting," he told French daily Liberation.

Initially, he enjoyed the globetrotting lifestyle, meeting with ministers from around the world.

But the job took a dark turn when he discovered that his predecessor had died in a Kenyan hotel. He believes the victim paid the price when a "deal went sour".

"People suspected poisoning," he told a British parliamentary committee investigating "fake news" on Tuesday.

- 'Repair Facebook!' -

His appearance before MPs saw him swap his usual loud T-shirts for a sober suit and tie, producing hours of testimony against the firm that he left in 2014.

He said he eventually decided to speak out after US President Donald Trump's shock election victory, which he partly attributed to the misuse of personal data for political purposes.

Cambridge Analytica vigorously denies the charges levelled against it, saying that Wylie was merely "a part-time employee who left his position in July 2014" and had no direct knowledge of how the firm had operated since.

Wylie urged British MPs to dig deeper into the story, insisting that his concern was not political and was focussed on abuses in the democratic process -- including during the Brexit referendum campaign.

"I supported Leave, despite having pink hair and my nose ring," he said.

He claimed that various pro-Brexit organisations worked together to get around campaign finance rules, using the services of Aggregate IQ, a Canadian company linked to the SCL group.

Wylie believes that it is "very reasonable" to say that CA's activities may have swung the Brexit vote, although he stressed he was not anti-Facebook, anti-social media or anti-data.

"I don't say 'delete Facebook', but 'repair Facebook'," he told the European newspapers.

However, he admitted to MPs that he had "become the face" of the scandal.


Mozilla Isolates Facebook with New Firefox Extension
28.3.2018 securityweek Social

Mozilla today unveiled the "Facebook Container Extension", a new browser extension designed to help Firefox users reduce the ability of Facebook to track their activity across other web sites.

The new extension, Mozilla says, will help users gain more control over their data on the social platform by isolating their identity into a separate container. Because of that, Facebook would find it more difficult to track users’ activity on other websites via third-party cookies.

The Facebook Container add-on was launched in light of news that Facebook at one point allowed applications to harvest large amounts of data on users and their friends. It follows Mozilla’s announcement that it has paused Facebook advertising until the social network improves the privacy of its users.

The privacy scandal started with reports that Cambridge Analytica, the data analysis firm hired by Donald Trump's 2016 presidential campaign, harvested 50 million Facebook users’ profiles without their permission.

The social network has been under heavy fire since the news broke last week, despite having suspended the firm’s account. Many are losing trust in the platform, and the use of Facebook data to target voters has triggered global outrage.

This is what prompted Mozilla last week to pause Facebook advertising, despite Mark Zuckerberg’s assurance that steps will be taken to ensure a situation like the Cambridge Analytica one won’t happen again.

“Facebook knows a great deal about their two billion users — perhaps more intimate information than any other company does. They know everything we click and like on their site, and know who our closest friends and relationships are,” Mozilla said last week.

Now, the browser maker says users should be able to enjoy their time on Facebook as well as their browsing elsewhere. For that to happen, users need tools that limit the data others (Mozilla included) can collect about them. Accordingly, the browser won’t collect data from the use of the Facebook Container extension, except for information on how many times the extension is installed or removed.

The new extension, Mozilla claims, should provide users with the means to protect themselves from any side effects of usage.

“The type of data in the recent Cambridge Analytica incident would not have been prevented by Facebook Container. But troves of data are being collected on your behavior on the internet, and so giving users a choice to limit what they share in a way that is under their control is important,” the browser maker notes.

When installed, the extension deletes the user’s Facebook cookies and logs them out of the social platform. The next time they visit Facebook, the website will open in a new blue-colored browser tab (a container tab).

Users will be able to log into Facebook and use it as they normally would. When clicking on a non-Facebook link or navigating to a non-Facebook website in the URL bar, those pages load outside of the container.

When clicking on Facebook Share buttons on other browser tabs, the extension loads them within the Facebook container. However, when the buttons are clicked, Facebook receives information on the website that the user shared from.

“If you use your Facebook credentials to create an account or log in using your Facebook credentials, it may not work properly and you may not be able to login. Also, because you’re logged into Facebook in the container tab, embedded Facebook comments and Like buttons in tabs outside the Facebook container tab will not work,” Mozilla explains.

Because of that, Facebook can’t associate information about the activity of the user on websites outside of the platform to their Facebook identity. Thus, the social network won’t be able to use the activity collected off Facebook to send ads and other targeted messages.

“There’s a lot of value in your social data. It’s important to regularly review your privacy settings on all sites and applications that use it. The EFF has useful advice on how to keep your data where you want it to be, under more of your control,” Mozilla notes.

Facebook isn’t the only firm to collect data on users’ activity outside of its core service, but this is a problem users can address quickly: they are advised to review their privacy settings for each app they use regularly.


A flaw in the iOS camera QR code URL parser could expose users to attacks
28.3.2018 securityaffairs iOS

A vulnerability in the iOS Camera app could be exploited by hackers to redirect users to a malicious website; the issue resides in the built-in QR code reader.

The flaw affects the latest Apple iOS 11 for iPhone, iPad, and iPod touch devices.

The problem is tied to a new feature introduced in iOS 11 that allows users to automatically read QR codes while using the camera app, without requiring any third-party QR code reader app.

To read a QR code, users open the Camera app on their Apple device and point the iPhone or iPad at the code; if the code contains a URL, the system shows a notification with the link address. Tapping the notification opens the URL in the Safari browser, but according to security researcher Roman Mueller, who discovered the vulnerability, the URL actually visited can differ from the one displayed.

The expert discovered that the URL parser of the iOS camera app’s built-in QR code reader doesn’t detect the hostname in the URL the same way Safari does, making it possible to display one hostname in the notification while redirecting users to a different, potentially malicious website.

“The URL parser of the camera app has a problem here detecting the hostname in this URL in the same way as Safari does.” wrote the expert in a blog post.

“It probably detects “xxx\” as the username to be sent to “facebook.com:443”.
While Safari might take the complete string “xxx\@facebook.com” as a username and “443” as the password to be sent to infosec.rm-it.de.”
“This leads to a different hostname being displayed in the notification compared to what actually is opened in Safari.”

Mueller created a QR code containing the following URL:

https://xxx\@facebook.com:443@infosec.rm-it.de/

When he scanned it he noticed that the device was showing the following notification:

Open “facebook.com” in Safari

Once tapped, it opened https://infosec.rm-it.de/ instead of Facebook.
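
The ambiguity can be reproduced with Python's standard URL parser (an illustration of the parsing rule, not Apple's code): everything before the last "@" in the authority is treated as userinfo, so the real host is infosec.rm-it.de, while a display routine that splits at the first "@"-like boundary shows facebook.com:

```python
# How a standards-following parser reads Mueller's crafted URL: the
# userinfo runs up to the LAST "@", so the true host is infosec.rm-it.de.
from urllib.parse import urlsplit

url = "https://xxx\\@facebook.com:443@infosec.rm-it.de/"
parts = urlsplit(url)
print(parts.username)   # xxx\@facebook.com
print(parts.password)   # 443
print(parts.hostname)   # infosec.rm-it.de  <- what Safari actually opens
```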

The expert successfully tested the issue on his iPhone X running iOS 11.2.6.

[Image: QR code hack]

Mueller reported the vulnerability to the Apple security team on December 23, 2017, but at the time of writing the flaw had still not been fixed.

The bug is dangerous and opens the door to numerous attack scenarios.


VPNs leak users’ IPs via WebRTC. I’ve tested seventy VPN providers and 16 of them leak users’ IPs via WebRTC (23%)
28.3.2018 securityaffairs Cyber

Cyber security researcher Paolo Stagno (aka VoidSec) has tested seventy VPN providers and found that 16 of them leak users’ IPs via WebRTC (23%).
You can check if your VPN leaks by visiting: http://ip.voidsec.com
Here you can find the complete list of the VPN providers that I’ve tested: https://docs.google.com/spreadsheets/d/1Nm7mxfFvmdn-3Az-BtE5O0BIdbJiIAWUnkoAF_v_0ug/edit#gid=0
Add a comment or send me a tweet if you have updated results for any of the VPNs for which I am missing details (especially the “$$$” ones, since I cannot subscribe to 200 different paid VPN services :P).
Some time ago, during a small event in my city, I presented a small piece of research on “decloaking” the true IP of a website visitor by (ab)using the WebRTC technology.

What is WebRTC?
WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs.

It includes the fundamental building blocks for high-quality communications on the web, such as the network, audio, and video components used in voice and video chat applications. When implemented in a browser, these components can be accessed through a JavaScript API, enabling developers to easily implement their own RTC web apps.

STUN/ICE
This is the component that allows calls to use the STUN and ICE mechanisms to establish connections across various types of networks. The STUN server sends back a response that contains the public IP address and port of the client.

These STUN (Session Traversal Utilities for NAT) servers are used by VPNs to translate a local home IP address to a new public IP address and vice-versa. To do this, the STUN server maintains a table of both your VPN-based public IP and your local (“real”) IP during connectivity (home routers perform a similar function in translating private IP addresses to public ones and back).

WebRTC allows requests to be made to STUN servers, which return the “hidden” home IP address as well as the local network addresses of the system the user is on.

The results of the requests can be accessed using JavaScript, but because they are made outside the normal XMLHttpRequest procedure, they are not visible in the developer console.

The only requirement for this de-anonymizing technique to work is WebRTC and JavaScript support from the browser.
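
For the curious, this is what the exchange looks like at the protocol level. The sketch below sends one raw STUN Binding Request (RFC 5389) and decodes the XOR-MAPPED-ADDRESS from the reply; browsers do this through the JavaScript RTCPeerConnection API rather than Python, and stun.l.google.com is just one well-known public STUN server:

```python
# One raw STUN Binding Request (RFC 5389): the server echoes back the
# public address it saw, which is exactly what a leaky setup exposes.
import os
import socket
import struct

MAGIC = 0x2112A442  # fixed STUN magic cookie

def stun_public_address(host="stun.l.google.com", port=19302):
    # 20-byte header: type=Binding Request, length=0, cookie, transaction ID
    req = struct.pack("!HHI12s", 0x0001, 0, MAGIC, os.urandom(12))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(req, (host, port))
        data, _ = s.recvfrom(2048)

    pos = 20  # skip the STUN header, then walk the attributes
    while pos < len(data):
        atype, alen = struct.unpack_from("!HH", data, pos)
        if atype == 0x0020:  # XOR-MAPPED-ADDRESS
            xport, xaddr = struct.unpack_from("!xxHI", data, pos + 4)
            ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC))
            return ip, xport ^ (MAGIC >> 16)
        pos += 4 + alen + (-alen % 4)  # attributes are 32-bit aligned
    return None

print(stun_public_address())
```

If the printed address is your real home IP while the VPN tunnel is up, the setup is leaking.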

VPN and WebRTC
This functionality can also be used to de-anonymize and trace users behind common privacy protection services such as VPNs, SOCKS proxies, HTTP proxies and, in the past, Tor.

Browsers that have WebRTC enabled by default:

Mozilla Firefox
Google Chrome
Google Chrome on Android
Internet (Samsung Browser)
Opera
Vivaldi
23% of the tested VPN and proxy services disclosed the real IP address of their visitors, making users traceable.

The following providers leak users’ IPs:

BolehVPN (USA only)
ChillGlobal (Chrome and Firefox plugin)
Glype (depends on the configuration)
hide-me.org
Hola!VPN
Hola!VPN Chrome extension
HTTP proxy navigation in browsers that support WebRTC
IBVPN browser add-on
PHP Proxy
phx.piratebayproxy.co
psiphon3 (not leaking if using L2TP/IP)
PureVPN
SOCKS proxy on browsers with WebRTC enabled
SumRando Web Proxy
Tor as proxy on browsers with WebRTC enabled
Windscribe add-ons

You can find the complete spreadsheet of tested VPN providers here: https://docs.google.com/spreadsheets/d/1Nm7mxfFvmdn-3Az-BtE5O0BIdbJiIAWUnkoAF_v_0ug/edit#gid=0


Stay anonymous while surfing:
Some tips to follow to protect your IP while browsing:

Disable WebRTC (in Firefox, set media.peerconnection.enabled to false in about:config)
Disable JavaScript (or at least some functions; use NoScript)
Disable canvas rendering (Web API)
Always set a DNS fallback for every connection/adapter
Always kill all your browser instances before and after a VPN connection
Clear browser cache, history, and cookies
PoC:
You can check if your VPN leaks through this PoC: http://ip.voidsec.com

PoC Code:
I’ve updated Daniel Roesler’s code to make it work again; you can find it on GitHub.
