

France Fines Google, Facebook €210 Million Over Privacy Violating Tracking Cookies
14.1.2022
Privacy Thehackernews
The Commission nationale de l'informatique et des libertés (CNIL), France's data protection watchdog, has slapped Facebook (now Meta Platforms) and Google with fines of €150 million ($170 million) and €60 million ($68 million) for violating E.U. privacy rules by failing to provide users with an easy option to reject cookie tracking technology.

"The websites facebook.com, google.fr and youtube.com offer a button allowing the user to immediately accept cookies," the authority said. "However, they do not provide an equivalent solution (button or other) enabling the Internet user to easily refuse the deposit of these cookies."

Facebook told TechCrunch that it was reviewing the ruling, while Google said it's working to change its practices in response to the CNIL fines.

HTTP cookies are small pieces of data that a website creates while a user is browsing and that the user's web browser stores on the user's computer or other device. They are used to track online activity across the web and to store information about browsing sessions, including logins and details entered in form fields such as names and addresses.
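
To make the mechanics concrete, here is a minimal sketch using Python's standard http.cookies module; the cookie name, value and domain are invented placeholders, not real services.

# Minimal sketch of the Set-Cookie mechanics described above, using
# Python's standard library. The identifiers (uid, tracker.example)
# are placeholders.
from http.cookies import SimpleCookie

# What a tracking server sends: a Set-Cookie header the browser stores.
cookie = SimpleCookie()
cookie["uid"] = "a1b2c3d4"                     # unique visitor identifier
cookie["uid"]["domain"] = ".tracker.example"   # sent to all subdomains
cookie["uid"]["path"] = "/"
cookie["uid"]["max-age"] = 60 * 60 * 24 * 365  # persists for a year
print(cookie.output())
# -> Set-Cookie: uid=a1b2c3d4; Domain=.tracker.example; Max-Age=31536000; Path=/

# What the browser sends back on every later request to that domain,
# letting the tracker re-identify the visitor across sites that embed it:
incoming = SimpleCookie()
incoming.load("uid=a1b2c3d4")
print(incoming["uid"].value)  # -> a1b2c3d4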

Specifically, the CNIL found fault with the fact that the two platforms require several clicks to reject all cookies while offering no single control to refuse them all, effectively making it harder to reject cookies than to accept them.

In order to refuse cookies, Facebook makes users click a button titled "Accept Cookies". You have to laugh. pic.twitter.com/5b6cpQ2Usd

— Mark Di Stefano (@MarkDiStef) January 6, 2022
This dark pattern affects the freedom of consent, the data protection agency said, adding that because users cannot reject cookies as easily as they can accept them, their choice is steered in favor of consent.

Along with imposing monetary penalties against Google and Meta, the CNIL has also ordered the tech giants to alter how they currently present cookie choices and provide users in the country with a simple means of refusing cookies within three months, or risk facing further fines of €100,000 per day of delay.

While the fines won't make much of a dent in either company's revenues, this is not the first time European authorities have acted to punish Big Tech for contravening E.U. regulations. In December 2020, the regulator fined Google €100 million and Amazon Europe €35 million for placing advertising cookies on users' devices without seeking their prior consent.

Then in November 2021, Italy's competition authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), fined Apple and Google €10 million each for not providing clear and immediate information on the acquisition and use of user data for commercial purposes during the account creation phase.


EU, US Make New Attempt for Data Privacy Deal
27.3.2021 
Privacy  Securityweek

Europe and the United States will use a thaw in ties to strike a pact that would allow for the exchange of private data across the Atlantic, replacing previous agreements struck down by an EU court.

Facebook, Google, Microsoft and thousands of other companies want such a deal to keep the internet traffic flowing without facing significant legal jeopardy over European privacy laws.

Last year, the European Court of Justice "raised important questions on how to ensure protection of privacy when data crosses the Atlantic," EU Justice Commissioner Didier Reynders said in a speech to the American Chamber of Commerce to the EU.

"Finding this solution is a priority in Brussels and in Washington DC," he added a day after stepping up talks with US Commerce Secretary Gina Raimondo.

As "like-minded partners" the two sides "should be able to find appropriate solutions on principles that are cherished on both sides of the Atlantic," he said.

The third attempt at a new data arrangement would succeed deals that were invalidated after successful lawsuits arguing that US security laws violated the fundamental rights of EU citizens.

The legal onslaught was led by Max Schrems, an Austrian activist and lawyer who began his campaign after the revelations by Edward Snowden of mass digital spying by US agencies.

Businesses have since resorted to legally uncertain workarounds to keep the data flow moving, with hope that the two sides could come up with something stronger in the long term.

Reynders said a deal would require that "complex and sensitive" issues are solved "that relate to the delicate balance between national security and privacy".

The deal would have to cover important issues, including guarantees of access to courts and clearly enforceable individual rights.

"The only way to achieve this is to develop a new arrangement that is fully compliant with the (EU court's) Schrems II judgement. This is in our mutual interest," Reynders added.

The EU has concluded similar agreements with 12 countries and entities, including Japan, Switzerland, Canada and Israel, and is in the process of concluding negotiations with South Korea.

In February, Brussels gave an initial green light to the transfer of personal data to the UK, which left the EU's direct jurisdiction this year after a post-Brexit transition period.


Firefox 87 Adds Stronger User Privacy Protections
24.3.2021 Privacy  Securityweek

Mozilla today announced the release of Firefox 87 in the stable channel, fitted with a new intelligent tracker-blocking mechanism.

Called SmartBlock, the feature works in Firefox Private Browsing and Strict Mode and is meant to improve users’ browsing experience by fixing pages that Mozilla’s tracking protections would otherwise break.

Firefox has had a built-in Content Blocking feature since 2015, providing increased protections to those who use Private Browsing windows and Strict Tracking Protection Mode. The feature was designed to block third-party scripts, images, and other content if loaded from known cross-site tracking companies.

Thus, Firefox Private Browsing windows could prevent these companies from tracking users across the web, but the privacy protections often resulted in the blocking of components essential for the proper functioning of some websites.

Some of the effects users have been experiencing include poor website performance, images that would not appear on the web page, certain features not working, and even pages that would fail to load entirely.

“To reduce this breakage, Firefox 87 is now introducing a new privacy feature we are calling SmartBlock. SmartBlock intelligently fixes up web pages that are broken by our tracking protections, without compromising user privacy,” Mozilla announced.

To improve user experience, SmartBlock provides local stand-ins for the third-party tracking scripts that are blocked. Designed to “behave just enough like the original ones,” these scripts ensure that websites load and that their functionality is intact.

With the SmartBlock stand-ins bundled with Firefox, no third-party tracking content is loaded, thus fully preventing potential tracking attempts. SmartBlock automatically replaces specific common scripts that are classified as trackers on the Disconnect Tracking Protection List.
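
Firefox's actual stand-ins are JavaScript shims bundled with the browser; as a language-neutral illustration of the pattern, the Python sketch below shows how replacing a blocked tracker client with a local no-op object exposing the same interface keeps page logic working while sending nothing over the network. All names here are invented.

# Conceptual sketch only: the real stand-ins are JavaScript shims, not
# Python. The pattern: swap a blocked tracker for a local no-op object
# with the same interface, so dependent code keeps working but no data
# leaves the device.
class ThirdPartyAnalytics:
    """The original tracker script (would send data to its servers)."""
    def track(self, event: str, data: dict) -> None:
        raise RuntimeError("network request to a tracking domain")

class AnalyticsStandIn:
    """Local replacement: same interface, no network activity."""
    def track(self, event: str, data: dict) -> None:
        pass  # accept the call, record and send nothing

def load_tracker(on_blocklist: bool):
    # The browser substitutes the stand-in when the script's origin
    # appears on a tracking-protection list.
    return AnalyticsStandIn() if on_blocklist else ThirdPartyAnalytics()

analytics = load_tracker(on_blocklist=True)
analytics.track("page_view", {"path": "/"})  # page logic doesn't break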

The new browser release also brings along a stricter, more privacy-focused Referrer Policy, where the browser, by default, “will trim path and query string information from referrer headers to prevent sites from accidentally leaking sensitive user data.”

HTTP Referrer headers that browsers send to websites (such as the full URL of the referring document) with navigation or subresource requests may include information used for analytics, logging, or cache optimization, but can also carry private user data, including details of a user’s account on a website.

The Referrer Policy was meant to give websites a mechanism to protect their users’ privacy, but many websites haven’t set one, in which case browsers fall back to the default ‘no-referrer-when-downgrade’ policy: the full URL, including path and query, is sent except when navigating to a less secure destination.

Firefox 87 sets the default Referrer Policy to ‘strict-origin-when-cross-origin’, meaning that user sensitive information that is accessible in the URL will always be trimmed, for all “navigational requests, redirected requests, and subresource (image, style, script) requests.” The new policy will be enforced automatically upon updating to Firefox 87.
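
The following rough sketch models the trimming behavior of the two policies, based on their published semantics; it is an approximation for illustration, not Firefox's implementation, and the URLs are placeholders.

# Approximate model of the two referrer policies discussed above.
from urllib.parse import urlsplit

def referrer(from_url: str, to_url: str, policy: str) -> str | None:
    src, dst = urlsplit(from_url), urlsplit(to_url)
    same_origin = (src.scheme, src.netloc) == (dst.scheme, dst.netloc)
    downgrade = src.scheme == "https" and dst.scheme == "http"
    full = from_url.split("#")[0]            # referrers never carry fragments
    origin = f"{src.scheme}://{src.netloc}/"
    if policy == "no-referrer-when-downgrade":       # the old default
        return None if downgrade else full
    if policy == "strict-origin-when-cross-origin":  # Firefox 87 default
        if downgrade:
            return None                      # send nothing on https -> http
        return full if same_origin else origin
    raise ValueError("policy not modeled in this sketch")

# A cross-site request no longer leaks the path and query string:
print(referrer("https://shop.example/cart?user=42",
               "https://ads.example/pixel",
               "strict-origin-when-cross-origin"))  # -> https://shop.example/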


Millions of People Can Lose Sensitive Data through Travel Apps, PrivacySavvy Reports
19.3.2021
Privacy  Securityaffairs

According to a report published on March 16 by researchers at PrivacySavvy, many travel companies expose users’ data through their booking apps.
PrivacySavvy is a digital security company on a mission to educate internet users about privacy issues affecting their digital lives.

During a 2021 “apps mapping project,” they discovered that travel apps are not as secure as they should be for the millions of people who use them.

According to the team, the apps mapping project aims to facilitate the safety of web applications that people use every day.

Popular Travel Apps Expose Users

In research led by two PrivacySavvy researchers, Huynh Chen and Sarmad Khan, the team tested 20 popular travel apps.

During the test, the researchers aimed to understand how these companies manage users’ security and privacy risks. Unfortunately, they discovered that these leading apps lack the basic security measures needed to protect their users’ data.

Most of the popular travel apps are exposing their users by enabling third-party access to their servers. Since they leave these servers open, users’ data is exposed to anyone interested in gathering such data.

PrivacySavvy fears that nefarious third parties could hack users’ accounts and make off with sensitive information if these companies fail to address what the team called “server-side security vulnerabilities.”

Based on its evaluation, the research team led by Huynh and Sarmad discovered that these travel apps are not upholding adequate security standards in their operations. More importantly, PrivacySavvy found that these vulnerabilities were more prominent in the apps’ subdomains.

More Than 100 Million Users Could be Compromised

Based on the PrivacySavvy report, up to 105 million travel app users are susceptible to losing sensitive information if hackers target the apps. The researchers withheld the names of the specific travel apps they tested due to legal issues and the risk of compromise if hackers obtained such information.

However, the team revealed that they picked the apps based on the number of downloads and positive reviews. They also disclosed that they concentrated their investigation mainly on booking and ride-sharing apps, and didn’t evaluate apps belonging to car rental companies, individual hotels, or airlines.

Fortunately, they confirmed that not all the apps evaluated had “server-side security vulnerabilities.” And while some of the affected companies have rectified the issues, many are yet to do so.

Consequences of Server-side Security Vulnerabilities

One of the main reasons behind the investigation is to prevent sensitive data exposure. According to PrivacySavvy, sensitive data exposure is when a company, an entity, or an app exposes users’ data carelessly.

Many people are more familiar with data breaches, but a breach is different from data exposure. A data breach occurs when a hacker attacks a company, app, or entity with the aim of stealing users’ data.

But sensitive data exposure is when users’ data becomes publicly accessible because the owner failed to put safety measures in place to protect the database. Many factors may contribute to private data exposure, such as software flaws, or weak or absent encryption.

In such cases, some of the data that could be exposed includes:

Bank account numbers
Phone numbers
Home addresses
Credit card details
Healthcare data
Dates of birth
Session tokens
Usernames & Passwords, etc.
The server-side vulnerability in these evaluated travel apps can expose the above-listed information to anyone who exploits it. Since the vulnerabilities are in their subdomains, a wrongdoer can pass through them to pull the .git directory, collect sensitive information, and carry out a sophisticated attack on the database.
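
PrivacySavvy has not published its exact methodology, but this class of flaw is straightforward to check for on infrastructure you own or are authorized to test. The sketch below probes for a readable .git/HEAD file, the conventional first sign that a repository's metadata is exposed; the URL is a placeholder.

# Hedged sketch: check whether a .git directory is publicly readable.
# Only run against systems you are authorized to assess.
import urllib.error
import urllib.request

def git_dir_exposed(base_url: str) -> bool:
    """Return True if <base_url>/.git/HEAD is publicly readable."""
    url = base_url.rstrip("/") + "/.git/HEAD"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            head = resp.read(64).decode("utf-8", errors="replace").strip()
    except (urllib.error.URLError, TimeoutError):
        return False
    # A real HEAD file is either a ref pointer or a bare commit hash.
    return head.startswith("ref: ") or len(head) == 40

print(git_dir_exposed("https://booking.example.com"))  # placeholder domain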

How to Avoid Data Exposure
According to the PrivacySavvy researchers, both the companies and the users have some roles in preventing data exposure.

First of all, companies should:

Secure both their main domains and subdomains
Protect files containing sensitive information
Never store such files on production servers
Use suitable access rules
Shut down systems without authentication requirements after use
For the users, the research team recommends contacting the travel companies they’ve used recently to find out whether those companies are in any way exposing their sensitive information. With that, users can galvanize the companies into action to fix any such vulnerabilities.


Facial Recognition Company Sued by California Activists
12.3.2021
Privacy  Securityweek

Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission.

The lawsuit, filed Tuesday in Alameda County Superior Court in the San Francisco Bay Area, contends that the New York-based firm violates California’s constitution and seeks an injunction to bar it from collecting biometric information in California and requiring it to delete data on Californians.

The lawsuit says the company has built “the most dangerous” facial recognition database in the nation, has fielded requests from more than 2,000 law enforcement agencies and private companies, and has amassed a database nearly seven times larger than the FBI’s.

The lawsuit was filed by four activists and the groups Mijente and Norcal Resist, who have supported causes such as Black Lives Matter and have been critical of the policies of U.S. Immigration and Customs Enforcement, which has a contract with Clearview AI.

“Clearview has provided thousands of governments, government agencies, and private entities access to its database, which they can use to identify people with dissident views, monitor their associations, and track their speech,” the lawsuit contends.

The lawsuit said Clearview AI scrapes dozens of internet sites, such as Facebook, Twitter, Google and Venmo, to gather facial photos. Scraping involves the use of computer programs to automatically scan and copy data, which the lawsuit says is analyzed by Clearview AI to identify individual biometrics such as eye shape and size that are then put into a “faceprint” database that clients can use to ID people.

The images scraped include those posted not only by individuals and their family and friends but also those of people who are inadvertently captured in the background of strangers’ photos, according to the lawsuit.

The company also offers its services to law enforcement even in cities that ban the use of facial recognition, the lawsuit alleges.

Several cities around the country, including the Bay Area cities of Alameda, San Francisco, Oakland and Berkeley, have limited or banned the use of facial recognition technology by local law enforcement.

“Clearview AI complies with all applicable law and its conduct is fully protected by the First Amendment,” said a statement from attorney Floyd Abrams, representing the company.

The company has said it saw law enforcement use of its technology jump 26% following January’s deadly riot at the U.S. Capitol.

Facial recognition systems have faced criticism because of their mass surveillance capabilities, which raise privacy concerns, and because some studies have shown that the technology is far more likely to misidentify Blacks and other people of color than whites, which has resulted in mistaken arrests.

However, Clearview AI’s CEO, Hoan Ton-That, said in a statement that “an independent study has indicated that Clearview AI has no racial bias.”

“As a person of mixed race, having non-biased technology is important to me,” he said.

He also argued that the use of accurate facial recognition technology can reduce the chance of wrongful arrests.

The lawsuit said Facebook, Twitter, Google and other social media firms have asked Clearview AI to stop scraping images because it violated their terms of service with users.

Clearview AI also is facing other challenges. A lawsuit filed in Illinois alleges the company violates that state’s biometric privacy act, while privacy watchdogs in both Canada and the European Union have issued statements of concern.

Clearview stopped operations in Canada last year. But privacy commissioners this year asked the firm to remove data on Canadian citizens, with one commissioner arguing that the system puts all Canadians “continually in a police lineup.”


ByteDance agreed to pay $92M in US privacy Settlement for TikTok data collection
2.3.2021 
Privacy  Securityaffairs

ByteDance, the company behind TikTok, agreed to pay $92 million in a settlement to U.S. users over illegal data collection.
The settlement has yet to be approved by a federal judge. The Chinese firm was accused of failing to get users’ consent to collect data in compliance with the Illinois biometric privacy law.

Illinois is the only state that establishes penalties for such violations of data collection rules and allows class actions against companies that break the law. Illinois, Texas, and Washington are the only US states with specific laws governing the use of biometric data, but Illinois alone permits individual lawsuits.

“While we disagree with the assertions, rather than go through lengthy litigation, we’d like to focus our efforts on building a safe and joyful experience for the TikTok community,” reads an email statement issued by TikTok.

In February 2020, Facebook agreed to settle for $550 million under the same law.


TikTok owner ByteDance to pay $92M in US privacy Settlement
27.2.2021 
Privacy  Securityweek

TikTok’s Chinese parent company ByteDance has agreed to pay $92 million in a settlement to U.S. users who are part of a class-action lawsuit alleging that the video-sharing app failed to get their consent to collect data in violation of a strict Illinois privacy law.

The federal lawsuit alleged that TikTok broke the Illinois biometric privacy law, which allows suits against companies that harvest consumer data without consent, including via facial and fingerprint scanning. Illinois is the only state with a law that allows people to seek monetary damages for such unauthorized data collection.

“While we disagree with the assertions, rather than go through lengthy litigation, we’d like to focus our efforts on building a safe and joyful experience for the TikTok community,” TikTok said in an emailed statement.

Facebook agreed to a $550 million settlement under the same law last February. The TikTok settlement must still be approved by a federal judge.

Privacy advocates have praised the law as the nation’s strongest form of protection in the commercial use of such data, and it has survived ongoing efforts by the tech industry and other businesses to weaken it.

Illinois is one of three states that have laws governing the use of biometric data. But the other two, Texas and Washington, don’t permit individual lawsuits, instead delegating enforcement to their attorneys general.


Privacy Faces Risks in Tech-Infused Post-Covid Workplace
23.2.2021
Privacy  Securityweek

People returning to work following the long pandemic will find an array of tech-infused gadgetry designed to improve workplace safety, but which could pose risks for long-term personal and medical privacy.

Temperature checks, distance monitors, digital "passports," wellness surveys and robotic cleaning and disinfection systems are being deployed in many workplaces seeking to reopen.

Tech giants and startups are offering solutions ranging from computer-vision detection of vital signs to wearables that can offer early indications of the onset of Covid-19, and apps that keep track of health metrics.

Salesforce and IBM have partnered on a "digital health pass" to let people share their vaccination and health status on their smartphone.

Clear, a tech startup known for airport screening, has created its own health pass which is being used by organizations such as the National Hockey League and MGM Resorts.

Fitbit, the wearable tech maker recently acquired by Google, has its own "Ready for Work" program that includes daily check-ins using data from its devices.

Fitbit is equipping some 1,000 NASA employees with wearables as part of a pilot program which requires a daily log-in using various health metrics which will be tracked by the space agency.

Microsoft and insurance giant United HealthCare have deployed a ProtectWell app which includes a daily symptom screener, and Amazon has deployed a "distance assistant" in its warehouses to help employees maintain safe distances.

And a large coalition of technology firms and health organizations are working on a digital vaccination certificate, which can be used on smartphones to show evidence of inoculation for Covid-19.

- 'Blurs the lines' -

With these systems, employees may face screenings even as they enter a building lobby, and monitoring in elevators, hallways and throughout the workplace.

The monitoring "blurs the line between people's workplace and personal lives," said Darrell West, a Brookings Institution vice president with the think tank's Center for Technology Innovation.

"It erodes longstanding medical privacy protections for many different workers."

A report last year by the consumer activist group Public Citizen identified at least 50 apps and technologies released during the pandemic "marketed as workplace surveillance tools to combat Covid-19."

The report said some systems go so far as to flag inadequate hand-washing by identifying people who may not spend enough time in front of a sink.

"The invasion of privacy that workers face is alarming, especially considering that the effectiveness of these technologies in mitigating the spread of Covid-19 has not yet been established," the report said.

The group said there should be clear rules on collection and storage of data, with better disclosure to employees.

- A delicate balance -

Employers face a delicate balance as they try to ensure workplace safety without intruding on privacy, said Forrest Briscoe, professor of management and organization at Penn State University.

Briscoe said there are legitimate reasons and precedents for requiring proof of vaccination. But these sometimes conflict with medical privacy regulations which limit a company's access to employee health data.

"You don't want the employer accessing that information for work-related decisions," Briscoe said.

Briscoe said many employers are relying on third-party tech vendors to handle the monitoring, but that has its risks as well.

"Using third-party vendors will keep the data separate," he said.

"But for some companies their business model involves gathering data and using it for some monetizable purpose and that poses a risk to privacy."

The global health crisis has inspired startups around the world to seek innovative ways to limit virus transmission, with some of those products shown at the 2021 Consumer Electronics Show.

Taiwan-based FaceHeart demonstrated software which can be installed in cameras for contactless measurement of vital signs to screen for shortness of breath, high fever, dehydration, elevated heart rate and other symptoms which are early indicators of Covid-19.

Drone maker Draganfly showcased camera technology which can be used to offer alerts on social distancing, and also detect changes in people's vital signs which may be early indicators of Covid-19 infection.

A programmable robot from Misty Robotics, also shown at CES, can be adapted as a health check monitor and can also be designed to disinfect frequently used surfaces like door handles, according to the company.

But there are risks in relying too much on technologies which may be unproven or inaccurate, such as trying to detect fevers with thermal cameras among moving people, said Jay Stanley, a privacy researcher and analyst with the American Civil Liberties Union.

"Employers have a legitimate interest in safeguarding workplaces and keeping employees healthy in the context of the pandemic," Stanley said.

"But what I would worry about is employers using the pandemic to pluck and store information in a systematic way beyond what is necessary to protect health."


Tougher EU Privacy Rules Loom for Messenger, Zoom
11.2.2021 
Privacy  Securityweek

Messaging apps such as Messenger or WhatsApp and video calls on Zoom face stricter privacy rules in Europe, after a draft law passed a key EU hurdle on Wednesday.

The EU's 27 member states approved a proposal that had been stuck since 2017, with countries split between those wanting strict privacy online and others wanting to give leeway to law enforcement and advertisers.

Portugal, which currently holds the EU's rotating presidency, submitted a compromise proposal that was approved by qualified majority at a meeting in Brussels.

"The path to the council position has not been easy," Portugal's minister of infrastructure Pedro Nuno Santos said.

"But we now have a mandate that strikes a good balance between solid protection of the private life of individuals and fostering the development of new technologies and innovation."

France, seeking to give its police forces stronger tools to fight terrorism, wants to limit the law's curbs on access to private data.

The fight against child pornography was also a major concern of many member states.

But Germany supported far more robust privacy rules, with fewer exceptions.

In the approved text, member states agreed that service providers are allowed "to safeguard the prevention, investigation, detection or prosecution of criminal offences".

In addition, companies such as Facebook and Google can continue to process metadata of their users, but only with consent and if the information is made anonymous.

The final text also lent support to the advertising industry and abandoned a plan to ban so-called cookies that closely track user activity online.

The proposal updates existing EU rules that date back to 2002, under which strict privacy protection is only applied to text messages and voice calls provided by traditional telecoms, sparing tech giants.

Portugal will now negotiate with the European Parliament on a final version of the plan, which would then need ratification by MEPs and the 27 member states.

But the European Parliament's lead rapporteur overseeing the negotiation warned that the talks would be rigorous.

"It is to be feared that the industry's attempts to undermine the directive over the past years have borne fruit -- they've had enough time to do that," Birgit Sippel, a German MEP from the centre left S&D group, said.

"We must now analyse in detail whether the proposals of the member states really contribute to better protecting the private communication of users online, or instead primarily serve the business models of some digital corporations."


Google Moves Away From Diet of 'Cookies' to Track Users
8.2.2021 
Privacy  Securityweek

Google is weaning itself off user-tracking "cookies" which allow the web giant to deliver personalized ads but which also have raised the hackles of privacy defenders.

Last month, Google unveiled the results of tests showing an alternative to the longstanding tracking practice, claiming it could improve online privacy while still enabling advertisers to serve up relevant messages.

"This approach effectively hides individuals 'in the crowd' and uses on-device processing to keep a person’s web history private on the browser," Google product manager Chetna Bindra explained in unveiling the system called Federated Learning of Cohorts (FLoC).

"Results indicate that when it comes to generating interest-based audiences, FLoC can provide an effective replacement signal for third-party cookies."

Google plans to begin testing the FLoC approach with advertisers later this year with its Chrome browser.

"Advertising is essential to keeping the web open for everyone, but the web ecosystem is at risk if privacy practices do not keep up with changing expectations," Bindra added.

Google has plenty of incentive for the change. The US internet giant has been hammered by critics over user privacy, and is keenly aware of trends for legislation protecting people's data rights.

Growing fear of cookie-tracking has prompted support for internet rights legislation such as GDPR in Europe and has the internet giant devising a way to effectively target ads without knowing too much about any individual person.

- 'Privacy nightmare' -

Some kinds of cookies -- which are text files stored when a user visits a website -- are a convenience for logins and browsing at frequently visited sites.

Anyone who has pulled up a registration page online only to have their name and address automatically entered where required has cookies to thank. But other kinds of cookies are seen by some as nefarious.

"Third-party cookies are a privacy nightmare," Electronic Frontier Foundation staff technologist Bennet Cyphers told AFP.

"You don't need to know what everyone has ever done just to serve them an ad."

He reasoned that advertising based on context can be effective; an example being someone looking at recipes at a cooking website being shown ads for cookware or grocery stores.

Safari and Firefox browsers have already done away with third-party cookies, but they are still used in the world's most popular browser, Chrome.

Chrome accounted for 63 percent of the global browser market last year, according to StatCounter.

"It's both a competitive and legal liability for Google to keep using third-party cookies, but they want their ad business to keep humming," Cyphers said.

Cyphers and others have worries about Google using a secret formula to lump internet users into groups and give them "cohort" badges of sorts that will be used to target marketing messages without knowing exactly who they are.

"There is a chance that it just makes a lot of privacy problems worse," Cyphers said, suggesting the new system could create "cohort" badges of people who may be targeted with little transparency..

"There is a machine learning black box that is going to take in every bit of everything you have even done in your browser and spit out a label that says you are this kind of person," Cyphers said.

"Advertisers are going to decode what those labels mean."

He expected advertisers to eventually deduce which labels include certain ages, genders or races, and which are people prone to extreme political views.

A business coalition called Marketers for an Open Web is campaigning against Google's cohort move, questioning its effectiveness and arguing it will force more advertisers into Google's "walled garden."

"Google’s proposals are bad for independent media owners, bad for independent advertising technology and bad for marketers," coalition director James Rosewell said in a release.


Clearview Facial-Recognition Technology Ruled Illegal in Canada

5.2.2021  Privacy  Threatpost

The company’s controversial practice of collecting and selling billions of faceprints was dealt a heavy blow by the Privacy Commissioner that could set a precedent in other legal challenges.

Canadian authorities have found that the collection of facial-recognition data by Clearview AI is illegal because it violates federal and provincial privacy laws, representing a win for individuals’ privacy and potentially setting a precedent for other legal challenges to the controversial technology.

A joint investigation of privacy authorities led by the Office of the Privacy Commissioner of Canada came to this conclusion Wednesday, claiming that the New York-based company’s scraping of billions of images of people from across the Internet represented mass surveillance and infringes on the privacy rights of Canadians, according to a release the Office posted online.

Moreover, the investigation found that Clearview had collected highly sensitive biometric information without people’s knowledge or consent, and then used and disclosed this personal information for purposes that would not be appropriate even if people had consented.

“It is completely unacceptable for millions of people who will never be implicated in any crime to find themselves continually in a police lineup,” Canada’s Privacy Commissioner Daniel Therrien said in a statement. “Yet the company continues to claim its purposes were appropriate, citing the requirement under federal privacy law that its business needs be balanced against privacy rights.”

Clearview, founded in 2017 by Australian entrepreneur Hoan Ton-That, is in the business of collecting what it calls “faceprints,” which are unique biometric identifiers similar to someone’s fingerprint or DNA profile, from photos people post online.

Since 2019, the company has come under considerable fire and faced legal challenges to its technology and business practices, part of a larger question of whether facial-recognition technologies being developed by myriad companies—including heavy hitters like Microsoft and IBM—should be legal at all.

Clearview to date has amassed a database of billions of these faceprints, which it sells to its clients. It also provides access to a smartphone app that allows clients to upload a photo of an unknown person and instantly receive a set of matching photos.

One of the biggest arguments Ton-That has made in his company’s defense in published reports is that the significant benefit of using its technology in law enforcement and national security outweighs the privacy concerns of individuals, and that Clearview is not to blame if law enforcement misuses its technology.

The company made these same arguments in its case to the Canadian Privacy Commissioner, which the investigation shot down. Authorities also did not buy Clearview’s defense that privacy laws do not apply to its activities because the company has no connection to Canada, and that no consent was required because the photos were publicly available on websites.

The Commission d’accès à l’information du Québec, the Office of the Information and Privacy Commissioner for British Columbia and the Office of the Information and Privacy Commissioner of Alberta also took part in the investigation.

The decision in Canada likely will lend heft to other legal challenges not only to Clearview’s technology but facial recognition in general. Last May the American Civil Liberties Union sued Clearview for privacy violations in Illinois, a case that is ongoing. Lawmakers in the United States even have proposed a nationwide ban on facial recognition.

The technology also raises questions of racial bias and the potential for false accusations against innocent people. In December, two Black men filed suit against police in Michigan, saying they were falsely IDed by facial-recognition technology—specifically, DataWorks Plus facial recognition software in use by Michigan State Police.

All of this public scrutiny and legal pressure is prompting some law enforcement agencies to reconsider the merits of using facial recognition in their activities. In November, under pressure from the ACLU and other groups and citing privacy concerns, the Los Angeles Police Department banned the Clearview AI facial recognition platform after personnel were revealed to have been using the database.


Canada Probe Concludes Clearview AI Breached Privacy Laws
5.2.2021 
Privacy  Securityweek

US facial recognition technology firm Clearview AI illegally conducted mass surveillance in breach of Canadians' privacy rights, Canada's privacy commissioner said Wednesday following an investigation.

"What Clearview does is mass surveillance and it is illegal," Privacy Commissioner Daniel Therrien told a teleconference.

An investigation by the watchdog found the New York-based firm, whose technology allows law enforcement and others to match photographs of unknown people against its databank of more than 3 billion images, had violated Canadian privacy laws.

It found that Clearview AI had collected highly sensitive biometric data scraped from websites and social media platforms without users' knowledge or consent, and disclosed personal information "for inappropriate purposes," creating risks of significant harm to individuals.

Police forces, including the Royal Canadian Mounted Police, and other organizations across Canada had created 48 accounts with the company.

The privacy commissioner recommended that Clearview AI stop offering its facial recognition services to Canadian clients, stop collecting images of people in Canada and delete those already in its database.

The company pulled out of the Canadian market in 2020, but rejected the other guidance.

"In disagreeing with our findings, Clearview alleged an absence of harms to individuals flowing from its activities," said the report.

Company founder Hoan Ton-That has said the technology has been made available to more than 600 law enforcement agencies globally, raising concerns about police surveillance.

Social media sites like Twitter, Facebook, YouTube (Google) and LinkedIn (Microsoft) have protested against the unsanctioned use of their users' photos, but Clearview has reportedly declined to delete them.

Officials in Britain and Australia have launched similar investigations of the company, whose practices are also the subject of a complaint in France.


Privacy predictions for 2021
29.1.2021 
Privacy  Securelist
2020 saw an unprecedented increase in the importance and value of digital services and infrastructure. From the rise of remote working and the global shift in consumer habits to huge profits booked by internet entertainers, we are witnessing how overwhelmingly important the connected infrastructure has become for the daily functioning of society.

What does all this mean for privacy? With privacy more often than not being traded for convenience, we believe that for many 2020 has fundamentally changed how much privacy people are willing to sacrifice in exchange for security (especially from the COVID-19 threat) and access to digital services. How are governments and enterprises going to react to this in 2021? Here are some of our thoughts on what the coming year may look like from the privacy perspective, and which diverse and sometimes contrary forces are going to shape it.

Smart health device vendors are going to collect increasingly diverse data – and use it in increasingly diverse ways.
Heart rate monitors and step counters are already a standard in even the cheapest smart fitness band models. More wearables, however, now come with an oximeter and even an ECG, allowing you to detect possible heart rate issues before they can even cause you any trouble. We think more sensors are on the way, with body temperature among the most likely candidates. And with your body temperature being an actual public health concern nowadays, how long before health officials want to tap into this pool of data? Remember, heart rate and activity tracker data – as well as consumer gene sequencing – has already been used as evidence in a court of law. Add in more smart health devices, such as smart body scales, glucose level monitors, blood pressure monitors and even toothbrushes and you have huge amounts of data that is invaluable for marketers and insurers.

Consumer privacy is going to be a value proposition, and in most cases cost money.
Public awareness of the perils of unfettered data collection is growing, and the free market is taking notice. Apple has publicly clashed with Facebook claiming it has to protect its users’ privacy, while the latter is wrestling with regulators to implement end-to-end encryption in its messaging apps. People are more and more willing to choose services that have at least a promise of privacy, and even pay for them. Security vendors are promoting privacy awareness, backing it with privacy-oriented products; incumbent privacy-oriented services like DuckDuckGo show they can have a sustainable business model while leaving you in control of your data; and startups like You.com claim you can have a Google-like experience without the Google-like tracking.

Governments are going to be increasingly jealous of big-tech data hoarding – and increasingly active in regulation.
The data that the big tech companies have on people is a gold mine for governments, democratic and oppressive alike. It can be used in a variety of ways, from using geodata to build more efficient transportation to sifting through cloud photos to fight child abuse and peeking into private conversations to silence dissent. However, private companies are not really keen on sharing it. We have already seen governments around the world oppose companies’ plans to end-to-end encrypt messaging and cloud backups, pass legislation forcing developers to plant backdoors into their software, or voice concerns with DNS-over-HTTPS, as well as more laws regulating cryptocurrency being enacted everywhere, and so on and so forth. But big tech is called big for a reason, and it will be interesting to see how this confrontation develops.

Data companies are going to find ever more creative, and sometimes more intrusive, sources of data to fuel the behavioral analytics machine.
Some sources of behavioral analytics data are so common we can call them conventional, such as using your recent purchases to recommend new goods or using your income and spending data to calculate credit default risk. But what about using data from your web camera to track your engagement in work meetings and decide on your yearly bonus? Using online tests that you take on social media to determine what kind of ad will make you buy a coffee brewer? The mood of your music playlist to choose the goods to market to you? How often you charge your phone to determine your credit score? We have already seen these scenarios in the wild, but we are expecting the marketers to get even more creative with what some data experts call AI snake oil. The main implication of this is the chilling effect of people having to weigh every move before acting. Imagine knowing that choosing your Cyberpunk 2077 hero’s gender, romance line and play style (stealth or open assault) will somehow influence some unknown factor in your real life down the line. And would it change how you play the game?

Multi-party computations, differential privacy and federated learning are going to become more widely adopted – as well as edge computing.
It is not all bad news. As companies become more conscious of what data they actually need and consumers push back against unchecked data collection, more advanced privacy tools are emerging and becoming more widely adopted. From the hardware perspective, we will see more powerful smartphones and more specialized data processing hardware, like Google Coral, Nvidia Jetson and Intel NCS, enter the market at affordable prices. This will allow developers to create tools that are capable of doing fancy data processing, such as running neural networks, on-device instead of in the cloud, dramatically limiting the amount of data that is transferred from you to the company. From the software standpoint, more companies like Apple, Google and Microsoft are adopting differential privacy techniques to give people strict (in the mathematical sense) privacy guarantees while continuing to make use of data (see the sketch at the end of this article). Federated learning is going to become the go-to method for dealing with data deemed too private for users to share and for companies to store. With more educational and non-commercial initiatives, such as OpenMined, surrounding them, these methods might lead to groundbreaking collaborations and new results in privacy-heavy areas such as healthcare.

We have seen over the last decade, and the last few years in particular, how privacy has become a hot-button issue at the intersection of governmental, corporate and personal interests, and how it has given rise to such different and sometimes even conflicting trends. In more general terms, we hope this year helps us, as a society, to move closer to a balance where the use of data by governments and companies is based on privacy guarantees and respect of individual rights.
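
As a minimal illustration of the differential privacy technique mentioned above, the sketch below applies the Laplace mechanism, the textbook way to release an aggregate count with a formal privacy guarantee; the epsilon value and survey setup are illustrative.

# Laplace mechanism sketch: release a noisy count so that any single
# user's answer is statistically masked. Parameters are illustrative.
import math
import random

def private_count(answers: list[bool], epsilon: float = 0.5) -> float:
    """Count of 'yes' answers with epsilon-differential privacy."""
    sensitivity = 1.0  # one person changes the true count by at most 1
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via the inverse CDF.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return sum(answers) + noise

# 1,000 simulated users answering a sensitive yes/no question:
answers = [random.random() < 0.3 for _ in range(1000)]
print(round(private_count(answers)))  # near the true ~300, but deniable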


Firefox Cracks Down on Supercookies to Improve User Privacy
27.1.2021 
Privacy  Securityweek

Mozilla this week announced further improvements to user privacy in Firefox, through the isolation of network connections and caches, thus essentially cracking down on supercookies.

Used instead of ordinary cookies, supercookies collect information about users’ Internet browsing habits, are difficult to detect and block, and are often abused to follow users around the web. Trackers may store supercookies in Flash storage, ETags, and HSTS flags, to make them difficult to remove.

For years, browser makers have been looking for ways to improve user privacy, and Mozilla now says it has found a solution to ensure that users won’t be easily tracked cross-site: isolation.

Specifically, Firefox 85 is arriving with an updated network architecture, where network connections and caches are isolated to the website being visited.

“Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking,” Mozilla says.

Firefox 85, Mozilla argues, should make cache-based supercookies largely useless, as it aims to prevent trackers from using these supercookies across websites.

Firefox relies on cache to reduce overhead, sharing some internal resources between websites, such as images, and reusing a single network connection for the loading of resources that come from the same party, even if they are embedded on multiple websites.

Trackers abuse these shared resources to create supercookies, through identifiers encoded in cached images, which are then retrieved on all websites on which the same images are embedded.
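
A toy demonstration of that mechanism (our own sketch, not Mozilla code): the server below tags a shared resource with a per-visitor ETag, and the browser's cache echoes it back on revalidation, re-identifying the visitor with no cookie involved. Keying the cache to the top-level site, as Firefox 85 does, breaks exactly this trick, because a pixel cached on one site is never revalidated from another.

# Sketch of an ETag "supercookie" server (illustration only).
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        etag = self.headers.get("If-None-Match")
        if etag:
            # Revalidation: the cached ETag acts as a stable user ID.
            print("returning visitor:", etag)
            self.send_response(304)  # Not Modified keeps the tag alive
            self.end_headers()
        else:
            etag = '"%s"' % secrets.token_hex(8)
            print("new visitor, tagged:", etag)
            self.send_response(200)
            self.send_header("ETag", etag)
            self.send_header("Content-Type", "image/gif")
            self.end_headers()
            self.wfile.write(b"GIF89a")  # placeholder pixel body

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingPixel).serve_forever()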

“To prevent this possibility, Firefox 85 uses a different image cache for every website a user visits. That means we still load cached images when a user revisits the same site, but we don’t share those caches across sites,” Mozilla says.

To prevent trackers from abusing caches to create supercookies, Firefox 85 isolates a range of caches by the top-level site: Alt-Svc cache, DNS cache, font cache, favicon cache, HSTS cache, HTTP Authentication cache, HTTP cache, image cache, OCSP cache, style sheet cache, and TLS certificate cache.

Furthermore, the browser aims to prevent connection-based tracking through partitioning preconnect, prefetch, pooled, and speculative connections, along with TLS session identifiers.

“This partitioning applies to all third-party resources embedded on a website, regardless of whether Firefox considers that resource to have loaded from a tracking domain,” Mozilla explains, adding that the changes will have a very low impact on page load time.


German laptop retailer fined €10.4m under GDPR for video-monitoring employees
19.1.2021 
Privacy  Securityaffairs

German data regulator LfD announced a €10.4M fine under GDPR against the online laptop and electronic goods retailer NBB for video-monitoring employees.
The State Commissioner for Data Protection (LfD) Lower Saxony announced a €10.4 million fine under the GDPR against the online laptop and electronic goods retailer NBB (notebooksbilliger.de) for video-monitoring its employees for at least two years. This fine is the highest the German authority has imposed so far.

“The State Commissioner for Data Protection (LfD) Lower Saxony has imposed a fine of 10.4 million euros on notebooksbilliger.de AG,” states the LfD. “The company had video-monitored its employees for at least two years without any legal basis. The illegal cameras recorded workplaces, sales rooms, warehouses and common areas, among other things.”

“The State Commissioner for Data Protection (LfD) Lower Saxony said NBB’s (notebooksbilliger.de) constant surveillance was “inadmissible” under the General Data Protection Regulation (GDPR).” reported ComplianceWeek.

NBB was disappointed by the decision and called the fine disproportionate and unlawful, claiming that the video cameras were installed to prevent and investigate criminal offenses and to track the flow of goods in its warehouses.

“The fine is completely disproportionate. It bears no relation to the size and financial strength of the company or to the seriousness of the alleged violation,” CEO Oliver Hellmold said (original statement, translated statement), “We consider the decision to be unlawful and demand that it be repealed.”

The data regulator pointed out that to prevent theft, a company must first put in place less intrusive measures, such as random bag checks. Video surveillance to uncover criminal offenses is only lawful if there is justified suspicion against specific employees, and even then a company may use cameras to monitor the suspects only for a limited period of time. That was not the case at NBB: the video surveillance had been in place for a long time, and the recordings were in many cases kept for 60 days, significantly longer than necessary.

“We are dealing with a serious case of video surveillance in the company,” said Barbara Thiel, head for LfD Lower Saxony. “Companies must understand that with such intensive video surveillance they are massively violating the rights of their employees.”

The LfD remarks that permanent and intensive video surveillance violates the rights of employees and puts them under pressure.

“Customers of notebooksbilliger.de were also affected by the inadmissible video surveillance, as some cameras were aimed at seating in the sales area. In areas in which people typically stay longer, for example to extensively test the devices offered, those affected by data protection law have high interests worthy of protection.” continues the German data authority. “This is especially true for seating areas that are obviously intended to invite you to linger for a longer period of time. Therefore, the video surveillance by notebooksbilliger.de was not proportionate in these cases.”

The German privacy watchdog also fined the clothing retailer H&M €35.3 million because it was allegedly spying on its customer service representatives in Germany.


EU Court Opinion Leaves Facebook More Exposed Over Privacy
15.1.2021 
Privacy  Securityweek

Any EU country can take legal action against companies like Facebook over cross-border violations of data privacy rules, not just the main regulator in charge of the company, a top court adviser said Wednesday.

The preliminary opinion is part of a long-running legal battle between Facebook and Belgium’s data protection authority over the company’s use of cookies to track the behavior of internet users, even those who weren’t members of the social network.

The advice from the European Court of Justice’s Advocate General Michal Bobek potentially paves the way for an onslaught of fresh data privacy cases across the EU, experts said.

The opinion, which is often followed by the court, comes ahead of a formal decision by the ECJ’s judges expected later this year.

Facebook argues that the Belgian watchdog, which launched the case in 2015, no longer has jurisdiction after the EU’s strict General Data Protection Regulation took effect in 2018. The company says that under GDPR, only one national data protection authority has the power to handle legal cases involving cross-border data complaints - a system known as “one-stop shop.” In Facebook’s case, it’s the Data Protection Commission in Ireland, where the company’s European headquarters is based.

“The lead data protection authority cannot be deemed as the sole enforcer of the GDPR in cross-border situations, and must, in compliance with the relevant rules and time limits provided for by the GDPR, closely cooperate with the other data protection authorities concerned,” the opinion said.

Facebook interpreted it as a victory.

“We are pleased that the Advocate General has reaffirmed the value and principles of the one-stop-shop mechanism, which was introduced to ensure the efficient and consistent application of GDPR,” said Associate General Counsel Jack Gilbert. “We await the Court’s final verdict.”

Privacy advocates and experts, however, said the advice could change how data privacy cases are handled, by taking the pressure off a single watchdog.

Johnny Ryan, a senior fellow at the Irish Council for Civil Liberties, said Bobek is signalling that Ireland’s privacy watchdog “can no longer use its status as lead authority for Google, Facebook, etc. to hold up enforcement of the GDPR across the EU.”

The Irish watchdog has faced criticism for not dealing quickly enough with a rising pile of cross-border data privacy cases involving big tech companies since GDPR took effect. It issued its first such penalty to Twitter last month, fining it for a security breach, but still has about two dozen more to go.

Businesses could also face a bigger compliance burden responding to more privacy cases in multiple EU markets, because it would be easier for people to file complaints to their local privacy watchdog, said Cillian Kieran, CEO of privacy compliance startup Ethyca.


Tech Giants Hope for US Data Privacy Law
14.1.2021 
Privacy  Securityweek

Google, Twitter and Amazon are hopeful that Joe Biden's incoming administration in the United States will enact a federal digital data law, senior company officials said at CES, the annual electronics and technology show.

"I think the stars are better aligned than ever in the past," Keith Enright, Google's chief data privacy office, told a discussion Tuesday on trust and privacy.

The European Union's General Data Protection Regulation (GDPR), which has applied since May 2018, has largely contributed to making consumers aware of the issues related to the data that they submit to large digital platforms on a daily basis.

This European data rights charter influenced California, which has now had the California Consumer Privacy Act (CCPA) for over a year.

"That tends to dramatically increase the chances that we can develop the political will at the federal level to do something, just to create a uniform rule of law so that companies know what the rules of the road are and individual users know what their rights and protections are," Enright said.

Biden's government will have leeway to legislate, as the Democrats will be in control of the House of Representatives and the Senate.

The incoming president will benefit from the experience of his deputy Kamala Harris, a former prosecutor in California, where the majority of the tech giants are located.

"There are more than 100 national data privacy laws in the world," said Anne Toth, director of Amazon's Alexa Trust. "We're dealing with a forever patchwork quilt but we're trying to minimize the differences."

"The laws must be interoperable," added Damien Kieran, director of data privacy at Twitter.

"The federal government as it thinks about this, has to really understand the international future of this," he continued.

"If we get this wrong, not to put too much weight on it, but I think it is this important, you increase the chances for that balkanization of things."

Silicon Valley has long been close to elected Democrats, but the relationship has deteriorated since the election of Donald Trump in 2016 and the scandal of Cambridge Analytica, a British firm that hijacked the personal data of tens of millions of Facebook users for political propaganda purposes.

Google is the subject of an anti-trust lawsuit by the Department of Justice and a coalition of American states. Its YouTube platform, like Facebook and Twitter, is in the crosshairs of government officials for their management of personal information.


New Resources Define Cloud Security and Privacy Responsibilities
13.1.2021 
Privacy  Securityweek

Data protection and compliance solutions provider HITRUST has announced the release of new Shared Responsibility Matrices for Amazon Web Services (AWS) and Microsoft Azure.

Best known for the HITRUST CSF (Common Security Framework), the Texas-based company has worked with healthcare, technology and information security organizations to help organizations safeguard sensitive information and manage information risk.

Meant to define the security and privacy responsibilities that both cloud service providers and customers have, each HITRUST Shared Responsibility Matrix is specifically tailored for the cloud service provider’s unique solution offering and should help streamline processes for risk management programs.

Under shared responsibility models, the cloud provider is responsible for the security of the hosting applications and systems, while customers assume responsibility for their other apps and systems. Such models are loosely defined, however, and variations from one solution to another represent a challenge.

Thus, organizations looking to deploy solutions in the cloud face additional complexity when seeking to achieve risk management objectives, HITRUST notes.

HITRUST says that the freely available Shared Responsibility Matrices, which it developed in collaboration with Microsoft and AWS, are meant to “address the many misunderstandings, risks, and complexities involved when organizations leverage cloud service providers.”

Each HITRUST Shared Responsibility Matrix is built based on the HITRUST CSF framework that integrates over 40 authoritative sources. The framework includes more than 2,000 controls (activities to mitigate risks), and the Matrix shows if they are the responsibility of cloud service providers or their customers.
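
The controls and owners below are invented for illustration, not actual HITRUST CSF entries; they only sketch the shape of the mapping such a matrix encodes.

# Hypothetical illustration of a shared responsibility matrix: each
# control maps to the party that must implement it. These entries are
# made up, not taken from the HITRUST CSF.
matrix = {
    "physical datacenter access":   "cloud service provider",
    "hypervisor patching":          "cloud service provider",
    "customer IAM provisioning":    "customer",
    "application-level encryption": "customer",
    "incident response process":    "shared",
}

for control, owner in matrix.items():
    print(f"{control:<30} {owner}")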

“HITRUST launched this Program with the goal of providing greater clarity regarding the ownership and operation of security controls between organizations and their cloud service providers,” said Becky Swain, director of standards development at HITRUST.

The HITRUST Shared Responsibility Matrices for AWS and Microsoft Azure are available in the form of spreadsheets.