

Google Created 'Open Source Maintenance Crew' to Help Secure Critical Projects
14.5.22  Security  
Thehackernews

Google on Thursday announced the creation of a new "Open Source Maintenance Crew" to focus on bolstering the security of critical open source projects.

Additionally, the tech giant pointed out Open Source Insights as a tool for analyzing packages and their dependency graphs, using it to determine "whether a vulnerability in a dependency might affect your code."

"With this information, developers can understand how their software is put together and the consequences to changes in their dependencies," the company said.

The development comes as security and trust in the open source software ecosystem has been increasingly thrown into question in the aftermath of a string of supply chain attacks designed to compromise developer workflows.

In December 2021, a critical flaw in the ubiquitous open source Log4j logging library left several companies scrambling to patch their systems against potential abuse.

The announcement also comes less than two weeks after the Open Source Security Foundation (OpenSSF) announced what's called the Package Analysis project to carry out dynamic analysis of all packages uploaded to popular open source repositories.


E.U. Proposes New Rules for Tech Companies to Combat Online Child Sexual Abuse
14.5.22  Security  
Thehackernews
The European Commission on Wednesday proposed new regulation that would require tech companies to scan for child sexual abuse material (CSAM) and grooming behavior, raising worries that it could undermine end-to-end encryption (E2EE).

To that end, online service providers, including hosting services and communication apps, are expected to proactively scan their platforms for CSAM as well as report, remove and disable access to such illicit content.

While instant messaging services like WhatsApp already rely on hashed versions of known CSAM to automatically block new uploads of images or videos matching them, the new plan requires such platforms to identify and flag new instances of CSAM.

"Detection technologies must only be used for the purpose of detecting child sexual abuse," the regulator said. "Providers will have to deploy technologies that are the least privacy-intrusive in accordance with the state of the art in the industry, and that limit the error rate of false positives to the maximum extent possible."

A new EU Centre on Child Sexual Abuse, which will be independently established to enforce the measures, has been tasked with maintaining a database of digital "indicators" of child sexual abuse, in addition to processing and forwarding legitimate reports for law enforcement action.

In addition, the rules require app stores to ensure that children are prevented from downloading apps that "may expose them to a high risk of solicitation of children."

The controversial proposal to clamp down on sexual abuse material comes days after a draft version of the regulation leaked earlier this week, prompting Johns Hopkins University security researcher Matthew Green to state that "This is Apple all over again."

The tech giant, which last year announced plans to scan and detect known CSAM on its devices, has since delayed the rollout to "take additional time over the coming months to collect input and make improvements."

Meta, likewise, has postponed its plans to support E2EE across all its messaging services, WhatsApp, Messenger, and Instagram, until sometime in 2023, stating that it's taking the time to "get this right."

A primary privacy and security concern arising out of scanning devices for illegal pictures of sexual abuse is that the technology could weaken privacy by creating backdoors to defeat E2EE protections and facilitate large-scale surveillance.

This would also necessitate persistent plain-text access to users' private messages, effectively rendering such scanning incompatible with E2EE and eroding the security and confidentiality of the communications.

"The idea that all the hundreds of millions of people in the E.U. would have their intimate private communications, where they have a reasonable expectation that that is private, to instead be kind of indiscriminately and generally scanned 24/7 is unprecedented," Ella Jakubowska, a policy advisor at European Digital Rights (EDRi), told Politico.

But the privacy afforded by encryption is also proving to be a double-edged sword, with governments increasingly fighting back over worries that encrypted platforms are being misused by malicious actors for terrorism, cybercrime, and child abuse.

"Encryption is an important tool for the protection of cybersecurity and confidentiality of communications," the commission said. "At the same time, its use as a secure channel could be abused of by criminals to hide their actions, thereby impeding efforts to bring perpetrators of child sexual abuse to justice."

The development underscores Big Tech's ongoing struggles to balance privacy and security while also simultaneously addressing the need to assist law enforcement agencies in their quest for accessing criminal data.

"The new proposal is over-broad, not proportionate, and hurts everyone's privacy and safety," the Electronic Frontier Foundation (EFF) said. "The scanning requirements are subject to safeguards, but they aren't strong enough to prevent the privacy-intrusive actions that platforms will be required to undertake."


GitHub Notifies Victims Whose Private Data Was Accessed Using OAuth Tokens
21.4.22  Security  
Thehackernews
GitHub on Monday said that it had notified all victims of an attack campaign in which an unauthorized party downloaded private repository contents by taking advantage of third-party OAuth user tokens maintained by Heroku and Travis CI.

"Customers should also continue to monitor Heroku and Travis CI for updates on their own investigations into the affected OAuth applications," the company said in an updated post.

The incident originally came to light on April 12 when GitHub uncovered signs that a malicious actor had leveraged the stolen OAuth user tokens issued to Heroku and Travis CI to download data from dozens of organizations, including NPM.

The Microsoft-owned platform also said that it will alert customers promptly should the ongoing investigation identify additional victims. Additionally, it cautioned that the adversary may also be digging into the repositories for secrets that could be used in other attacks.
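Digging through repositories for secrets, as the adversary is suspected of doing, typically comes down to pattern matching over file contents. A minimal sketch of the technique; the two patterns below are simplified versions of common credential formats, and real scanners (such as GitHub's own secret scanning) apply far more exhaustive rule sets:

```python
import re

# Simplified patterns for two common credential formats (illustrative)
SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_secrets(text):
    """Return (rule_name, matched_string) pairs found in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

sample = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"\ntoken = "not-a-secret"'
print(find_secrets(sample))  # [('aws_access_key', 'AKIAIOSFODNN7EXAMPLE')]
```

This is also why revoking and reissuing keys, as Travis CI did, is the standard response: any credential that was ever committed to a cloned repository must be assumed harvested.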

Heroku, which has pulled support for GitHub integration in the wake of the incident, recommended that users integrate their app deployments directly with Git or with other version control providers such as GitLab or Bitbucket.

Hosted continuous integration service provider Travis CI, in a similar advisory published on Monday, stated that it had "revoked all authorization keys and tokens preventing any further access to our systems."

Stating that no customer data was exposed, the company acknowledged that the attackers breached a Heroku service and accessed a private application's OAuth key that's used to integrate both the Heroku and Travis CI apps.

But Travis CI reiterated that it found no evidence of intrusion into a private customer repository or that the threat actors obtained unwarranted source code access.

"Given the data we had and out of an abundance of caution, Travis CI revoked and reissued all private customer auth keys and tokens integrating Travis CI with GitHub to ensure no customer data is compromised," the company said.


Google Drops FLoC and Introduces Topics API to Replace Tracking Cookies for Ads
26.1.22  Security  
Thehackernews

Google on Tuesday announced that it is abandoning its controversial plans for replacing third-party cookies in favor of a new Privacy Sandbox proposal called Topics, which categorizes users' browsing habits into approximately 350 topics.

The new framework, which takes the place of FLoC (short for Federated Learning of Cohorts), slots users' browsing history for a given week into a handful of top pre-designated interests (i.e., topics), which are retained only on the device for a rolling period of three weeks.

Subsequently, when a user visits a participating site, the Topics API selects three of the interests — one topic from each of the past three weeks — to share with the site and its advertising partners. To give users more control over the framework, they can not only see the topics but also remove individual topics or disable the feature altogether.

By labeling each website with a recognizable, high-level topic and sharing the most frequent topics associated with the browsing history, the idea is to facilitate interest-based advertising by showing users more relevant ads, without needing to know the specific sites that have been visited.
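The selection mechanics described above can be sketched directly: keep per-week topic lists over a rolling three-week window, then hand a visited site one randomly chosen topic from each week. The topic names and data layout below are illustrative, not Chrome's actual taxonomy or implementation:

```python
import random

def topics_for_site(weekly_history, seed=None):
    """Pick one topic from each of the past three weeks' top topics,
    mimicking the per-site selection described for the Topics API."""
    rng = random.Random(seed)
    recent = weekly_history[-3:]          # rolling three-week window
    return [rng.choice(week) for week in recent]

# Hypothetical top topics inferred for each of the last three weeks
history = [
    ["Fitness", "Travel", "Books"],       # three weeks ago
    ["Travel", "Cooking", "Music"],       # two weeks ago
    ["Books", "Technology", "Cooking"],   # last week
]

shared = topics_for_site(history, seed=0)
print(shared)  # one topic drawn from each of the three weeks
```

Because only three coarse topics are ever exposed per site, and a different random pick can be made for each caller, no site sees the full interest profile — the property that distinguishes this design from cross-site cookie tracking.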

Topics, which is expected to be launched as a developer trial in Chrome browser, employs machine learning to infer topics from hostnames and is designed to exclude sensitive categories, such as sexual orientation, religion, gender, or race, Google pointed out.

"Because Topics is powered by the browser, it provides you with a more recognizable way to see and control how your data is shared, compared to tracking mechanisms like third-party cookies," Vinay Goel, privacy director of Privacy Sandbox, said.

"And, by providing websites with your topics of interest, online businesses have an option that doesn't involve covert tracking techniques, like browser fingerprinting, in order to continue serving relevant ads," Goel added.

The development comes exactly seven months after Google said in June 2021 that it was delaying the rollout from early 2022 to late 2023 following blowback from privacy advocates, prompting the company to acknowledge that "more time is needed across the ecosystem to get this right."

Topics also hopes to rework some of the core concerns with FLoC, which was branded by the Electronic Frontier Foundation (EFF) as a terrible idea that created more privacy risks for users.

Particularly, FLoC drew criticism for constructing "cohorts" from a combination of different online interests that could lead to classifying users in a manner that increases the risk of discrimination. What's more, should a cohort be deemed too small, it could be combined with other tracking information to uniquely identify an individual, effectively undermining the privacy protections.

The overhaul is part of the search giant's plans to replace third-party cookies over privacy concerns. Privacy Sandbox, as the effort is called, aims to develop privacy-focused alternatives that restrict tracking of users on the web while also maintaining existing web capabilities, including advertising.