

Meta Sues Hackers Behind Facebook, WhatsApp and Instagram Phishing Attacks
27.12.2021
Social Thehackernews
Facebook's parent company Meta Platforms on Monday said it has filed a federal lawsuit in the U.S. state of California against bad actors who operated more than 39,000 phishing websites that impersonated its digital properties to mislead unsuspecting users into divulging their login credentials.

The social engineering scheme involved the creation of rogue webpages that masqueraded as the login pages of Facebook, Messenger, Instagram, and WhatsApp, on which victims were prompted to enter their usernames and passwords that were then harvested by the defendants. The tech giant is also seeking $500,000 from the anonymous actors.

The attacks were carried out using a relay service, Ngrok, that redirected internet traffic to the phishing websites in a manner that concealed the true location of the fraudulent infrastructure. Meta said these phishing attacks have ramped up in volume since March 2021 and that it worked with the relay service to suspend thousands of URLs pointing to the phishing websites.

"This lawsuit is one more step in our ongoing efforts to protect people's safety and privacy, send a clear message to those trying to abuse our platform, and increase accountability of those who abuse technology," Jessica Romero, Meta's director of platform enforcement and litigation, said in a statement.

The litigation comes days after the social technology company announced it took steps to disrupt the activities of seven surveillance-for-hire outfits that created over 1,500 fake accounts on Facebook and Instagram to target 50,000 users located in over 100 countries. Last month, Meta said it had banned four malicious cyber groups for targeting journalists, humanitarian organizations, and anti-regime military forces in Afghanistan and Syria.


Facebook Bans 7 'Cyber Mercenary' Companies for Spying on 50,000 Users
20.12.2021
Social Thehackernews
Meta Platforms on Thursday revealed it took steps to deplatform seven cyber mercenaries that it said carried out "indiscriminate" targeting of journalists, dissidents, critics of authoritarian regimes, families of opposition, and human rights activists located in over 100 countries, amid mounting scrutiny of surveillance technologies.

To that end, the company said it alerted 50,000 users of Facebook and Instagram that their accounts were spied on by the companies, who offer a variety of services that run the spyware gamut from hacking tools for infiltrating mobile phones to creating fake social media accounts to monitor targets. It also removed 1,500 Facebook and Instagram accounts linked to these firms.

"The global surveillance-for-hire industry targets people across the internet to collect intelligence, manipulate them into revealing information and compromise their devices and accounts," Meta's David Agranovich and Mike Dvilyanski said. "These companies are part of a sprawling industry that provides intrusive software tools and surveillance services indiscriminately to any customer."

Four of the cyber mercenary enterprises — Cobwebs Technologies, Cognyte, Black Cube, and Bluehawk CI — are based in Israel. Also included in the list are an Indian company known as BellTroX, a North Macedonia-based firm named Cytrox, and an unknown entity operating out of China that's believed to have conducted surveillance campaigns focused on minority groups in the Asia-Pacific region.

The social media giant said it observed these commercial players engaging in reconnaissance, engagement, and exploitation activities to further their surveillance objectives. The companies operated a vast network of tools and fictitious personas to profile their targets, establish contact using social engineering tactics and, ultimately, deliver malicious software through phishing campaigns and other techniques that allowed them to access or take control of the devices.

Citizen Lab, in an independent report, disclosed that two Egyptians living in exile had their iPhones compromised in June 2021 using a new spyware called Predator built by Cytrox. In both instances, the hacks were facilitated by sending single-click links to the targets via WhatsApp, with the links sent as images containing URLs.

While the iOS variant of Predator works by running a malicious shortcut automation retrieved from a remote server, the Android samples unearthed by Citizen Lab feature capabilities to record audio conversations and fetch additional payloads from a remote attacker-controlled domain. The Apple devices were running iOS 14.6, the latest version of the mobile operating system at the time of the hacks, suggesting the weaponization of a never-before-seen exploit to target the iPhones. It's not immediately clear whether Apple has since fixed the vulnerability.

"The targeting of a single individual with both Pegasus and Predator underscores that the practice of hacking civil society transcends any specific mercenary spyware company," Citizen Lab researchers said. "Instead, it is a pattern that we expect will persist as long as autocratic governments are able to obtain sophisticated hacking technology. Absent international and domestic regulations and safeguards, journalists, human rights defenders, and opposition groups will continue to be hacked into the foreseeable future."

In a related development, the U.S. Treasury Department added eight more Chinese companies — drone maker DJI Technology, Megvii, and Yitu Limited, among others — to an investment blocklist for "actively cooperating with the [Chinese] government's efforts to repress members of ethnic and religious minority groups," including Muslim minorities in the Xinjiang province.

Meta's sweeping crackdown also comes close on the heels of a detailed technical analysis of FORCEDENTRY, the now-patched zero-click iMessage exploit put to use by the embattled Israeli company NSO Group to surveil journalists, activists, and dissidents around the world.

Google Project Zero (GPZ) researchers Ian Beer and Samuel Groß called it "one of the most technically sophisticated exploits" that uses a number of clever tactics to get around BlastDoor protections added to make such attacks more difficult, and take over the devices to install the Pegasus implant.

Specifically, the findings from GPZ show how FORCEDENTRY abused a quirk in iMessage's handling of GIF images (a vulnerability in the JBIG2 image compression standard used to scan text documents on multifunction printers) to make the target device parse a malicious PDF masquerading as a GIF, without requiring any action on the victim's part.

"NSO is only one piece of a much broader global cyber mercenary industry," Agranovich and Dvilyanski added.

Following the revelations, the U.S. government subjected the spyware vendor to economic sanctions, a decision that has since prompted the company to mull a shutdown of its Pegasus unit and a possible sale. "Talks have been held with several investment funds about moves that include a refinancing or outright sale," Bloomberg said in a report published last week.


Facebook to Pay Hackers for Reporting Data Scraping Bugs and Scraped Datasets
20.12.2021 
Social Thehackernews
Meta Platforms, the company formerly known as Facebook, has announced that it's expanding its bug bounty program to start rewarding valid reports of scraping vulnerabilities across its platforms as well as include reports of scraping data sets that are available online.

"We know that automated activity designed to scrape people's public and private data targets every website or service," said Dan Gurfinkel, security engineering manager at Meta. "We also know that it is a highly adversarial space where scrapers — be it malicious apps, websites or scripts — constantly adapt their tactics to evade detection in response to the defenses we build and improve."

To that end, the social media giant aims to monetarily reward valid reports of scraping bugs in its services, as well as reports identifying unprotected or openly public databases containing no fewer than 100,000 unique Facebook user records with personally identifiable information (PII) such as email addresses, phone numbers, physical addresses, or religious or political affiliation. The only caveat is that the reported data set must be unique and not previously known.

Should the requisite criteria be met, the company said it will take appropriate measures, including legal actions, to remove the data from the non-Meta website. This could also involve reaching out to hosting providers like Amazon, Box, and Dropbox to pull the data set offline, or working with third-party app developers to address server misconfigurations. Reports concerning scraped databases will be rewarded through matched charity donations of the researchers' choosing.

"Our goal is to quickly identify and counter scenarios that might make scraping less costly for malicious actors to execute," Gurfinkel noted, adding "we want to particularly encourage research into logic bypass issues that can allow access to information via unintended mechanisms, even if proper rate limits exist."

The move to curb unauthorized scraping — the practice of extracting data from websites — comes as part of the company's efforts to limit abuse of people's data on its platform in the wake of the infamous Cambridge Analytica scandal, in which the personal information of millions of Facebook users was harvested without their consent for political advertising.

That's not all. Earlier this April, the phone numbers of 533 million Facebook users were shared on a cybercrime forum for free, data that had been collected by scraping the platform. In October 2021, Meta filed a lawsuit against a Ukrainian national named Alexander Alexandrovich Solonchenko for allegedly scraping and selling the personal data of more than 178 million Facebook users on an underground forum.

The company said it has paid out over $14 million in bounties since the inception of the program in 2011, with $2.3 million awarded to researchers from more than 46 countries this year alone. Most of the valid reports over the past 10 years have come from India, the U.S., and Nepal, Meta pointed out.


Facebook Releases New Tool That Finds Security and Privacy Bugs in Android Apps
6.10.21 
Social  Thehackernews
Facebook on Wednesday announced it's open-sourcing Mariana Trench, an Android-focused static analysis platform the company uses to detect and prevent security and privacy bugs in applications created for the mobile operating system at scale.

"[Mariana Trench] is designed to be able to scan large mobile codebases and flag potential issues on pull requests before they make it into production," the Menlo Park-based social tech behemoth said.

In a nutshell, the utility lets developers frame rules for the data flows the codebase should be scanned for, in order to unearth potential issues — say, intent redirection flaws that could result in the leak of sensitive data, or injection vulnerabilities that would allow adversaries to insert arbitrary code. Developers explicitly set boundaries as to where user-supplied data entering the app is allowed to come from (sources) and flow into (sinks), the latter being methods that can execute code or retrieve and interact with user data.

Data flows found violating the rules are then surfaced back either to a security engineer or the software engineer who made the pull request containing the changes.
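Mariana Trench's real rules are written as JSON model generators evaluated against compiled Android code; purely to illustrate the source-to-sink idea described above, here is a toy Python taint tracker (all operation names are hypothetical):

```python
# Toy taint tracker: flags flows from user-controlled sources to
# dangerous sinks, mirroring the source/sink rules described above.
# Illustrative only -- not how Mariana Trench's analysis actually works.

SOURCES = {"get_intent_extra"}          # where user data enters the app
SINKS = {"exec_query", "load_url"}      # where it must not flow unchecked

def analyze(statements):
    """statements: (target_var, op, arg_names) triples in execution order."""
    tainted = set()
    findings = []
    for target, op, args in statements:
        if op in SOURCES:
            tainted.add(target)             # data enters from a source
        elif any(a in tainted for a in args):
            if op in SINKS:
                findings.append((op, args)) # tainted data reached a sink
            elif target:
                tainted.add(target)         # taint propagates via assignment
    return findings

program = [
    ("user_input", "get_intent_extra", ()),
    ("query", "concat", ("SELECT ", "user_input")),
    (None, "exec_query", ("query",)),
]
print(analyze(program))  # flags the exec_query sink
```

Running this flags the `exec_query` call because `query` is derived from the intent-supplied `user_input`, which is exactly the kind of flow the rules describe.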

The social media giant said over 50% of vulnerabilities detected across its family of apps, including Facebook, Instagram, and WhatsApp, were found using automated tools. Mariana Trench also marks the third such tool the company has open-sourced, after Zoncolan and Pysa, which target the Hack and Python programming languages, respectively.

The development also follows similar moves from Microsoft-owned GitHub, which acquired Semmle and launched a Security Lab in 2019 with an aim to secure open-source software, in addition to making semantic code analysis tools such as CodeQL freely available to spot vulnerabilities in publicly available code.

"There are differences in patching and ensuring the adoption of code updates between mobile and web applications, so they require different approaches," the company said.

"While server-side code can be updated almost instantaneously for web apps, mitigating a security bug in an Android application relies on each user updating the application on the device they own in a timely way. This makes it that much more important for any app developer to put systems in place to help prevent vulnerabilities from making it into mobile releases, whenever possible."

Mariana Trench can be accessed via GitHub, and Facebook has also released a Python package on the PyPI repository.


WhatsApp to Finally Let Users Encrypt Their Chat Backups in the Cloud
19.9.21 
Social  Thehackernews

WhatsApp on Friday announced it will roll out support for end-to-end encrypted chat backups on the cloud for Android and iOS users, paving the way for storing information such as chat messages and photos in Apple iCloud or Google Drive in a cryptographically secure manner.

The optional feature, which will go live to all of its two billion users in the coming weeks, is expected to only work on the primary devices tied to their accounts, and not companion devices such as desktops or laptops that simply mirror the content of WhatsApp on the phones.

The development marks an escalation in the growing tussle over encryption technology and meeting law enforcement needs, wherein privacy-preserving technologies have created impenetrable barriers to comply with legal demands to access vast swathes of digital information stored on smartphones and the cloud — a phenomenon referred to as the "going dark" problem.

While the Facebook-owned messaging platform flipped the switch on end-to-end encryption (E2EE) for personal messages, calls, video chats, and media between senders and recipients as far back as April 2016, the content — should a user opt to back them up on the cloud to enable the transfer of chat history to a new device — wasn't subjected to the same security protections, making the backups readable by the cloud providers.

"With the introduction of end-to-end encrypted backups, WhatsApp has created an HSM (Hardware Security Module) based Backup Key Vault to securely store per-user encryption keys for user backups in tamper-resistant storage, thus ensuring stronger security of users' message history," the company said in a whitepaper.

"With end-to-end encrypted backups enabled, before storing backups in the cloud, the client encrypts the chat messages and all the messaging data (i.e. text, photos, videos, etc.) that is being backed up using a random key that's generated on the user's device," it added.

To that end, the device-generated key to encrypt the backup is secured with a user-furnished password, which is stored in the vault to permit easy recovery in the event the device gets stolen. Alternatively, users have the option of providing a 64-digit encryption key instead of a password — but in this scenario, the encryption key will have to be stored manually given that it will no longer be sent to the HSM Backup Key Vault.

Thus, when account owners need access to their backup, they can retrieve it with the password or the 64-digit key, which is then used to fetch the encryption key from the Backup Key Vault and decrypt the backup.

The vault itself is geographically distributed across five data centers. It also enforces password verification and renders the key permanently inaccessible after a set threshold of unsuccessful attempts is crossed, safeguarding against brute-force attempts by malicious actors to retrieve the key.
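As a conceptual sketch only — this is not WhatsApp's actual protocol, and a toy XOR keystream stands in for real authenticated encryption and the HSM — the password-wrapped key flow and attempt limit described above might look like:

```python
import hashlib
import hmac
import secrets

MAX_ATTEMPTS = 5  # vault renders the key inaccessible past this threshold

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy reversible "cipher" for illustration; a real design uses AES.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

class ToyKeyVault:
    """Stores the password-wrapped backup key and enforces an attempt limit."""

    def __init__(self, password: str, backup_key: bytes):
        self.salt = secrets.token_bytes(16)
        wrap_key = hashlib.pbkdf2_hmac("sha256", password.encode(), self.salt, 100_000)
        self.wrapped = _keystream_xor(wrap_key, backup_key)   # wrap the random key
        self.check = hmac.new(wrap_key, b"verify", "sha256").digest()
        self.failures = 0

    def recover(self, password: str) -> bytes:
        if self.failures >= MAX_ATTEMPTS:
            raise PermissionError("key permanently inaccessible")
        wrap_key = hashlib.pbkdf2_hmac("sha256", password.encode(), self.salt, 100_000)
        if not hmac.compare_digest(self.check, hmac.new(wrap_key, b"verify", "sha256").digest()):
            self.failures += 1
            raise ValueError("wrong password")
        return _keystream_xor(wrap_key, self.wrapped)         # unwrap the key

backup_key = secrets.token_bytes(32)          # random per-device backup key
vault = ToyKeyVault("correct horse", backup_key)
assert vault.recover("correct horse") == backup_key
```

The point of the design is that the cloud provider only ever sees ciphertext: the random backup key never leaves the device unwrapped, and the vault, not the client, counts failed password attempts.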

Unencrypted cloud backups have been a major security loophole through which law enforcement agencies have been able to access WhatsApp chats to gather incriminating evidence in criminal investigations. By closing this escape outlet, Facebook is once again setting itself on a collision course with governments across the world, which have decried the company's decision to introduce E2EE across all of its services.

Facebook has since adopted E2EE for Secret Conversations on Messenger and recently extended the feature for voice calls and video calls. In addition, the social media giant is planning a limited test of E2EE for Instagram direct messages.

"WhatsApp is the first global messaging service at this scale to offer end-to-end encrypted messaging and backups, and getting there was a really hard technical challenge that required an entirely new framework for key storage and cloud storage across operating systems," said Facebook's chief executive Mark Zuckerberg in a post.


WhatsApp Photo Filter Bug Could Have Exposed Your Data to Remote Attackers
3.9.21 
Social  Thehackernews
A now-patched high-severity security vulnerability in WhatsApp's image filter feature could have been abused by an attacker to send a malicious image over the messaging app and read sensitive information from the app's memory.

Tracked as CVE-2020-1910 (CVSS score: 7.8), the flaw concerns an out-of-bounds read/write and stems from applying specific image filters to a rogue image and sending the altered image to an unwitting recipient, thereby enabling an attacker to access valuable data stored in the app's memory.

"A missing bounds check in WhatsApp for Android prior to v2.21.1.13 and WhatsApp Business for Android prior to v2.21.1.13 could have allowed out-of-bounds read and write if a user applied specific image filters to a specially-crafted image and sent the resulting image," WhatsApp noted in its advisory published in February 2021.

Cybersecurity firm Check Point Research, which disclosed the issue to the Facebook-owned platform on November 10, 2020, said it was able to crash WhatsApp by switching between various filters on the malicious GIF files.

Specifically, the issue was rooted in an "applyFilterIntoBuffer()" function that handles image filters, which takes the source image, applies the filter selected by the user, and copies the result into the destination buffer. By reverse-engineering the "libwhatsapp.so" library, the researchers found that the vulnerable function relied on the assumption that both the source and filtered images have the same dimensions and also the same RGBA color format.

Given that each RGBA pixel is stored as 4 bytes, a malicious image having only 1 byte per pixel can be exploited to achieve an out-of-bounds memory access since the "function tries to read and copy 4 times the amount of the allocated source image buffer."

WhatsApp said it has "no reason to believe users would have been impacted by this bug." Since WhatsApp version 2.21.1.13, the company has added two new checks on the source image and filter image that ensure that both source and filter images are in RGBA format and that the image has 4 bytes per pixel to prevent unauthorized reads.
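The patched behavior can be illustrated with a minimal Python analogue of such a size check (hypothetical buffer layout, not WhatsApp's actual code):

```python
BYTES_PER_PIXEL = 4  # RGBA

def apply_filter(src: bytes, width: int, height: int) -> bytearray:
    """Copy a source image into a destination buffer, rejecting malformed input.

    The original bug: the native code assumed every source image was RGBA
    (4 bytes per pixel). A crafted image with only 1 byte per pixel made it
    read four times the allocated source buffer -- an out-of-bounds read.
    """
    expected = width * height * BYTES_PER_PIXEL
    if len(src) != expected:          # the check that was missing
        raise ValueError(f"expected {expected} bytes, got {len(src)}")
    dst = bytearray(expected)
    dst[:] = src                      # safe: sizes verified above
    return dst

# A 10x10 image with only 1 byte per pixel is rejected up front
# instead of triggering a 4x over-read during the copy.
try:
    apply_filter(bytes(10 * 10 * 1), 10, 10)
except ValueError as exc:
    print("rejected:", exc)
```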


Experts found two flaws in Facebook for WordPress Plugin
29.3.2021 
Social  Securityaffairs

A critical flaw in the official Facebook for WordPress plugin could be exploited for remote code execution attacks.
Researchers at Wordfence have discovered two vulnerabilities in the Facebook for WordPress plugin, which has more than 500,000 active installations. The plugin allows administrators to capture the actions people take while interacting with their page, such as Lead, ViewContent, AddToCart, InitiateCheckout and Purchase events.

“On December 22, 2020, our Threat Intelligence team responsibly disclosed a vulnerability in Facebook for WordPress, formerly known as Official Facebook Pixel, a WordPress plugin installed on over 500,000 sites.” reads the post published by WordFence. “This flaw made it possible for unauthenticated attackers with access to a site’s secret salts and keys to achieve remote code execution through a deserialization weakness.”

The issue, described as PHP object injection with a POP chain, could be exploited by an unauthenticated attacker with access to a site's secret salts and keys to abuse a deserialization weakness and achieve remote code execution.

The issue could only be exploited by an attacker holding a valid nonce, because the handle_postback function requires one.

“The core of the PHP Object Injection vulnerability was within the run_action() function. This function was intended to deserialize user data from the event_data POST variable so that it could send the data to the pixel console. Unfortunately, this event_data could be supplied by a user.” continues the post. “When user-supplied input is deserialized in PHP, users can supply PHP objects that can trigger magic methods and execute actions that can be used for malicious purposes.”
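PHP's unserialize() and its magic methods have a close analogue in Python's pickle; the toy below (not the plugin's code) shows why deserializing attacker-supplied input is dangerous — the attacker chooses both the callable that runs at deserialization time and its arguments. A harmless builtin stands in for something like os.system:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to "rebuild" the object on load;
    # an attacker controls both the callable and its arguments.
    def __reduce__(self):
        return (list, ("pwned",))

payload = pickle.dumps(Malicious())   # attacker-supplied bytes
result = pickle.loads(payload)        # deserialization invokes list("pwned")
print(result)                         # ['p', 'w', 'n', 'e', 'd']
```

This is why both PHP's documentation and Python's warn against ever deserializing untrusted input: the format itself can encode "call this function with these arguments."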

The experts pointed out that even a deserialization vulnerability that is relatively harmless on its own can, when combined with a gadget or magic method, result in "significant damage" to a site. This means the vulnerability in Facebook for WordPress could be combined with a magic method to upload arbitrary files and achieve remote code execution.

"This meant that an attacker could generate a PHP file new.php in a vulnerable site's home directory. The PHP file contents could be changed to anything, which would allow an attacker to achieve remote code execution," continues Wordfence.

The vulnerability was rated as critical severity and received a CVSS score of 9 out of 10.

Experts reported the flaw to the social network giant on December 22, which fixed it on January 6, with the release of a new version.

After Facebook patched the flaw, the security researchers discovered a Cross-Site Request Forgery to Stored Cross-Site Scripting vulnerability in the updated plugin. The flaw was rated as high-severity and received a CVSS score of 8.8. It was reported to Facebook on January 27 and addressed on February 26, 2021.

“One of the changes they made while updating the plugin addressed the functionality behind saving the plugin’s settings. This was converted to an AJAX action to make the integration process more seamless. The new version introduced the wp_ajax_save_fbe_settings AJAX action tied to the saveFbeSettings function.” states the advisory. “This function is used to update the plugin’s settings with the Facebook Pixel ID, access token, and external business key. These settings help establish a connection with the Facebook pixel console so that event data can be sent from the WordPress site to the appropriate Facebook pixel account.”

The issue could be exploited by an attacker to update the plugin’s settings and steal metric data for a site, and also inject malicious JavaScript code into the setting values.

These values would then be reflected on the settings page, causing the code to execute in a site administrator's browser when they accessed the page. The researchers found that the code could be used to inject malicious backdoors into theme files or create new administrative user accounts that could allow attackers to take over the site.


Facebook Disrupts Chinese Spies Using iPhone, Android Malware
25.3.2021
Social  Securityweek

Facebook’s threat intelligence team says it has disrupted a sophisticated Chinese spying operation that routinely used iPhone and Android malware to hit journalists, dissidents and activists around the world.

The hacking group, known to malware hunters as Evil Eye, has used Facebook to plant links to watering hole websites rigged with exploits for the two major mobile platforms.

Facebook’s Head of Cyber Espionage Investigations Mike Dvilyanski has published an advisory with indicators of compromise (IOCs) and other data to help victims and targets block the attacks.

Dvilyanski said the Evil Eye gang has targeted activists, journalists and dissidents, predominantly Uyghurs from Xinjiang and those living abroad in Turkey, Kazakhstan, the United States, Syria, Australia, Canada and other countries.

“This group used various cyber espionage tactics to identify its targets and infect their devices with malware to enable surveillance,” he said, warning that the Evil Eye gang is “a well-resourced and persistent operation.”

Facebook published details on the TTPs (tactics, techniques and procedures) used by the group, including precise, selective targeting of victims. “This group took steps to conceal their activity and protect malicious tools by only infecting people with iOS malware when they passed certain technical checks, including IP address, operating system, browser and country and language settings,” he explained.

The group also actively hacks -- or impersonates -- websites that resemble domains for popular Uyghur and Turkish news sites. “They also appeared to have compromised legitimate websites frequently visited by their targets as part of watering hole attacks. Some of these web pages contained malicious javascript code that resembled previously reported exploits, which installed iOS malware known as INSOMNIA on people’s devices once they were compromised,” Dvilyanski said.

Facebook also exposed the use of social engineering with fake accounts to create fictitious personas posing as journalists, students, human rights advocates or members of the Uyghur community to build trust with people they targeted and trick them into clicking on malicious links.

The group has also used fake third-party app stores and has been observed outsourcing Android malware development to two Chinese companies. “These China-based firms are likely part of a sprawling network of vendors, with varying degrees of operational security,” Dvilyanski explained.

Facebook has published hashes and domains associated with this threat actor.


Security Analysis Clears TikTok of Censorship, Privacy Accusations
24.3.2021
Social  Threatpost

TikTok’s source code is in line with industry standards, security researchers say.

Nebulous privacy and censorship criticisms of the video social-media app TikTok have been swirling for months. Security analysts from Citizen Lab are the first to collect real data on the platform’s source code, and they reported that TikTok meets reasonable standards of security and privacy.

The platform, they determined, is a customized version of the more intrusive editions of the application that TikTok’s China-based parent company, ByteDance, distributes across East and Southeast Asia, minus those editions’ limitations on access and privacy.

Citizen Lab explained that the controls ByteDance has put in place for the version of TikTok available in the U.S. are sufficient, and that the app does not contain “strong deviations of privacy, security and censorship practices when compared to TikTok’s competitors, like Facebook,” the report said.

There are lingering concerns, however, that the source-code capabilities to censor speech on the various ByteDance apps could be “turned on” in the U.S. version of TikTok down the line.

TikTok is the first social-media platform to come out of the Communist country and explode across the globe. TikTok’s rise has been so meteoric that last year it posted the most downloads in a single quarter of any app ever, and it has crossed more than 2 billion users worldwide.

Last summer, former President Trump threatened to ban TikTok from the U.S., where it has more than 100 million users, and even signed an executive order to block it from app stores due to what he called “national-security concerns.” Then-Commerce Secretary Wilbur Ross added at the time that TikTok allowed “China’s malicious collection of American citizens’ personal data.” Plans to block TikTok were abandoned at the last minute, but questions have lingered.

It turns out those accusations were unfounded, according to these new findings from Citizen Lab.

“TikTok and Douyin do not appear to exhibit overtly malicious behavior similar to those exhibited by malware,” the report said. “We did not observe either app collecting contact lists, recording and sending photos, audio, videos or geolocation coordinates without user permission.”

ByteDance: TikTok & Douyin
ByteDance operates two distinct platforms, TikTok and Douyin. ByteDance launched in China with Douyin. In China, it’s understood companies are required to moderate content to comply with government speech restrictions, under threat of being shut down, the report explained.

ByteDance later launched TikTok for markets outside China, in June 2018. Both Douyin and TikTok share much of the same source code, with a few regional distinctions.

“We postulate that ByteDance develops TikTok and Douyin starting out from a common code base and applies different customizations according to market needs,” the Citizen Lab report said. “We observed that some of these customizations can be turned on or off by different server-returned configuration values. We are concerned but could not confirm that this capability may be used to turn on privacy-violating hidden features.”

ByteDance acquired Musical.ly in Nov. 2017.

“It is likely that both apps already accumulated their own user base, and after the merger it was easier to simply upgrade both apps to the new merged-code version, instead of asking users to install another app,” the report said. That left three distinct versions of the ByteDance codebase: Douyin, and two versions of TikTok known as “Trill” and “Musically.”

“For the parts which we have examined, the differences between Musically and Trill are fewer than the differences between Douyin and the other two,” the report said. “This is expected because Douyin serves a China-only platform separate from the global platform served by regional variants Trill and Musically.”

The Trill version of TikTok is used in East and Southeast Asia and provides tighter privacy and access controls than the Musically version of TikTok, which is available in the West.

“This version distinction is also used to adjust interfaces and provide user settings tailored to the targeted regions,” the report explained. “Users are only given the ability to opt out of ad personalization in Musically, which is likely due to the requirements of the European General Data Protection Regulation (GDPR).”

Other distinctions that the researchers found include the fact that Douyin collected data that could identify a user’s location, while TikTok doesn’t, according to the report.

Dormant Source Code
But rather than these differences being written into the code itself, all three services were set up with controls hard-coded into the internal configuration, leaving dormant strings of code defining privacy and search parameters for other platforms, which could be, in effect, turned on later.

“In the small portion of code which we had examined, we did not find any case in which undesirable features could be enabled by server-returned configuration values,” the researchers said. “However, we are still concerned that this dormant code originally meant for Douyin may be activated in TikTok accidentally, or even intentionally.”

Another potentially problematic aspect of Douyin is that it’s able to update itself via the internet, bypassing the operating system and user control, the research found. TikTok however doesn’t include this capability.

“Overall, TikTok includes some unusual internal designs, but does not otherwise exhibit overtly malicious behavior,” Citizen Lab’s findings concluded. “Douyin’s dynamic code-loading feature can be seen as malicious, as it bypasses the system installation process, but this feature is also commonly seen in Chinese apps and generally accepted in the Chinese market.”

TikTok Censorship Accusations
While the team admits their testing was limited to only the “most popular” posts on TikTok, they were able to conclude the “platform does not enforce obvious post censorship, and if post censorship was enforced at all it would subtly only apply to unpopular posts,” the report added.

Proposed bans on TikTok and WeChat were met with skepticism by some in the security community when early accusations of TikTok abuse emerged, because no evidence ever materialized.

“TikTok hasn’t been shown to collect any more data than other social-media apps,” Paul Bischoff, privacy advocate with Comparitech, told Threatpost last September. “It sets a dangerous precedent of censorship in the U.S. We’re banning a Chinese app but adopting a Chinese censorship policy. The latter is much more concerning.”


Facebook Fails in Bid to Derail $15 Bn Privacy Suit
24.3.2021
Social  Securityweek

The US Supreme Court on Monday declined to consider an appeal by Facebook that would have derailed a $15 billion lawsuit over whether it illegally tracked users about a decade ago.

The nation's top court issued an order denying a request by the leading social network to review a California federal court's decision to allow the litigation accusing Facebook of violating wiretap laws.

Facebook did not respond to a request for comment.

It had argued in court filings that it was a legitimate "party" for exchanges involving digital content received from software tools such as "like" or "share" buttons plugged into other websites.

"Rather than eavesdropping on a separate communication, the communication with Facebook contained distinct content intended for Facebook," the leading social network said in a legal filing.

US wiretap law makes it illegal to snoop on electronic communications unless one is a party to the exchange.

The suit accuses Facebook of wrongly tracking users even when they were away from the social network, then making money from the data by selling it to marketers for targeting ads.

The class action lawsuit consolidated more than 20 related cases filed in an array of US states in 2011 and early 2012 and seeks more than $15 billion on behalf of members of the world's largest social network.

Facebook has since changed the way it uses software snippets such as like and share buttons that gather information about users' internet activities.

The Silicon Valley tech giant added that allowing the case to proceed would have "sweeping, and detrimental consequences."

Critics and regulators have repeatedly taken aim at Facebook over user privacy.


CopperStealer Malware Targets Facebook and Instagram Business Accounts
20.3.2021
Social  Threatpost

A previously undocumented password and cookie stealer has been compromising accounts on big platforms like Facebook, Apple, Amazon and Google since 2019 and then using them for cybercriminal activity.

A strain of malware that until now has gone undocumented has been quietly hijacking online accounts of advertisers and users of Facebook, Apple, Amazon, Google and other web giants since July 2019 and then using them for nefarious activity, researchers have found.

Dubbed CopperStealer, the malware acts similarly to previously discovered, China-backed malware family SilentFade, according to a report from Proofpoint researchers Brandon Murphy, Dennis Schwarz, Jack Mott and the Proofpoint Threat Research Team published online this week.

“Our investigation uncovered an actively developed password and cookie stealer with a downloader function, capable of delivering additional malware after performing stealer activity,” they wrote.

CopperStealer is in the same class not only as SilentFade, whose creation Facebook attributed to Hong Kong-based ILikeAD Media International Company Ltd., but also other malware such as StressPaint, FacebookRobot and Scranos. Researchers have deemed SilentFade in particular responsible for compromising accounts of social-media giants like Facebook and then using them to engage in cybercriminal activity, such as running deceptive ads, to the tune of $4 million in damages.

“Previous research from Facebook and Bitdefender has exposed a rapidly increasing ecosystem of Chinese-based malware focused on the monetization of compromised social media and other service accounts,” they wrote. “Findings from this investigation point towards CopperStealer being another piece of this ever-changing ecosystem.”

Specifically, researchers analyzed a sample of the malware targeting Facebook and Instagram business and advertiser accounts. However, they also identified additional versions of CopperStealer that target other major service providers, including Apple, Amazon, Bing, Google, PayPal, Tumblr and Twitter, they said.

Proofpoint researchers discovered CopperStealer after they observed suspicious websites advertised as “KeyGen” or “Crack” sites–including keygenninja[.]com, piratewares[.]com, startcrack[.]com, and crackheap[.]net–hosting samples delivering multiple malware families that included CopperStealer.

The sites purported to offer “cracks,” “keygen” and “serials” to circumvent licensing restrictions of legitimate software, researchers noted. What they provided instead were Potentially Unwanted Programs/Applications (PUP/PUA) or malicious executables capable of installing and downloading additional payloads, they said.

Proofpoint researchers worked with Facebook, Cloudflare and other service providers to disrupt and intercept CopperStealer so they could learn its ways, they said. This activity included Cloudflare “placing a warning interstitial page in front of the malicious domains and establishing a sinkhole for two of the malicious domains before they could be registered by the threat actor,” researchers wrote. The sinkhole limited threat actors’ ability to collect victim data while providing insight for researchers into victim demographics as well as the malware’s behavior and scope.

What researchers found was that although CopperStealer is not very sophisticated and has only “basic capabilities,” it can pack a punch. In the first 24 hours of operation, the sinkhole logged 69,992 HTTP requests from 5,046 unique IP addresses originating from 159 countries and representing 4,655 unique infections, they found. The top five countries impacted by the malware based on unique infections were India, Indonesia, Brazil, Pakistan and the Philippines, they said.

In its attacks, CopperStealer retrieves a download configuration from the C2 server that extracts an archive named “xldl.dat,” which appears to be a legitimate download manager called Xunlei from Xunlei Networking Technologies Ltd. that was previously linked to malware in 2013. CopperStealer then uses an API exposed by the Xunlei application to download the configuration for the follow-up binary, researchers wrote.

One of the payloads researchers discovered CopperStealer to deliver most recently is Smokeloader, a modular backdoor. However, historically the malware has used a variety of payloads delivered from a handful of URLs, researchers said.

Proofpoint researchers will continue to help disrupt CopperStealer’s current activities as well as monitor the threat landscape to identify and detect future evolutions of the malware, they said.


Facebook Paid Out $50K for Vulnerabilities Allowing Access to Internal Systems
20.3.2021
Social  Securityweek

A researcher says he has earned more than $50,000 from Facebook after discovering vulnerabilities that could have been exploited to gain access to some of the social media giant’s internal systems.

Cybersecurity engineer and bug bounty hunter Alaa Abdulridha revealed in December 2020 that he had earned $7,500 from Facebook for discovering a vulnerability in a service apparently used by the company’s legal department. The researcher said the security hole could have been exploited to reset the password of any account for a web application used internally by Facebook employees.

Internal Facebook app hacked

In a blog post published on Thursday, the researcher said he continued analyzing the same application and once again managed to gain access to it. From there he claimed he was able to launch a server-side request forgery (SSRF) attack and gain access to Facebook’s internal network. Facebook described this as an attacker being able to send HTTP requests to internal systems and read their responses.

“I was able to scan the ports of the local servers and browse the local applications/web apps that the company uses in their infrastructure,” the researcher told SecurityWeek. “I'm sure such a vulnerability in the wrong hands could be escalated to RCE and can pose a huge risk for the company and its customers.”
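For context, a first line of defense against SSRF is refusing to fetch URLs that resolve to internal address ranges. A minimal, illustrative sketch (not Facebook's or the vendor's code; real mitigations also need allowlists and protection against DNS rebinding):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_target(url: str) -> bool:
    """Return True if a user-supplied URL points at loopback, private,
    or link-local address space -- a common (partial) SSRF check.
    Illustrative only; a complete defense is more involved."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable input: fail closed
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable: fail closed
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False

print(is_internal_target("http://127.0.0.1:8080/admin"))  # True
print(is_internal_target("http://10.0.0.5/metadata"))     # True
```

A server that fetches user-supplied URLs without a check like this lets an attacker probe exactly what the researcher describes: ports and web apps on the internal network.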

The social media giant awarded him nearly $50,000 for this second exploit chain.

Abdulridha also claimed the account takeover attack may have allowed a hacker to access accounts for other internal Facebook applications as well, but Facebook told SecurityWeek it had not found any evidence to suggest that the flaw could be escalated to access other internal accounts.

Facebook has clarified that the vulnerabilities reported by Abdulridha actually affected a third-party service designed for signing documents and they impacted anyone using this service, not just Facebook. The company said it worked with the third-party vendor to quickly get the flaws fixed and said it had found no evidence of malicious exploitation, noting that exploiting the weaknesses was a complex task.

The company also pointed out that the first vulnerability only allowed access to accounts within the third-party document signing app, but did not grant access to any employee accounts used for other internal applications.

While the researcher claimed that it took Facebook nearly 6 months to patch the second round of vulnerabilities, the company told SecurityWeek that while the report was only closed in February, the bugs were actually completely fixed — by both Facebook and the third-party vendor — within a few days.

Facebook also said that while it paid out a bug bounty based on the maximum possible impact it could determine, it did not agree with the researcher’s belief that the SSRF vulnerabilities could have been escalated to remote code execution.


Facebook Now Lets Mobile Users Secure Accounts with Security Keys
19.3.2021
Social  Securityweek

Social media and advertising giant Facebook today announced that it is now allowing mobile users to secure their accounts with the help of security keys.

Available for Facebook’s desktop users since 2017, the authentication method requires that the user confirm authentication requests with the help of a physical security key.

This additional authentication step is meant to significantly increase account protection, as it relies on the use of a physical device that an attacker is assumed to never have access to.

“Starting today, you can set up two-factor authentication and log into Facebook on iOS and Android mobile devices using a security key, available to anyone in the world,” Facebook announced.

Two-factor authentication (2FA) has evolved from codes sent via SMS or email to the use of authenticator applications and security keys, making it increasingly difficult for a threat actor to come into possession of both the account password and the second factor.
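The authenticator-app half of that evolution is typically TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time. A stdlib-only sketch, checked against the RFC's published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password -- the scheme behind
    authenticator apps. Stdlib-only sketch."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret is ASCII "12345678901234567890"
# (base32-encoded below); at T=59 the 8-digit code is "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))
```

A security key goes further than TOTP: the secret never leaves the hardware and the browser binds the challenge to the site's origin, which is what defeats phishing.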

Since 2017, Facebook has been providing users the option to enable 2FA and choose physical security keys as the second authentication factor, with that feature now available for iOS and Android users as well.

Users who may need such strong authentication protection are those most exposed to malicious attacks, including public figures, politicians, journalists, and human rights defenders, among others.

“We strongly recommend that everyone considers using physical security keys to increase the security of their accounts, no matter what device you use,” Facebook says.

Security keys can connect via Bluetooth or plug directly into phones.

To enable the use of security keys as the authentication method, Facebook users should head over to the Security and Login section of the Settings menu.

The social platform also says it plans to expand the availability of its Facebook Protect program to include additional at-risk groups, alongside political campaigns and candidates.


Teen Behind Twitter Bit-Con Breach Cuts Plea Deal
18.3.2021
Social  Threatpost

The ‘young mastermind’ of the Twitter hack will serve three years in juvenile detention.

Thanks to a new plea deal with the Florida State Attorney’s Office, the 18-year-old behind last summer’s breach of Twitter’s high-profile accounts will not be charged as an adult, and instead will serve his sentence in juvenile detention.

Graham Ivan Clark was arrested seven months ago, and has accepted responsibility for the July “Bit-Con” Twitter breach. He will spend the next six years under supervision — three years in juvenile detention and three years of probation — which is the maximum number of years of supervision permitted by Florida’s Youthful Offender Act, the State Attorney’s Office said in a statement.

However, if Clark violates probation, he will face a minimum of 10 years in adult prison, prosecutors said. He turned 18 in January, and will be under supervision until 2026, when he will be 23 years old, they added.

Clark’s Twitter Bit-Con
On July 15, Clark breached Twitter’s internal systems to take over the accounts of some of the platform’s most famous verified accounts, including those of Barack Obama, Bill Gates, Elon Musk and Apple. Clark then asked their followers to send Bitcoin to an account he controlled, which allowed Clark to steal more than $117,000.

Clark was charged with co-defendants Mason Sheppard and Nima Fazeli, but he was identified by law enforcement as the “young mastermind.”

“He took over the accounts of famous people, but the money he stole came from regular, hard-working people,” Hillsborough State Attorney Andrew Warren said. “Graham Clark needs to be held accountable for that crime, and other potential scammers out there need to see the consequences. In this case, we’ve been able to deliver those consequences while recognizing that our goal with any child, whenever possible, is to have them learn their lesson without destroying their future.”

In all, 130 accounts were hijacked because of a mobile spear-phishing campaign targeting Twitter employees.

“This attack relied on a significant and concerted attempt to mislead certain employees, and exploit human vulnerabilities, to gain access to our internal systems,” Twitter said in its update from last July. “This was a striking reminder of how important each person on our team is in protecting our service.”

Cybercrime Investigators Flex
Clark was facing 30 felony charges stemming from the Twitter takeover scam, including organized and communications fraud, and fraudulent use of personal information, which would have meant years more in detention.

The State Attorney’s Office said the time Clark has already spent incarcerated will be applied to his sentence.

The plea deal means Clark accepts responsibility for the “wide range of hacking and social-engineering techniques to defeat security protocols at Twitter,” according to the prosecutor’s statement.

Authorities want to send the message that they are on the lookout for cybercrime and equipped to arrest, charge and convict would-be threat actors.

“Because of the expertise and dedication of our cybercrime investigators, working with State Attorney Warren’s Office and the FBI, we were able to recover the stolen Bitcoin so it can be returned to the victims,” Florida Department of Law Enforcement (FDLE) Commissioner Rick Swearingen said. “I thank our FDLE agents and federal partners for their work quickly unraveling this case and hope it serves as a warning to potential hackers that if you commit a computer crime, our FDLE agents will find you.”


US Teen 'Mastermind' in Epic Twitter Hack Sentenced to Prison
18.3.2021
Social  Securityweek

A Florida teenager accused of masterminding a Twitter hack of celebrity accounts in a cryptocurrency scheme has been sentenced to three years in juvenile prison in a plea agreement, officials said.

State prosecutors announced the deal Tuesday in the case of Graham Ivan Clark, 18, described as the mastermind of the July 2020 "Bit-Con" worldwide hack of Twitter accounts of Elon Musk, Bill Gates, Barack Obama, Joe Biden and others.

Hillsborough County State Attorney Andrew Warren said Clark, who was 17 when he was charged, would serve three years in a juvenile prison followed by three years probation, the maximum allowed under Florida's Youthful Offender Act.

If Clark violates his probation, he will face a minimum 10-year sentence in adult prison.

The hack, which resulted in federal charges against three other people, hijacked the celebrity accounts and asked their followers to send bitcoin to an account, promising to double their money.

"He took over the accounts of famous people, but the money he stole came from regular, hard-working people," Warren said.

"Graham Clark needs to be held accountable for that crime, and other potential scammers out there need to see the consequences."

Warren added that “our goal with any child, whenever possible, is to have them learn their lesson without destroying their future,” and that the deal offers Clark a chance at rehabilitation.

The case was investigated by federal authorities, but Clark was turned over to the state because he was a juvenile at the time.

According to prosecutors, Clark used his access to Twitter's internal systems to take over the accounts of several companies and celebrities; the scheme involved a combination of "technical breaches and social engineering," netting some $100,000.

Twitter said at the time that the July 15 incident stemmed from a "spear phishing" attack which deceived employees about the origin of the messages.

The hack affected at least 130 accounts, including that of Biden while he was a candidate for president.


Twitter Users Can Now Secure Accounts With Multiple Security Keys
17.3.2021
Social  Securityweek

Twitter on Monday announced that users with two-factor authentication (2FA) enabled can now use multiple security keys to protect their accounts.

The social platform has had support for security keys for desktop users for some time, and made the feature available to iOS and Android users too in December 2020.

Now, the company allows users to take advantage of multiple security keys when securing their accounts, regardless of whether on a mobile device or on desktop.

“Secure your account (and that alt) with multiple security keys. Now you can enroll and log in with more than one physical key on both mobile and web,” the company announced.

To use security keys for account protection, users need to enable 2FA via text message or authentication application, select Security Key, and then enter their passwords when prompted, to begin the setup process.

After clicking Start, users can connect their physical security key, either via a USB port or via Bluetooth, after which they will need to touch the button on the key and then follow the on-screen steps to complete the setup process.

Security keys that have been added are displayed in the “Manage security keys” section, under “Two-factor authentication,” allowing users to easily manage them (rename, delete, or add new ones, as needed).

The social platform also points out that the latest version of a supported browser (including Chrome, Edge, Firefox, Opera, and Safari) is needed to add or log in to a Twitter account with a security key.

Twitter also revealed that it will soon provide users with the ability to employ security keys for authentication even if they do not have other methods enabled. However, the company hasn’t provided a specific timeframe for when the feature will become available.


Facebook Halts Project for Undersea Data Cable to Hong Kong
12.3.2021
Social  Securityweek

Facebook has decided to halt its efforts to build a trans-Pacific undersea cable that would have connected California and Hong Kong, due to tensions between the United States and China.

"Due to ongoing concerns from the US government about direct communication links between the United States and Hong Kong, we have decided to withdraw our FCC application," a Facebook spokesperson told AFP on Wednesday, referring to the Federal Communications Commission.

"We look forward to working with all the parties to reconfigure the system to meet the concerns of the US government," the spokesperson added.

The social networking giant and several telecom companies filed their first construction permit in 2018, to connect two sites in California to Hong Kong and Taiwan.

The project was supposed to facilitate communications through fiber optics capable of carrying large volumes of data with very low waiting times.

But Washington resisted, because of perceived potential national security risks regarding China, which has tightened its control over Hong Kong.

In June, the US Department of Justice recommended that a trans-Pacific undersea cable proposed by Google and Facebook bypass Hong Kong.

The cable, named the Pacific Light Cable Network, was originally intended to link the United States, Taiwan, Hong Kong and the Philippines.

The Hong Kong landing station "would expose US communications traffic to collection" by Beijing, the department said.

The FCC gave Google permission in April 2020 to operate the link between North America and Taiwan.


South Africa Opposes WhatsApp-Facebook Data Sharing
6.3.2021
Social  Securityweek

South Africa's information regulator has protested WhatsApp's plans to share user data with Facebook, vowing to engage directly with the popular messaging app to ensure its compliance with national privacy laws.

In January, WhatsApp asked all its users to accept new terms allowing it to share more private information with its parent company Facebook for advertising and e-commerce purposes.

The proposition sparked global outrage, forcing the company to delay its plans and clarify its privacy and security terms.

South Africa opposes WhatsApp-Facebook data sharing

Widespread confusion about WhatsApp's future plans was compounded when it announced that European Union (EU) users would not be forced to agree to share personal information with Facebook.

Non-EU users, meanwhile, have been told they will be partially cut off from the messaging app if they do not accept its revised terms by May 15.

South Africa's Information Regulator (IR) on Wednesday said the new privacy policy violated the country's Protection of Personal Information Act.

"WhatsApp cannot without obtaining prior authorisation from the IR... process any contact information of its users for a purpose other than the one for which the number was specifically intended at collection," the IR said in a statement.

The regulator added that it was "very concerned" about EU users gaining higher privacy protection than their counterparts in Africa.

"Our legislation is very similar to that of the EU," noted IR Chair Pansy Tlakula.

"We do not understand why Facebook has adopted this differentiation between Europe and Africa."

The IR said it had invited Facebook to "a round-table discussion regarding the issues raised" to ensure full compliance of the new terms with South African law.

Under the new terms, merchants using WhatsApp to chat with customers will be able to share data with Facebook, allowing the social networking platform to better target its advertisements.

WhatsApp has defended the policy, claiming it is simply building new ways to chat or shop with businesses "that are entirely optional".


Passwords, Private Posts Exposed in Hack of Gab Social Network
2.3.2021
Social  Threatpost

The Distributed Denial of Secrets group claims it has received more than 70 gigabytes of data exfiltrated from social media platform Gab.

Distributed Denial of Secrets (DDoSecrets), a self-proclaimed “transparency collective,” claims it has received more than 70 gigabytes of data exfiltrated from social media network Gab.

Gab, which touts itself as “a social network that champions free speech, individual liberty and the free flow of information online” has drawn in various alt-right and far-right users. A hacker was reportedly able to obtain the exposed data through an SQL injection vulnerability in the site, DDoSecrets claims.
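To illustrate the class of bug reportedly involved, here is a minimal, hypothetical contrast between string-spliced SQL and a parameterized query (sqlite3 is used only for demonstration; nothing here reflects Gab's actual code):

```python
import sqlite3

# Toy database standing in for any user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

def find_user_unsafe(name):
    # VULNERABLE: attacker input is spliced into the SQL text,
    # the general class of flaw described above.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: [('alice',)]
print(find_user_safe(payload))    # returns []
```

The injected quote turns the unsafe query's WHERE clause into a tautology, dumping the whole table; the parameterized version treats the same payload as an ordinary (non-matching) string.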

Wired, which viewed a sample of the data, said it appears to include both individual and group profiles for Gab users, as well as hashed account passwords and 40 million public and private posts. The profiles include users’ descriptions and privacy settings.

DDoSecrets said it received the files from someone calling themselves “JaXpArO and my Little Anonymous Revival Project.” The group explained in a statement released to DataBreaches.net: “Distributed Denial of Secrets had no role in the compromise of Gab or any other service, and did not crack any password hashes, use any of the plaintext group passwords, or otherwise compromise anyone’s account.”

“Early in the review process, we made the decision to limit the distribution of the dataset to both protect the privacy of innocent Gab users and the integrity of their accounts and private groups,” they said.

Gab CEO Admits Breach
Gab CEO Andrew Torba initially denied the breach in a statement on Gab’s website, but has since acknowledged it occurred in a statement on Twitter (punctuated with a transphobic slur against the group, calling them “demon hackers”).

Torba said the company was aware of a vulnerability “in this area and patched it last week.” The company is also proceeding to undertake a full security audit, he said.

“The entire company is all hands investigating what happened and working to trace and patch the problem,” Torba said in a statement on Feb. 28. He added the leaked passwords were hashed for security.
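Hashing matters here because a properly salted, slow hash keeps leaked credentials from being immediately usable. A generic sketch using PBKDF2 (parameter choices are illustrative and say nothing about Gab's actual scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256.
    Iteration count is an illustrative choice."""
    salt = salt or os.urandom(16)  # unique salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))  # True
print(verify("wrong", salt, digest))    # False
```

With a scheme like this, an attacker holding the dump must brute-force each password individually, which is why "the leaked passwords were hashed" is a meaningful, if partial, mitigation.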

‘Gold Mine’ for Investigators into Jan. 6 Attack
The breach, which DDoSecrets calls GabLeaks, is aimed at exposing the platform’s most dangerous users, the group said. DDoSecrets co-founder Emma Best argues that turning over the data is in the public interest.

“It’s another gold mine of research for people looking at militias, neo-Nazis, the far right, QAnon, and everything surrounding January 6,” Best told Wired about the trove of data.

Affected users, according to Wired, reportedly include former President Donald Trump, QAnon-sympathetic freshman Congresswoman Marjorie Taylor Greene, My Pillow CEO Mike Lindell and radio host Alex Jones.

Following the Jan. 6 Capitol attacks, when social media platforms including Twitter and Facebook banned the account of President Donald Trump and some of his most fervent supporters, many of those users flocked to Gab. The same was true after Amazon stopped hosting Parler, a preferred destination for QAnon conspiracy theorists, white nationalists and other alt-right groups.

DDoSecrets Gears Up With Data Leaks
The Gab release is just the latest leak from DDoSecrets, which appears to be ramping up its operations. DDoSecrets has also recently released data exfiltrated from around 120,000 Myanmar corporations in the wake of the military coup against the country’s government, and published a massive leak of law enforcement data, dubbed BlueLeaks, in June.

DDoSecrets is poised to pick up right where WikiLeaks left off, according to a Wired report on the group from last summer. In 2018, they published emails between Russian leaders and oligarchs, and in 2019, they released hacked emails from a London financial firm known for money laundering.


Judge Approves $650M Facebook Privacy Lawsuit Settlement
1.3.2021 
Social  Securityweek

A federal judge on Friday approved a $650 million settlement of a privacy lawsuit against Facebook for allegedly using photo face-tagging and other biometric data without the permission of its users.

U.S. District Judge James Donato approved the deal in a class-action lawsuit that was filed in Illinois in 2015. Nearly 1.6 million Facebook users in Illinois who submitted claims will be affected.

Donato called it one of the largest settlements ever for a privacy violation.

“It will put at least $345 into the hands of every class member interested in being compensated,” he wrote, calling it “a major win for consumers in the hotly contested area of digital privacy.”

Jay Edelson, a Chicago attorney who filed the lawsuit, told the Chicago Tribune that the checks could be in the mail within two months unless the ruling is appealed.

“We are pleased to have reached a settlement so we can move past this matter, which is in the best interest of our community and our shareholders,” Facebook, which is headquartered in the San Francisco Bay Area, said in a statement.

The lawsuit accused the social media giant of violating an Illinois privacy law by failing to get consent before using facial-recognition technology to scan photos uploaded by users to create and store faces digitally.

The state’s Biometric Information Privacy Act allowed consumers to sue companies that didn’t get permission before harvesting data such as faces and fingerprints.

The case eventually wound up as a class-action lawsuit in California.

Facebook has since changed its photo-tagging system.


Twitter Shuts Down Four Networks of State-Sponsored Disinformation Accounts
25.2.2021
Social  Securityweek

Twitter this week announced that it has suspended multiple accounts that were found to be part of four networks involved in disinformation activities associated with Armenia, Iran, and Russia.

The threat actors behind these accounts are believed to be state-sponsored, and Twitter permanently suspended all four networks for violating its manipulation policies.

The Iran-linked accounts, the social media platform says, were part of a network that was initially dismantled in October 2020. Roughly 130 accounts that were part of that network were suspended at the time, based on information provided by the FBI.

The accounts were “attempting to disrupt the public conversation during the first 2020 US Presidential Debate,” Twitter says. The investigation into the network has resulted in an additional 108 accounts operating from Iran being suspended.

The accounts in this network, Twitter underlines, had low engagement, with little impact on public conversation.

A total of 35 accounts linked to the government of Armenia were recently removed from Twitter, all created to “advance narratives that were targeting Azerbaijan and were geostrategically favorable to the Armenian government,” the company says.

Some of the fake accounts claimed to represent government and political figures or news entities in Azerbaijan. The network engaged in spam campaigns to attract followers and amplify the narrative.

Other networks that were recently suspended were found to be linked to Russia, Twitter reveals.

Consisting of 69 fake accounts, the first of the networks was tied to Russian state actors and was meant to amplify narratives aligned with the interests of the Russian government. A subset of the network was focused on “undermining faith in the NATO alliance and its stability.”

An additional 31 accounts that were part of two other networks were believed to be affiliated with the Internet Research Agency (IRA) and with Russian government-linked actors.

“These accounts amplified narratives that had been previously associated with the IRA and other Russian influence efforts targeting the United States and European Union,” Twitter explains.

All of these accounts have been added to Twitter’s archive of state-linked information operations, allowing researchers to conduct their own investigations and analysis. Since October 2018, Twitter has suspended more than 85,000 accounts associated with manipulation campaigns.


Twitter removes 100 accounts linked to Russia disseminating disinformation
24.2.2021
Social  Securityaffairs

Twitter removed dozens of accounts allegedly used by Russia-linked threat actors to disseminate disinformation and target western countries.
Twitter has removed dozens of accounts used by Russia-linked threat actors that were used to disseminate disinformation and to target the European Union, the United States, and the NATO alliance.

Experts believe the accounts were part of two separate clusters that were operated by Russian actors and that targeted different entities.

A first cluster was composed of 69 fake accounts; some of them were used to amplify narratives aligned with the politics of the Russian government, while a second subset focused on undermining faith in the NATO alliance and its stability.

The second Russian-linked disinformation operation comprised 31 accounts from two distinct networks allegedly affiliated with the Internet Research Agency (IRA) and with Russian government-linked actors. The accounts were used to amplify narratives previously associated with the IRA and other Russia-linked organizations, and were involved in disinformation campaigns targeting the United States and the European Union.

“Our first investigation found and removed a network of 69 fake accounts that can be reliably tied to Russian state actors.” reads the post published by Twitter. “As part of our second investigation in this region, we removed 31 accounts from two networks that show signs of being affiliated with the Internet Research Agency (IRA) and Russian government-linked actors.”

Twitter also removed other networks of accounts employed in disinformation operations conducted by nation-state actors. 100 accounts were linked to Russia, 35 to Armenia, 130 to Iran.

“Today we are disclosing four networks of accounts to our archive of state-linked information operations; the only archive of its kind in the industry. The networks we are disclosing relate to independent, state-affiliated information operations that we have attributed to Armenia, Russia and a previously disclosed network from Iran.” Twitter concludes.

“Since we launched our first archive in October 2018, we have disclosed data related to more than 85,000 accounts associated with platform manipulation campaigns originating from 20 countries, to our information operations archive.”


Complaint Blasts TikTok’s ‘Misleading’ Privacy Policies
17.2.2021
Social  Threatpost

TikTok is again in hot water for how the popular video-sharing app collects and shares data – particularly from its underage userbase.

An umbrella group comprising 44 consumer-privacy watchdog organizations has filed a complaint against TikTok, saying the wildly popular video-sharing platform has “misleading” data-collection policies.

ByteDance-owned TikTok has skyrocketed in popularity, with more than 2 billion downloads on the Google Play and Apple App Store marketplaces. The complaint was filed by the European Consumer Organisation (BEUC), made up of consumer-privacy watchdog groups from 32 countries. The BEUC says its goal is to ensure the European Union makes policy decisions to “improve the lives of consumers.”

According to the complaint, TikTok’s lack of data-collection transparency — particularly as it affects the platform’s large juvenile userbase — is potentially in violation of the EU’s General Data Protection Regulation (GDPR). The complaint was filed with the European Commission (the executive branch of the European Union, responsible for proposing legislation and implementing decisions) and a “network of consumer protection authorities.”

“TikTok does not clearly inform its users, especially children and teenagers, about what personal data is collected, for what purpose and for what legal reason,” said the BEUC, in a report released Tuesday, along with the complaint. “These practices are problematic inter alia as they do not allow consumers to make a fully informed decision about whether to register to the app and/or to exercise their rights under the GDPR.”

A TikTok spokesperson told Threatpost that an in-app summary of TikTok’s Privacy Policy has been developed “with vocabulary and a tone of voice that makes it easier for teens to understand our approach to privacy.”

“We’re always open to hearing how we can improve, and we have contacted BEUC as we would welcome a meeting to listen to their concerns,” the TikTok spokesperson told Threatpost.

TikTok: ‘Unclear’ Data-Collection Policy
The complaint claims that TikTok’s terms of use and privacy policies provide unclear privacy statements about how it collects and shares data. For instance, TikTok’s privacy policy does not provide an “exact list” of companies who receive the data that TikTok collects and shares (beyond indicating data is shared with broad categories of cloud storage providers, business partners, content moderation services and such).

Other details are not specified in TikTok’s privacy policy, said the BEUC – for instance, it does not provide information regarding the countries to which data is transferred (other than stating that data will be stored at a destination outside of the “European Economic Area”); and under which legal basis that location data is processed.

The BEUC also alleged that TikTok’s privacy policy (particularly for users aged 13 to 18) is difficult to access. For example, in order to access the privacy policy, users must have an existing account – meaning “the essential information is therefore not given to children and teenagers upon registration and at the pre-contractual stage,” said the BEUC.

The Impact on TikTok’s Young User Base
The report highlighted that a large part of TikTok’s userbase is made up of children. For instance, in the United States, a report found that more than one-third of daily TikTok users are 14 or younger – with many videos seeming to come from children who are below 13.

As such, TikTok needs to “clearly inform its users, especially in a way comprehensible to children and teenagers, about what personal data is collected, for what purpose and for what legal reason,” according to the BEUC.

“We consider that some of these, as well as other…practices are potentially in breach of the General Data Protection Regulation and have brought them to the attention of Data Protection Authorities in the context of their ongoing investigations into the company,” said the BEUC.

TikTok has previously found itself in hot water when it comes to its younger user base. In May, a group of privacy advocates filed a complaint with the Federal Trade Commission (FTC) alleging the platform failed to adequately protect children’s privacy.

But the social-media platform has also sought to improve privacy for its teen users by changing the privacy settings for all registered accounts belonging to users under the age of 16, so that they are private by default. A limited TikTok app for users under 13 was also launched last year, and the company is partnering with parent watchdog group Common Sense in an effort to deliver appropriate videos for younger TikTok-ers.

“Keeping our community safe, especially our younger users, and complying with the laws where we operate are responsibilities we take incredibly seriously,” the TikTok spokesperson told Threatpost. “Every day we work hard to protect our community which is why we have taken a range of major steps, including making all accounts belonging to users under 16 private by default.”

Other TikTok Toils Outlined by Privacy Watchdogs
The complaint outlined an array of other issues with the TikTok app beyond its privacy policy. For instance, the BEUC claims that TikTok does not do a good job making marketing efforts obvious to its younger userbase. And, it is potentially failing to conduct due diligence when it comes to protecting children from inappropriate content – such as videos showing suggestive content, argued the BEUC.

The BEUC also took issue with TikTok’s “virtual item policy,” where users can purchase coins that they can use as virtual gifts for TikTok celebrities whose performances they like. TikTok claims an “absolute right” to modify the exchange rate between the coins and gifts – which the BEUC said is “misleading” and could potentially allow the company to skew financial transactions in its own favor.

Finally, TikTok’s terms of service are “unclear, ambiguous and favor TikTok to the detriment of its users,” said the BEUC. “Its copyright terms are equally unfair as they give TikTok an irrevocable right to use, distribute and reproduce the videos published by users, without remuneration,” according to the BEUC.

What’s Next for TikTok
As part of its complaint, the BEUC wants authorities to launch a comprehensive investigation into TikTok’s policies and practices.

“Together with our members — consumer groups from across Europe — we urge authorities to take swift action,” Monique Goyens, director general at the BEUC, said in a statement. “They must act now to make sure TikTok is a place where consumers, especially children, can enjoy themselves without being deprived of their rights.”

TikTok has previously come under fire for various security and privacy problems – even last year facing a threat of a ban in the United States out of fear that the app was surreptitiously collecting data on U.S. government employees and contractors to use in China’s cyber-activities against the United States.

A vulnerability in TikTok, disclosed in January, could have allowed attackers to easily compile users’ phone numbers, unique user IDs and other data ripe for phishing attacks. Researchers in September disclosed four high-severity flaws in the Android version of TikTok that could have easily been exploited by a seemingly benign third-party Android app.

On the privacy front, in August TikTok was found to be collecting unique identifiers from millions of Android devices without their users’ knowledge using a tactic previously prohibited by Google because it violated people’s privacy.

“TikTok is walking the well-trodden path of other social media products that have access to huge swathes of personal information and have limited justifications other than the legitimate interests which is often cited as a response to GDPR but gets more complicated when the data doesn’t relate to adults,” Andrew Barratt, managing principal of Solutions and Investigations at Coalfire, told Threatpost. “Ultimately it would be beneficially to see regulators take a standards based approach to privacy rather than complex contractual and legal position,” he added.


Telegram flaw could have allowed access to users’ secret chats
17.2.2021
Social  Securityaffairs

Experts at Shielder disclosed a flaw in the Telegram app that could have exposed users’ secret messages, photos, and videos to remote attackers.
Researchers at cyber security firm Shielder discovered a critical flaw affecting iOS, Android, and macOS versions of the instant messaging app Telegram.

The experts discovered that sending a sticker to a Telegram user could have exposed his secret chats, photos, and videos to remote attackers.

In 2019, Telegram introduced animated stickers; this was the starting point of the experts’ investigation. The “rlottie” folder caught their attention: it holds Samsung’s native library for playing Lottie animations, a format originally created by Airbnb.

The experts discovered multiple flaws in the way the secret chat functionality is implemented and in the way Telegram handles animated stickers. An attacker could have exploited the flaws by sending malformed stickers to unsuspecting users, gaining access to messages, photos, and videos exchanged through both classic and secret chats.
“What follows is my journey in researching the lottie animation format, its integration in mobile apps and the vulnerabilities triggerable by a remote attacker against any Telegram user. The research started in January 2020 and lasted until the end of August, with many pauses in between to focus on other projects.” reads the analysis published by Shielder experts.

“During my research I have identified 13 vulnerabilities in total: 1 heap out-of-bounds write, 1 stack out-of-bounds write, 1 stack out-of-bounds read, 2 heap out-of-bound read, 1 integer overflow leading to heap out-of-bounds read, 2 type confusions, 5 denial-of-service (null-ptr dereferences).”

The experts used a fuzzing approach to test Samsung’s C++ library rlottie, which parses Lottie animations, and triaged the resulting crashes. Telegram’s developers chose this library instead of Airbnb’s.

“It’s important to note here also that Telegram developers chose to fork the rlottie project and maintain multiple forks of it, which makes security patching especially hard.” continues the report. “This will turn out to be an additional problem since the Samsung’s rlottie developers do not track security issues caused by untrusted animations in their project because they are not “the intended use case for rlottie” (quote from https://gitter.im/rLottie-dev/community ).”

Once AFL-fuzz was launched, the experts observed multiple crashes, some of which were caused by serious issues, including heap-based out-of-bounds reads/writes, stack-based out-of-bounds writes, and high-address SEGVs.
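The core idea of such fuzzing — feeding a parser structurally valid but corrupted input — can be sketched in Python. This is a hypothetical illustration only (the researchers ran AFL against the native rlottie code, not a Python mutator), and the sample document and boundary values below are arbitrary:

```python
import copy
import json
import random

def mutate_lottie(animation: dict, rate: float = 0.1) -> dict:
    """Return a copy of a Lottie JSON animation with some numeric
    fields replaced by boundary values (a crude mutation strategy)."""
    corrupted = copy.deepcopy(animation)

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if isinstance(value, (int, float)) and random.random() < rate:
                    # Boundary values that tend to trigger integer
                    # overflows and out-of-bounds accesses in parsers.
                    node[key] = random.choice([-1, 0, 2**31 - 1, 2**31, 2**63])
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(corrupted)
    return corrupted

# A minimal Lottie-like document (real animations are far larger).
sample = {"v": "5.5.2", "fr": 60, "w": 512, "h": 512, "layers": []}
testcase = json.dumps(mutate_lottie(sample))
```

Each serialized test case would then be rendered by the target library while a tool such as AFL watches for crashes.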

Telegram addressed the flaws with the release of security updates on September 30 and October 2, 2020.

Shielder decided to wait 90 days before publicly disclosing its findings, to give users time to update their devices.

“Today I shared with you the story of how I have found 13 [vulnerabilities], some with a higher impact than others but all which were promptly fixed by Telegram for all the device families supporting secret chats: Android, iOS and macOS.” concludes the expert. “This research helped me understand once more that it’s not trivial to limit attack surfaces at scale in end-to-end encrypted contexts without losing functionalities.”

I suggest reading the step-by-step analysis published by Shielder.

Last week, security researcher Dhiraj Mishra reported a bug in Telegram macOS app that made it possible to access self-destructing audio and video messages long after they disappeared from secret chats.


Facebook Announces Payout Guidelines for Bug Bounty Program
17.2.2021
Social  Securityweek

Facebook on Tuesday announced several new features for its bug bounty program, including an educational resource and payout guidelines.

The payout guidelines provide insight into the process used by the company to determine rewards for certain vulnerability categories. Specifically, it provides information on the maximum bounty for each category and describes the mitigating factors that can result in a lower reward.

Payment guidelines are currently available for page admin vulnerabilities, for which the top bounty is $5,000, server-side request forgery (SSRF), with a maximum reward of $40,000, and bugs in mobile apps, for which the bounty is capped at $45,000.

For example, payouts are lowered depending on whether and how much user interaction is required for exploitation. There are several mitigating factors in each category.

The social media giant also announced the launch of Facebook Bug Bounty Academy, a resource whose goal is to provide information for bug bounty hunters on the best ways to test the company’s services and how to improve their chances of finding valid vulnerabilities.

“Our goal is to provide a launchpad for new researchers beginning to hunt on the Facebook program and explain the specific aspects of this program that make it different from other bug bounty programs,” Facebook said. “The first release of knowledge articles provides advice on how to write reports, avoid common false positives, and a guide on how to set up accounts and test environments.”

Facebook also informed researchers on Tuesday that its Lite apps will also include features designed for vulnerability research, such as the option to disable certificate pinning, fizz support, and network traffic compression.

The social media company announced in November that it had paid out more than $11.7 million in bug bounties since the launch of its program in 2011, including nearly $2 million in 2020.


A Sticker Sent On Telegram Could Have Exposed Your Secret Chats
16.2.2021  Social  Thehackernews

Cybersecurity researchers on Monday disclosed details of a now-patched flaw in the Telegram messaging app that could have exposed users' secret messages, photos, and videos to remote malicious actors.

The issues were discovered by Italy-based Shielder in iOS, Android, and macOS versions of the app. Following responsible disclosure, Telegram addressed them in a series of patches on September 30 and October 2, 2020.

The flaws stemmed from the way the secret chat functionality operates and from the app's handling of animated stickers, thus allowing attackers to send malformed stickers to unsuspecting users and gain access to messages, photos, and videos that were exchanged with their Telegram contacts through both classic and secret chats.

One caveat of note is that exploiting the flaws in the wild may not have been trivial, as it would require chaining the aforementioned weaknesses with at least one additional vulnerability in order to get around the security defenses of modern devices. That might sound prohibitive, but such exploit chains are well within the reach of cybercrime gangs and nation-state groups alike.

Shielder said it chose to wait for at least 90 days before publicly revealing the bugs so as to give users ample time to update their devices.

"Periodic security reviews are crucial in software development, especially with the introduction of new features, such as the animated stickers," the researchers said. "The flaws we have reported could have been used in an attack to gain access to the devices of political opponents, journalists or dissidents."

It's worth noting that this is the second flaw uncovered in Telegram's secret chat feature, following last week's reports of a privacy-defeating bug in its macOS app that made it possible to access self-destructing audio and video messages long after they disappeared from secret chats.

This is not the first time images, and multimedia files sent via messaging services have been weaponized to carry out nefarious attacks.

In March 2017, researchers from Check Point Research revealed a new form of attack against web versions of Telegram and WhatsApp, which involved sending users seemingly innocuous image files containing malicious code that, when opened, could have allowed an adversary to take over users' accounts on any browser completely, and access victims' personal and group conversations, photos, videos, and contact lists.


The “P” in Telegram stands for Privacy
13.2.2021 
Social  Securityaffairs

Security expert Dhiraj Mishra analyzed the popular instant messaging app Telegram and identified some failures in terms of handling the users’ data.
Summary: While studying the implementation of various security and privacy measures in Telegram, I identified that Telegram again fails in terms of handling users’ data. My initial study started with understanding how self-destructing messages work in the secret chats option. Telegram says that “The clock starts ticking the moment the message is displayed on the recipient’s screen (gets two check marks). As soon as the time runs out, the message disappears from both devices.”

The popular instant messaging app, which has 500 million active users, suffers from a logic bug in Telegram for macOS (7.3 (211334) Stable) that leaves a local copy of received audio/video messages on a custom path even after those messages are deleted or have disappeared from the secret chat.

Technical analysis: Open Telegram for macOS and send a recorded audio/video message in a normal chat; the application leaks the sandbox path where the recorded message is stored as an “.mp4” file.

In my case the path was (/var/folders/x7/khjtxvbn0lzgjyy9xzc18z100000gn/T/). While performing the same task under the secret chat option, the MediaResourceData(path://) URI was not leaked, but the recorded audio/video message still gets stored at the above path.
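The residue check behind this proof of concept can be sketched as follows. This is an illustrative snippet, not the researcher's tooling: it uses a throwaway temporary directory and a made-up file name in place of the real sandbox path:

```python
import pathlib
import tempfile

def find_leftover_media(cache_dir):
    """List .mp4 files left behind in a cache directory; properly
    deleted secret-chat media should not appear here."""
    return sorted(pathlib.Path(cache_dir).glob("*.mp4"))

# Demo with a throwaway directory standing in for the sandbox path
# (in the report it was a folder under /var/folders/.../T/).
with tempfile.TemporaryDirectory() as tmp:
    (pathlib.Path(tmp) / "leftover.mp4").touch()  # simulated residue
    leftovers = [p.name for p in find_leftover_media(tmp)]
print(leftovers)  # ['leftover.mp4']
```

Running such a check against the real cache path after a secret-chat message self-destructs is what demonstrated the bug.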

In the video proof-of-concept the user receives a self-destructed message in the secret chat option, which gets stored even after the message is self-destructed.

Bonus: The above mentioned version of telegram for macOS stores local passcode in plain text, below is the video proof-of-concept.
https://www.youtube.com/embed/zEt-_5b4OaA

Both vulnerabilities were patched in version 7.4 (212543) Stable and a 3,000 EUR bounty was awarded. In the past I have identified multiple vulnerabilities in Telegram; you can read about them here. Later today, Fri 12 Feb 12:15 PM, CVE-2021-27204 and CVE-2021-27205 were assigned. What next?


Secret Chat in Telegram Left Self-Destructing Media Files On Devices
13.2.2021 
Social  Thehackernews

Popular messaging app Telegram fixed a privacy-defeating bug in its macOS app that made it possible to access self-destructing audio and video messages long after they disappeared from secret chats.

The vulnerability was discovered by security researcher Dhiraj Mishra in version 7.3 of the app, who disclosed his findings to Telegram on December 26, 2020. The issue has since been resolved in version 7.4, released on January 29.

Unlike Signal or WhatsApp, conversations on Telegram by default are not end-to-end encrypted, unless users explicitly opt to enable a device-specific feature called "secret chat," which keeps data encrypted even on Telegram servers. Also available as part of secret chats is the option to send self-destructing messages.

What Mishra found was that when a user records and sends an audio or video message via a regular chat, the application leaked the exact path where the recorded message is stored in ".mp4" format. With the secret chat option turned on, the path information is not spilled, but the recorded message still gets stored in the same location.

In addition, even in cases where a user receives a self-destructing message in a secret chat, the multimedia message remains accessible on the system even after the message has disappeared from the app's chat screen.

"Telegram says 'super secret' chats do not leave traces, but it stores the local copy of such messages under a custom path," Mishra told The Hacker News.

Separately, Mishra also identified a second vulnerability in Telegram's macOS app that stored local passcodes in plaintext in a JSON file located under "/Users/<user_name>/Library/Group Containers/<*>.ru.keepcoder.Telegram/accounts-metadata/."

Mishra was awarded €3,000 for reporting the two flaws as part of Telegram's bug bounty program.

Telegram in January hit a milestone of 500 million active monthly users, in part led by a surge in users who fled WhatsApp following a revision to its privacy policy that includes sharing certain data with its corporate parent, Facebook.

While the service does offer client-server/server-client encryption (using a proprietary protocol named "MTProto") and also when the messages are stored in the Telegram cloud, it's worth keeping in mind that group chats offer no end-to-end encryption and that all default chat histories are stored on its servers. This is to make conversations easily accessible across devices.

"So if you are on Telegram and want a truly private group chat, you're out of luck," Raphael Mimoun, founder of the digital security nonprofit Horizontal, said last month.


TikTok privacy issue could have allowed stealing users’ private details
27.1.2021 
Social  Securityaffairs

A vulnerability in the video-sharing social networking service TikTok could have allowed hackers to steal users’ private personal information.
Developers at ByteDance, the company that owns TikTok, have fixed a security vulnerability in the popular video-sharing social networking service that could have allowed attackers to steal users’ private personal information.

Check Point researchers found a vulnerability in the Find Friends feature implemented by TikTok that could have allowed attackers to bypass the service’s privacy protections and access users’ private personal data.

Profile data that could be accessed by exploiting the vulnerability includes phone numbers, nicknames, profile and avatar pictures, and unique user IDs, along with certain profile settings.

“In the recent months, Check Point Research teams discovered a vulnerability within the TikTok mobile application’s friend finder feature. In the vulnerability described in this research an attacker can connect between profile details and phone numbers, while a successful exploitation can enable an attacker to build a database of users and their related phone numbers.” reads the analysis published by CheckPoint researchers. “If exploited, this vulnerability would have only impacted those users who have chosen to associate a phone number with their account (which is not required) or logged in with a phone number.”
Experts focused their activity on all actions related to users’ data and discovered that the mobile app enables contact syncing. This means that a user can sync their contacts to easily find people they know on TikTok, which is done by connecting profile details and phone numbers.

The syncing process is composed of two requests:

Upload contacts
Syncing contacts
The experts provided a step by step exploitation procedure for this privacy issue:

Step 1 – Creating a List of Devices (Registering Physical Devices)
Step 2 – Creating a List of Never Expired Session Tokens
Step 3 – Bypassing TikTok’s HTTP Message Signing
Step 4 – Chaining It All Together to modify HTTP requests and re-sign them.
The experts were able to automate the process of uploading and syncing contacts on a large scale using a short Frida script that allowed them to build a database of users and their connected phone numbers.

The information exfiltrated by exploiting this privacy flaw could be used by attackers to conduct malicious activities, including scams and phishing campaigns.

Check Point helped ByteDance in identifying and addressing the issue.


TikTok Bug Could Have Exposed Users' Profile Data and Phone Numbers
27.1.2021 
Social  Thehackernews
Cybersecurity researchers on Tuesday disclosed a now-patched security flaw in TikTok that could have potentially enabled an attacker to build a database of the app's users and their associated phone numbers for future malicious activity.

Although this flaw only impacts those users who have linked a phone number with their account or logged in with a phone number, a successful exploitation of the vulnerability could have resulted in data leakage and privacy violation, Check Point Research said in an analysis shared with The Hacker News.

TikTok has deployed a fix to address the shortcoming following responsible disclosure from Check Point researchers.

The newly discovered bug resides in TikTok's "Find friends" feature that allows users to sync their contacts with the service to identify potential people to follow.

The contacts are uploaded to TikTok via an HTTP request in the form of a list that consists of hashed contact names and the corresponding phone numbers.

The app, in the next step, sends out a second HTTP request that retrieves the TikTok profiles connected to the phone numbers sent in the previous request. This response includes profile names, phone numbers, photos, and other profile related information.
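Conceptually, the data flowing through these two requests can be sketched as below. The field names and the hash function are illustrative assumptions, not TikTok's actual API; the point is that hashed contact names go up and a phone-number-keyed profile set comes back:

```python
import hashlib

def build_upload_payload(contacts):
    """Build the first (contact-upload) body: hashed contact names
    plus the corresponding phone numbers (field names hypothetical)."""
    return [
        {
            "name_hash": hashlib.md5(name.encode()).hexdigest(),
            "phone_number": phone,
        }
        for name, phone in contacts
    ]

def sync_response_to_db(profiles):
    """Index the second (sync) response by phone number, which is
    exactly the phone-number-to-profile mapping an attacker would
    accumulate into a database."""
    return {p["phone_number"]: p for p in profiles}

payload = build_upload_payload([("Alice", "+15550100"), ("Bob", "+15550101")])
# Simulated server response for the second request.
db = sync_response_to_db(
    [{"phone_number": "+15550100", "nickname": "alice_w", "user_id": "123"}]
)
```

Repeating this pair of requests over attacker-chosen phone numbers is what turns a friend-finding convenience into an enumeration primitive.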

While the upload and sync contact requests are limited to 500 contacts per day, per user, and per device, Check Point researchers found a way to get around the limitation by getting hold of the device identifier, the session cookies set by the server, and a unique token called "X-Tt-Token" that's set when logging into the account with SMS, and by simulating the whole process from an emulator running Android 6.0.1.

It's worth noting that in order to request data from the TikTok application server, the HTTP requests must include X-Gorgon and X-Khronos headers for server verification, which ensures that the messages are not tampered with.

But by modifying the HTTP requests — the number of contacts the attacker wants to sync — and re-signing them with an updated message signature, the flaw made it possible to automate the procedure of uploading and syncing contacts on a large scale and create a database of linked accounts and their connected phone numbers.
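The large-scale automation boils down to batching contacts and rotating device identities so each stays under the quota. The sketch below is a hypothetical illustration of that loop only; the actual signed upload/sync requests (including the X-Gorgon/X-Khronos re-signing) are deliberately abstracted away:

```python
import itertools

def chunked(seq, size):
    """Yield fixed-size batches from a sequence."""
    it = iter(seq)
    while batch := list(itertools.islice(it, size)):
        yield batch

def plan_sync(contacts, devices, daily_limit=500):
    """Rotate through registered device identities so that each one
    stays under the per-device daily quota of 500 contacts. Returns
    a (device, batch_size) plan rather than sending real requests."""
    device_cycle = itertools.cycle(devices)
    return [(next(device_cycle), len(batch))
            for batch in chunked(contacts, daily_limit)]

plan = plan_sync(range(1200), devices=["device-a", "device-b", "device-c"])
print(plan)  # [('device-a', 500), ('device-b', 500), ('device-c', 200)]
```

With enough registered device identities (Step 1 of the procedure above), the per-device limit stops being a meaningful barrier.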

This is far from the first time the popular video-sharing app has been found to contain security weaknesses.

In January 2020, Check Point researchers discovered multiple vulnerabilities within the TikTok app that could have been exploited to get hold of user accounts and manipulate their content, including deleting videos, uploading unauthorized videos, making private "hidden" videos public, and revealing personal information saved on the account.

Then in April, security researchers Talal Haj Bakry and Tommy Mysk exposed flaws in TikTok that made it possible for attackers to display forged videos, including those from verified accounts, by redirecting the app to a fake server hosting a collection of fake videos.

Eventually, TikTok launched a bug bounty partnership with HackerOne last October to help users or security professionals flag technical concerns with the platform. Critical vulnerabilities (CVSS score 9 - 10) are eligible for payouts between $6,900 and $14,800, according to the program.

"Our primary motivation, this time around, was to explore the privacy of TikTok," said Oded Vanunu, head of products vulnerabilities research at Check Point. "We were curious if the TikTok platform could be used to gain private user data. It turns out that the answer was yes, as we were able to bypass multiple protection mechanisms of TikTok that lead to privacy violation."

"An attacker with that degree of sensitive information could perform a range of malicious activities, such as spear phishing or other criminal actions."


Beware — A New Wormable Android Malware Spreading Through WhatsApp
26.1.2021 
Social  Thehackernews
A newly discovered Android malware has been found to propagate itself through WhatsApp messages to other contacts in order to expand what appears to be an adware campaign.

"This malware spreads via victim's WhatsApp by automatically replying to any received WhatsApp message notification with a link to [a] malicious Huawei Mobile app," ESET researcher Lukas Stefanko said.

The link to the fake Huawei Mobile app, upon clicking, redirects users to a lookalike Google Play Store website.

Once installed, the wormable app prompts victims to grant it notification access, which is then abused to carry out the wormable attack.

Specifically, it leverages WhatsApp's quick reply feature — which is used to respond to incoming messages directly from the notifications — to send out a reply to a received message automatically.

Besides requesting permissions to read notifications, the app also requests intrusive access to run in the background as well as to draw over other apps, meaning the app can overlay any other application running on the device with its own window that can be used to steal credentials and additional sensitive information.

The functionality, according to Stefanko, is to trick users into falling for an adware or subscription scam.

Furthermore, in its current version, the malware code is capable of sending automatic replies only to WhatsApp contacts — a feature that could be potentially extended in a future update to other messaging apps that support Android's quick reply functionality.

While the message is sent only once per hour to the same contact, the contents of the message and the link to the app are fetched from a remote server, raising the possibility that the malware could be used to distribute other malicious websites and apps.
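The once-per-hour-per-contact behavior is a simple rate-limiting pattern. The snippet below illustrates that logic generically (it is not the malware's code, and the class and method names are invented for the example):

```python
import time

class ReplyThrottle:
    """Allow at most one auto-reply per contact per interval,
    mirroring the once-per-hour behavior described in the analysis."""

    def __init__(self, interval_s=3600.0):
        self.interval_s = interval_s
        self._last_sent = {}

    def should_reply(self, contact, now=None):
        now = time.time() if now is None else now
        if now - self._last_sent.get(contact, float("-inf")) >= self.interval_s:
            self._last_sent[contact] = now
            return True
        return False

throttle = ReplyThrottle()
print(throttle.should_reply("alice", now=0.0))     # True: first message
print(throttle.should_reply("alice", now=1800.0))  # False: within the hour
print(throttle.should_reply("alice", now=3600.0))  # True: hour elapsed
```

Throttling like this keeps the spam volume low enough to avoid alerting victims while the worm still spreads steadily.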

"I don't remember reading and analyzing any Android malware having such functionality to spread itself via whatsapp messages," Stefanko told The Hacker News.

Stefanko said the exact mechanism behind how it finds its way to the initial set of directly infected victims is not clear; however, it's to be noted the wormable malware can potentially expand from a few devices to many others incredibly quickly.

"I would say it could be via SMS, mail, social media, channels/chat groups etc," Stefanko told The Hacker News.

If anything, the development once again underscores the need to stick to trusted sources to download third-party apps, verify if an app is indeed built by a genuine developer, and carefully scrutinize app permissions before installation.

But the fact that the campaign cleverly banks on the trust associated with WhatsApp contacts implies even these countermeasures may not be enough.


Logic bugs found in popular apps, including Signal and FB Messenger
21.1.2021 
Social  Securityaffairs

Flaws in popular messaging apps, such as Signal and FB Messenger, allowed attackers to force a target device to transmit audio to an attacker-controlled device.
Google Project Zero security researcher Natalie Silvanovich found multiple flaws in popular video conferencing apps, including Signal and FB Messenger, that made it possible to force a target device to transmit audio of its surroundings to an attacker's device.

I found logic bugs that allow audio or video to be transmitted without user consent in five mobile applications including Signal, Duo and Facebook Messenger https://t.co/PlB0PzLzjJ

— Natalie Silvanovich (@natashenka) January 19, 2021
The bugs are similar to a logic flaw discovered in January 2019 in Group FaceTime that allowed a caller to hear a person's audio before the call was answered.
The logic flaws affected the Signal, Google Duo, Facebook Messenger, JioChat, and Mocha messaging apps; the good news is that they have all since been fixed by the respective development teams.

“The ability to force a target device to transmit audio to an attacker device without gaining code execution was an unusual and possibly unprecedented impact of a vulnerability. Moreover, the vulnerability was a logic bug in the FaceTime calling state machine that could be exercised using only the user interface of the device.” reads the post published by Silvanovich. “While this bug was soon fixed, the fact that such a serious and easy to reach vulnerability had occurred due to a logic bug in a calling state machine — an attack scenario I had never seen considered on any platform — made me wonder whether other state machines had similar vulnerabilities as well. “

Most video conferencing applications use WebRTC. Peers establish a WebRTC connection by exchanging call set-up information in the Session Description Protocol (SDP), a process called signaling.

In a typical WebRTC connection, the caller starts off by sending an SDP offer to the callee, which in turn responds with an SDP answer.

The messages contain most of the information needed to transmit and receive media, including codec support, encryption keys, and much more.

“Theoretically, ensuring callee consent before audio or video transmission should be a fairly simple matter of waiting until the user accepts the call before adding any tracks to the peer connection. However, when I looked at real applications they enabled transmission in many different ways. Most of these led to vulnerabilities that allowed calls to be connected without interaction from the callee.” continues the post.

The logical flaws also potentially allowed the caller to force a callee device to transmit audio or video data.

Silvanovich discovered that data is shared even if the receiver has not interacted with the application to answer the call.
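The safe pattern Silvanovich describes — waiting until the user accepts the call before adding any tracks to the peer connection — can be contrasted with the vulnerable one in a toy model. The class and function names below are illustrative, not any app's real code.

```python
class PeerConnection:
    """Minimal stand-in for a WebRTC peer connection."""

    def __init__(self):
        self.tracks = []

    def add_track(self, track):
        self.tracks.append(track)  # once a track is attached, media flows

    def transmitting(self):
        return bool(self.tracks)


def vulnerable_callee(conn, mic):
    # Flawed pattern: attach media during call setup,
    # before the user has tapped "accept".
    conn.add_track(mic)


def safe_callee(conn, mic, user_accepted):
    # Correct pattern: gate the track on explicit consent.
    if user_accepted:
        conn.add_track(mic)


conn1, conn2 = PeerConnection(), PeerConnection()
vulnerable_callee(conn1, "mic")
safe_callee(conn2, "mic", user_accepted=False)
print(conn1.transmitting())  # True  -> audio leaks before the call is answered
print(conn2.transmitting())  # False -> nothing is sent until the user accepts
```

The real bugs were rarely this blunt; most arose from subtler ordering mistakes in the calling state machines, but the consent gate they were all missing is the one modeled here.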

Signal addressed the logic bug in the Android version in September 2019: “The application didn’t check that the device receiving the connect message was the caller device, so it was possible to send a connect message from the caller device to the callee. This caused the audio call to connect, allowing the caller to hear the callee’s surroundings.”
JioChat (flaw in the Android app fixed in July 2020) and Mocha (flaw in the Android app fixed in August 2020): “This design has a fundamental problem, as candidates can be optionally included in an SDP offer or answer. In that case, the peer-to-peer connection will start immediately, as the only thing preventing the connection in this design is the lack of candidates, which will in turn lead to transmission from input devices. I tested this by using Frida to add candidates to the offers created by each of these applications. I was able to cause JioChat to send audio without user consent, and Mocha to send audio and video. Both of these vulnerabilities were fixed soon after they were filed by filtering SDP on the server.”
Facebook Messenger addressed the bug in November 2020.
Google Duo solved the bug in December 2020.
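The Signal fix amounts to validating where a connect message may come from and whether the local user has actually answered. A minimal sketch of that check, with hypothetical function and parameter names (not Signal's actual code):

```python
def accept_connect(role, sender, expected_peer, answered_locally):
    """Decide whether a 'connect' message may complete the call.

    role             -- 'caller' or 'callee' on this device
    sender           -- device the message came from
    expected_peer    -- device we are in a call with
    answered_locally -- has the local user pressed answer?
    """
    if sender != expected_peer:
        return False  # message is not part of this call at all
    if role == "callee" and not answered_locally:
        return False  # the fix: a callee must answer before connecting
    return True


# Attack from the write-up: the caller pushes a connect message to the
# callee, whose phone has not been answered yet.
print(accept_connect("callee", "caller-dev", "caller-dev",
                     answered_locally=False))  # False: rejected
# Legitimate flow: the callee answered, then completes the connection.
print(accept_connect("callee", "caller-dev", "caller-dev",
                     answered_locally=True))   # True: call connects
```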
“The majority of the bugs did not appear to be due to developer misunderstanding of WebRTC features. Instead, they were due to errors in how the state machines are implemented. That said, a lack of awareness of these types of issues was likely a factor. It is rare to find WebRTC documentation or tutorials that explicitly discuss the need for user consent when streaming audio or video from a user’s device.” concludes the expert.


Google Details Patched Bugs in Signal, FB Messenger, JioChat Apps
21.1.2021 
Social  Thehackernews
In January 2019, a critical flaw was reported in Apple's FaceTime group chats feature that made it possible for users to initiate a FaceTime video call and eavesdrop on targets by adding their own number as a third person in a group chat even before the person on the other end accepted the incoming call.

The vulnerability was deemed so severe that the iPhone maker removed the FaceTime group chats feature altogether before the issue was resolved in a subsequent iOS update.

Since then, a number of similar shortcomings have been discovered in multiple video chat apps such as Signal, JioChat, Mocha, Google Duo, and Facebook Messenger — all thanks to the work of Google Project Zero researcher Natalie Silvanovich.

"While [the Group FaceTime] bug was soon fixed, the fact that such a serious and easy to reach vulnerability had occurred due to a logic bug in a calling state machine — an attack scenario I had never seen considered on any platform — made me wonder whether other state machines had similar vulnerabilities as well," Silvanovich wrote in a Tuesday deep-dive of her year-long investigation.

How Does Signaling in WebRTC Work?
Although a majority of the messaging apps today rely on WebRTC for communication, the connections themselves are created by exchanging call set-up information using Session Description Protocol (SDP) between peers in what's called signaling, which typically works by sending an SDP offer from the caller's end, to which the callee responds with an SDP answer.

Put differently, when a user starts a WebRTC call to another user, a session description called an "offer" is created containing all the information necessary for setting up a connection: the kind of media being sent, its format, the transfer protocol used, and the endpoint's IP address and port, among others. The recipient then responds with an "answer," including a description of its endpoint.

The entire process is a state machine, which indicates "where in the process of signaling the exchange of offer and answer the connection currently is."
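This state machine can be sketched in miniature. The states below mirror WebRTC's signalingState values (stable, have-local-offer, have-remote-offer), though the real API also includes provisional-answer and rollback states omitted here for brevity:

```python
from enum import Enum, auto


class SignalingState(Enum):
    STABLE = auto()             # no exchange in progress
    HAVE_LOCAL_OFFER = auto()   # we sent an offer, awaiting the answer
    HAVE_REMOTE_OFFER = auto()  # we received an offer, must answer


# Legal transitions in the offer/answer exchange.
TRANSITIONS = {
    (SignalingState.STABLE, "send_offer"): SignalingState.HAVE_LOCAL_OFFER,
    (SignalingState.STABLE, "recv_offer"): SignalingState.HAVE_REMOTE_OFFER,
    (SignalingState.HAVE_LOCAL_OFFER, "recv_answer"): SignalingState.STABLE,
    (SignalingState.HAVE_REMOTE_OFFER, "send_answer"): SignalingState.STABLE,
}


def step(state, event):
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal event {event!r} in state {state}")
    return nxt


s = SignalingState.STABLE
s = step(s, "send_offer")   # caller sends the SDP offer
s = step(s, "recv_answer")  # callee's SDP answer arrives
print(s)                    # back to STABLE: exchange complete
```

The vulnerabilities Silvanovich found lived in the application-level calling state machines layered on top of this exchange, where an out-of-order or unexpected event was accepted instead of being rejected as illegal.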

Also included optionally as part of the offer/answer exchange is the ability of the two peers to trade SDP candidates to each other so as to negotiate the actual connection between them. It details the methods that can be used to communicate, regardless of the network topology — a WebRTC framework called Interactive Connectivity Establishment (ICE).

Once the two peers agree upon a mutually-compatible candidate, that candidate's SDP is used by each peer to construct and open a connection, through which media then begins to flow.

In this way, both devices share with one another the information needed in order to exchange audio or video over the peer-to-peer connection. But before this relay can happen, the captured media data has to be attached to the connection using a feature called tracks.

While it's expected that callee consent is ensured ahead of audio or video transmission and that no data is shared until the receiver has interacted with the application to answer the call (i.e., before adding any tracks to the connection), Silvanovich observed behavior to the contrary.

Multiple Messaging Apps Affected
Not only did the flaws in the apps allow calls to be connected without interaction from the callee, but they also potentially permitted the caller to force a callee device to transmit audio or video data.

The common root cause? Logic bugs in the signaling state machines, which Silvanovich said "are a concerning and under-investigated attack surface of video conferencing applications."

Signal (fixed in September 2019) - An audio call flaw in Signal's Android app made it possible for the caller to hear the callee's surroundings, because the app didn't check that the device receiving the connect message was the caller device.
JioChat (fixed in July 2020) and Mocha (fixed in August 2020) - Adding candidates to the offers created by Reliance's JioChat and Viettel's Mocha Android apps allowed a caller to force the target device to send audio (and video) without the user's consent. The flaws stemmed from the fact that the peer-to-peer connection was set up even before the callee answered the call, increasing the "remote attack surface of WebRTC."
Facebook Messenger (fixed in November 2020) - A vulnerability that could have allowed an attacker logged into the app to simultaneously initiate a call and send a specially crafted message to a target signed in to both the app and another Messenger client, such as the web browser, and begin receiving audio from the callee's device.
Google Duo (fixed in December 2020) - A race condition between disabling the video and setting up the connection that, in some situations, could cause the callee to leak video packets from unanswered calls.
Other messaging apps, such as Telegram and Viber, were found to have none of the above flaws, although Silvanovich noted that significant reverse-engineering challenges when analyzing Viber made that investigation "less rigorous" than the others.
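The server-side remedy used for JioChat and Mocha, filtering ICE candidates out of the SDP before forwarding it, can be sketched as follows. This is a simplified filter: production code would also need to handle end-of-candidates attributes and candidates delivered separately via trickle ICE.

```python
def strip_candidates(sdp: str) -> str:
    """Drop a=candidate lines from an SDP blob so the peer-to-peer
    connection cannot start before the callee answers the call."""
    kept = [line for line in sdp.splitlines()
            if not line.startswith("a=candidate:")]
    return "\n".join(kept)


offer = "\n".join([
    "v=0",
    "m=audio 9 UDP/TLS/RTP/SAVPF 111",
    "a=candidate:1 1 udp 2122260223 192.0.2.10 54321 typ host",
    "a=sendrecv",
])
print(strip_candidates(offer))  # same SDP, minus the candidate line
```

With no candidates present, the clients have no agreed transport address, so media transmission is deferred until the signaling server releases the candidates after the callee accepts.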

"The majority of calling state machines I investigated had logic vulnerabilities that allowed audio or video content to be transmitted from the callee to the caller without the callee's consent," Silvanovich concluded. "This is clearly an area that is often overlooked when securing WebRTC applications."

"The majority of the bugs did not appear to be due to developer misunderstanding of WebRTC features. Instead, they were due to errors in how the state machines are implemented. That said, a lack of awareness of these types of issues was likely a factor," she added.

"It is also concerning to note that I did not look at any group calling features of these applications, and all the vulnerabilities reported were found in peer-to-peer calls. This is an area for future work that could reveal additional problems."


WhatsApp Delays Data Sharing Change After Backlash
19.1.2021 
Social  Securityweek


WhatsApp on Friday postponed a data-sharing change as users concerned about privacy fled the Facebook-owned messaging service and flocked to rivals Telegram and Signal.

The smartphone app, a huge hit across the world, canceled its February 8 deadline for accepting an update to its terms concerning sharing data with Facebook, saying it would use the pause to clear up misinformation around privacy and security.

"We've heard from so many people how much confusion there is around our recent update," WhatsApp said in a blog post.

"This update does not expand our ability to share data with Facebook."

It said it would instead "go to people gradually to review the policy at their own pace before new business options are available on May 15."

The update concerns how merchants using WhatsApp to chat with customers can share data with Facebook, which could use the information for targeted ads, according to the social network.

"We can't see your private messages or hear your calls, and neither can Facebook," WhatsApp said in an earlier blog post.

"We don't keep logs of who everyone is messaging or calling. We can't see your shared location and neither can Facebook."

Location data along with message contents is encrypted end-to-end, according to WhatsApp.

"We're giving businesses the option to use secure hosting services from Facebook to manage WhatsApp chats with their customers, answer questions, and send helpful information like purchase receipts," WhatsApp said in a post.

"Whether you communicate with a business by phone, email, or WhatsApp, it can see what you're saying and may use that information for its own marketing purposes, which may include advertising on Facebook."

Technology experts note that WhatsApp's new requirement makes legally binding a data-sharing practice that has been widely in place since 2016.

Facebook aims to monetize WhatsApp by allowing businesses to contact clients via the platform, making it natural for the internet giant to centralize some data on its servers.

- Countries concerned -

The Turkish Competition Authority said it is opening an investigation and requiring WhatsApp to suspend the data sharing obligation on its users.

Several Turkish state organizations -- including President Recep Tayyip Erdogan's media office -- switched to Turkcell telecom's new messaging service BiP in response.

The terms of service tweak also put WhatsApp in the crosshairs in Italy and India, where a petition has been filed in a Delhi court.

WhatsApp's notice to users lacked clarity and its privacy implications need to be carefully evaluated, Italian data protection agency GPDP said in a post at its website.

GPDP said it has shared its concerns with the European Data Protection Board and reserved the right to intervene in the matter.

Facebook has come under increasing pressure from regulators as it tries to integrate its services.

The EU fined the US social media giant 110 million euros (then $120 million) for providing incorrect and misleading information about its 2014 takeover of WhatsApp concerning the ability to link accounts between the services.

Federal and state regulators in the US have accused Facebook of using its acquisitions of WhatsApp and Instagram to squelch competition, and last month filed antitrust lawsuits that aim to force the company to divest them.

- Privacy paramount -

User privacy fears have been mounting, with Uber careful to stress that a change in app terms taking effect on January 18 has nothing to do with sharing data.

Encrypted messaging app Telegram has seen user ranks surge on the heels of the WhatsApp service terms announcement, said its Russia-born founder Pavel Durov.

"People no longer want to exchange their privacy for free services," Durov said without directly referring to the rival app.

Encrypted messaging app Signal has also seen a huge surge in demand, helped by a tweeted recommendation by billionaire tech entrepreneur Elon Musk.

WhatsApp has sought to reassure worried users, even running full-page newspaper adverts in India, proclaiming that "respect for your privacy is coded into our DNA".


WhatsApp Delays Controversial 'Data-Sharing' Privacy Policy Update By 3 Months
17.1.2021 
Social  Thehackernews
WhatsApp said on Friday that it wouldn't enforce its recently announced controversial data sharing policy update until May 15.

Originally set to go into effect next month on February 8, the three-month delay comes following "a lot of misinformation" about a revision to its privacy policy that allows WhatsApp to share data with Facebook, sparking widespread concerns about the exact kind of information that will be shared under the incoming terms.

The Facebook-owned company has since repeatedly clarified that the update does not expand its ability to share personal user chats or other profile information with Facebook and is instead simply providing further transparency about how user data is collected and shared when using the messaging app to interact with businesses.

"The update includes new options people will have to message a business on WhatsApp, and provides further transparency about how we collect and use data," WhatsApp said in a post.

"While not everyone shops with a business on WhatsApp today, we think that more people will choose to do so in the future and it's important people are aware of these services. This update does not expand our ability to share data with Facebook."

On January 6, WhatsApp began alerting its 2 billion users of a new privacy policy and terms as part of its broader efforts to integrate WhatsApp better with other Facebook products and amidst its plans to transform WhatsApp into a commerce and business services provider.

Under the proposed terms — which are about how businesses manage their chats on WhatsApp using Facebook's hosting services — WhatsApp would share additional data with Facebook such as phone number, service-related information, IP address, and transaction data for those who use the business chat feature.

The pop-up notification also gave users an ultimatum to accept the new policy by February 8 or risk losing their ability to use the app altogether.

The confusion surrounding the update, coupled with no other option to disagree beyond shutting down the account, has led to further scrutiny in India, Italy, and Turkey, not to mention an exodus of users to privacy-focused messaging competitors such as Signal and Telegram.

In the intervening days, Signal has become one of the most downloaded apps on Android and iOS, in part boosted by a tweet from Tesla CEO Elon Musk, who urged his followers to "Use Signal." Earlier this week, Telegram said that it surpassed the 500 million active user mark, gaining over 25 million new users worldwide in 72 hours.

It's worth noting that WhatsApp has in fact shared some user account information with Facebook since 2016, such as phone numbers, except for those who opted out of the sharing when it revamped the privacy policy that year and gave users a one-time ability not to have their account data turned over to Facebook.

WhatsApp, in a separate FAQ published this week, tried to set the record straight by stressing that it "cannot see your personal messages or hear your calls, and neither can Facebook," and that it does not share users' contacts and location information to its parent company.

With the company walking back some of its previous messaging, it remains to be seen if the extra time will help it tide over the controversy and "clear up the misinformation around how privacy and security works on WhatsApp."


Signal is down for multiple users worldwide
16.1.2021 
Social  Securityaffairs

The popular messaging app Signal is currently facing issues around the world: at the time of this writing, users are unable to make calls or send and receive messages.

Users who attempted to send messages via the app saw a loading screen, after which a "502" error message was displayed.

Immediately after WhatsApp announced the changes to its privacy policy and obliged users to accept them to continue using the service, a huge number of users opted to leave the Facebook-owned platform.

As usual, the availability of the service can be checked on the Downdetector website.

Some users also claim problems while logging in.

“Signal is experiencing technical difficulties. We are working hard to restore service as quickly as possible.” read a message posted by Signal on Twitter.

Signal announced it is adding new servers and extra capacity at a record pace every single day this week to provide the service to a growing number of users.

“A recent data point from Sensor Tower claimed that Signal app was downloaded by 17.8 million users in almost a week. On Thursday, the messaging app announced that it touched 50 million downloads on the Android platform.” wrote BusinessInsider. “Signal has already got plenty of marketing thanks to Elon Musk, Edward Snowden, and more prominent people worldwide.”

The company is still working to resolve the issue; you can follow the progress on the service status page.


Facebook Takes Legal Action Against Data Scrapers
16.1.2021 
Social  Securityweek

Facebook on Thursday announced that it took legal action against two individuals for scraping data from its website.

In a lawsuit filed in Portugal, Facebook Inc. and Facebook Ireland seek permanent injunction against the two for violation of the social media platform’s terms of service and Portugal’s Database Protection Law.

The social media giant says that the two created browser extensions that they made available for download through the Chrome Web Store. The extensions were being offered using the business name “Oink and Stuff.”

A privacy policy that accompanied these extensions claimed that no collection of personal information would be performed.

This, Facebook says, was misleading, as four of the extensions were found to contain spyware code, namely Web for Instagram plus DM, Blue Messenger, Emoji keyboard, and Green Messenger.

The code was designed to scrape users' information from the Facebook website, but could also harvest additional data from the users' browsers unrelated to the social platform, all without notifying the victims, the company reveals.

Data harvested from the Facebook website includes name, user ID, gender, relationship status, and age group, along with other account information.

In addition to seeking a permanent injunction against the two individuals, the social media platform is demanding that they delete all of the Facebook data they harvested.

“This case is the result of our ongoing international efforts to detect and enforce against those who scrape Facebook users’ data, including those who use browser extensions to compromise people’s browsers,” Facebook concludes.

Facebook previously took legal action against entities in the U.S., Israel and Ukraine over data scraping.


Telegram-Based Automated Scam Service Helps Fraudsters Make Millions
16.1.2021 
Social  Securityweek

More than 40 scammer groups are actively engaged in schemes leveraging a scam-as-a-service offering that provides users the tools and resources needed to conduct fraud, according to threat hunting and intelligence company Group-IB.

The automated scam service has been named Classiscam by Group-IB and it’s meant to help cybercriminals steal money and payment data from unsuspecting victims, through the use of fake pages mimicking those of legitimate classifieds, marketplaces and delivery services.

The Classiscam scheme is powered by Telegram chatbots, which generate a complete phishing kit, including a courier URL, payment page, and refund information. The chatbots also offer shops where users can purchase marketplace accounts, manuals, e-wallets, mailings, and even lawyers.

Simple and straightforward, the scheme has gained a lot of popularity, with over 5,000 scammers registered in the 40 most popular Telegram chats by the end of 2020.

More than 20 threat actors are believed to be leveraging the scheme in Russia, with over 20 other groups operating in the United States, Bulgaria, Romania, the Czech Republic, France, Poland, and multiple post-Soviet countries.

Classiscam emerged in Russia in 2019, but peak activity was recorded last year, amid the switch to telework due to the Coronavirus pandemic. In 2020, the threat groups made in excess of $6.5 million, or approximately $520,000 per month, at an average of $61,000 per month/per group (although the proceeds may differ from one group to another).

Some of the popular international classifieds and marketplaces abused by these scammers include Allegro, OLX, Sbazar and Leboncoin.

The scheme also exploits delivery brands, including DHL and Romanian delivery service FAN Courier, and security researchers have spotted underground forum chats suggesting that new brands will soon be used, such as FedEx and DHL Express in the US and Bulgaria.

The scheme starts with bait ads published on popular classified websites and marketplaces, offering various items at deliberately low prices. The threat actors, which pose both as sellers and buyers, use local phone numbers and lure victims into discussing deals over a third-party messaging app.

Victims are then asked for their contact information for delivery, and are provided with a link that takes them either to a fake courier service website or a scam page with a payment form. Thus, the scammers harvest payment data or withdraw money through fake merchant websites. In other instances, the scammers pose as buyers and send fake payment forms mimicking a popular marketplace.

“Although many marketplaces and classifieds that sell new and used goods have an active policy of protecting users from fraudsters by posting warnings on their resources, victims continue to give away their data,” Group-IB notes.

The scammer groups have a pyramidal hierarchy, with topic starters placed on top. These individuals are responsible for recruitments, creating scam pages and registering accounts, as well as for providing assistance when transactions are blocked.

The topic starters get a share of 20-30% of the stolen funds, while the workers, who engage with victims and send them the URLs to scam pages, get the rest. Successful workers move to the top, getting access to VIP options and to more lucrative markets.


Telegram Bots at Heart of Classiscam Scam-as-a-Service
15.1.2021 
Social  Threatpost

The cybercriminal service has scammed victims out of $6.5 million and continues to spread on Telegram.

A new automated scam-as-a-service has been unearthed, which leverages Telegram bots in order to steal money and payment data from European victims.

The scam, which researchers call Classiscam, is being sold as a service by Russian-speaking cybercriminals, and has been used by at least 40 separate cybergangs – which altogether made at least $6.5 million using the service in 2020.

These groups have bought into full-fledged scam kits, equipping them with Telegram chatbots for automated communication with victims, as well as customized webpages that lead victims to phishing landing pages. These are all the tools needed to scam victims out of money – when in reality, the victims think they are buying online products.


“Group-IB discovered at least 40 groups leveraging Classiscam, with each of them running a separate Telegram chat-bot,” said researchers with Group-IB, in a Thursday analysis. “At least 20 of these groups focus on European countries. On average, they make around $61,000 monthly, but profits may differ from group to group. It is estimated that all 40 most active criminal groups make $522,000 per month in total.”

The Scam
First, the cybercriminals who have bought these kits publish "bait ads" on popular marketplaces and classified websites, such as the French classifieds site Leboncoin, or impersonate brands such as German logistics giant DHL. Products such as cameras, game consoles, laptops or smartphones are posted at deliberately low prices.

If a victim contacts the seller, they are asked to continue communicating through a third-party messenger app, either WhatsApp or Telegram. If these communications occur via Telegram, the ploy uses Telegram chat bots. According to Telegram, bots are Telegram accounts operated by software – not people – that will often have artificial-intelligence features.

A Classiscam scam in action. Credit: Group-IB

The cybercriminals behind the ploy merely need to send a link with the bait product to the Telegram chatbot, which then generates a complete phishing kit.

Digging deeper, the phishing kit includes a link to either a fake popular courier service website, or a scam website that mimics a classified or a marketplace with a payment form, which is actually a scam page. A “refund” page meanwhile offers fake support lines for victims to call if they have realized they have been scammed; the “tech support team” is actually a member of the cybercriminal gang using the service.

“As a result, the fraudster obtains payment data or withdraws money through a fake merchant website,” said researchers. “Another scenario involves a scammer contacting a legitimate seller under the guise of a customer and sending a fake payment form mimicking a marketplace and obtained via Telegram bot, so that the seller could reportedly receive the money from the scammer.”

The Service
The hierarchy of the gangs behind the scam works in a pyramid, said researchers – admins at the top are responsible for recruiting members and creating scam pages and new accounts. Below them, workers communicate with victims and send them phishing URLs, while others pose as tech-support specialists who talk to victims about their “refunds.”

“Scammers are making their first attempts in Europe, [and] an average theft costs users about $120,” said researchers. “The scam was localized for the markets of Eastern and Western Europe.”

Researchers said “the scheme is simple and straightforward, which makes it all the more popular.” The use of Telegram bots plays into its growing popularity, they said. Telegram recently saw a surge in new users after WhatsApp came under criticism for its privacy policies.

Researchers said that more than 5,000 scammers were registered in the 40 most popular Telegram chats by the end of 2020, showing that the ploy continues to grow on the platform.

Threatpost has reached out to Telegram for comment.


Facebook: Malicious Chrome Extension Developers Scraped Profile Data

15.1.2021  Social  Threatpost
Facebook has sued two Chrome devs for scraping user profile data – including names, user IDs and more.

Facebook has filed legal action against two Chrome extension developers that the company said was scraping user profile data – including names and profile IDs – as well as other browser-related information.

The two unnamed developers, operating under the business name Oink and Stuff, developed malicious Chrome browser extensions that contained hidden code "that functioned like spyware," Facebook alleges.

The four malicious extensions are: Blue Messenger, which bills itself as a notification alert app for Facebook's Messenger communications feature; Green Messenger, a messenger app for WhatsApp; Emoji Keyboard, a shortcut keyboard app; and Web for Instagram plus DM, which offers tools for users to direct-message others on the Instagram app.


The Oink and Stuff developers “misled users into installing the extensions with a privacy policy that claimed they did not collect any personal information,” Jessica Romero, director of platform enforcement and litigation with Facebook, said in a Thursday post.

In its Chrome extension webpage description for Web for Instagram plus DM, for instance, the company says: “We don’t store, access, transmit or share any sensitive or user private information.”

On its website, Oink and Stuff claims that it has more than 1 million active users and says it was founded in 2014. The company offers extensions for Chrome, Firefox, Opera and Microsoft Edge, as well as Android apps offered via Google Play. It's not clear if the extensions offered for these other browsers were found to be malicious.

Several of the extensions offered by the company (including Green Messenger and Blue Messenger) appear to still be available on various marketplaces including Chrome and Google Play. Threatpost has reached out to Google for further comment.

When Facebook users installed these extensions on their browsers, they were actually installing the concealed code, designed to scrape their Facebook data, according to Facebook. If users visited Facebook’s website, for instance, the browser extensions were programmed to scrape their name, user ID, gender, relationship status, age group and other information related to their account.

“The defendants did not compromise Facebook’s security systems,” clarified Romero. “Instead, they used the extensions on the users’ devices to collect information.”

The extensions also scraped information from unknowing users’ browsers that was unrelated to Facebook. Facebook did not clarify what this data was. Facebook also did not say how many users were affected.

Facebook Inc. and Facebook Ireland filed the legal action in Portugal, alleging that the two developers violated the social media giant’s Terms of Service and Portugal’s Database Protection Law.

The company is seeking a permanent injunction against the two, and demanding that they delete “all Facebook data in their possession.”

Data scraping is a challenge that Facebook has grappled with since the Cambridge Analytica scandal, in which Facebook allowed a third-party application to scrape and then hand over the data of up to 50 million platform users to Cambridge Analytica.

In 2018, Facebook CEO Mark Zuckerberg said millions of users of the social network may have had their data scraped by malicious actors using a reverse search tool. In March 2019, Facebook sued two Ukrainian men who it said used quiz apps and malicious browser extensions to scoop up private data from 63,000 platform users, and then used that data for advertising purposes.

“This case is the result of our ongoing international efforts to detect and enforce against those who scrape Facebook users’ data, including those who use browser extensions to compromise people’s browsers,” said Romero.


TikTok Takes Teen Accounts Private
14.1.2021 
Social  Threatpost

The company announced accounts for ages 13-15 will default to privacy setting, among other safety measures.

TikTok has decided to boost privacy measures for its underage users, the popular video-sharing social-media company announced.

TikTok’s popularity is being driven by teens — the company reported in 2019 that about 60 percent of its 26.5 million monthly users are between the ages of 16 and 24 — and these latest measures are an attempt to make the platform safer for its youngest users, according to the company. TikTok is owned by Chinese company ByteDance.

“Starting today, we’re changing the default privacy setting for all registered accounts ages 13-15 to private,” the statement said. “With a private TikTok account, only someone who the user approves as a follower can view their videos. We want our younger users to be able to make informed choices about what and with whom they choose to share, which includes whether they want to open their account to public views. By engaging them early in their privacy journey, we can enable them to make more deliberate decisions about their online privacy.”

TikTok Privacy Settings
Additional changes for the under-18 TikTok crowd include limiting comments on videos created by users 13-15; limiting Duet and Stitch to only users over 16; changing the default setting for Duet and Stitch to “friends” for 16- and 17-year-old users; and prohibiting downloads of videos created by users under 16.

Duet allows a user to build on another user’s video by recording their own video alongside the original as it plays. Stitch, meanwhile, gives users the ability to clip and integrate scenes from another user’s video into their own.

The “suggest your account to others” option will also be set to off by default for 13-15-year-old users, the company added.

A limited TikTok app for users under 13, launched last year, will now partner with parent watchdog group Common Sense to deliver appropriate videos for younger TikTok-ers.

The moves are being applauded by the National PTA and online safety watchdog groups across the spectrum.

Watchdog Groups Applaud
“National PTA applauds TikTok for advancing safe and age-appropriate experiences where teens can have fun and be creative,” Leslie Boggs, president of the National PTA, said in the statement about the move. “With TikTok’s thoughtful changes to teens’ privacy settings, National PTA continues to recommend that families sit down together, explore the app’s safety controls and tools, and have open and ongoing conversations to help teens be safe and responsible online. This is particularly important to ensure teens’ accounts are set up right from the start. PTA looks forward to continuing our important work with TikTok to educate families across the country about online safety.”

The privacy-positive moves come in the wake of harsh criticism of the app and its approach to privacy. Last August, for instance, the Trump administration issued an Executive Order calling the app a “threat.”

“TikTok automatically captures vast swaths of information from its users, including Internet and other network activity information such as location data and browsing and search histories,” the E.O. said. “This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of federal employees and contractors, build dossiers of personal information for blackmail and conduct corporate espionage.”

A plan to cut off access to TikTok in the U.S. was abandoned at the last minute last September, after ByteDance agreed to sell off a big stake in ownership to Oracle and Walmart.

Besides privacy concerns, experts have pointed out that the app is plagued by security flaws. But the move to protect teens on the beleaguered, yet wildly popular, app is drawing a positive reaction.

“Putting these new measures in place is another positive step forward in TikTok’s safety and privacy efforts,” Stephen Balkam, CEO, Family Online Safety Institute said. “Thinking ahead about what is appropriate for teens of different ages creates an opportunity for these younger users to learn and grow responsibly on the platform, and serves as an important teachable moment when they do gain those abilities.”


WhatsApp Stresses Privacy as Users Flock to Rivals
14.1.2021 
Social  Securityweek

WhatsApp on Tuesday reassured users about privacy at the Facebook-owned messaging service as people flocked to rivals Telegram and Signal following a tweak to its terms.

There was "a lot of misinformation" about an update to terms of service regarding an option to use WhatsApp to message businesses, Facebook executive Adam Mosseri, who heads Instagram, said in a tweet.

WhatsApp's new terms sparked criticism, as users outside Europe who do not accept the new conditions before February 8 will be cut off from the messaging app.

"The policy update does not affect the privacy of your messages with friends or family in any way," Mosseri said.

The update concerns how merchants using WhatsApp to chat with customers can share data with Facebook, which could use the information for targeting ads, according to the social network.

"We can't see your private messages or hear your calls, and neither can Facebook," WhatsApp said in a blog post.

"We don't keep logs of who everyone is messaging or calling. We can't see your shared location and neither can Facebook."

Location data along with message contents is encrypted end-to-end, according to WhatsApp.

"We're giving businesses the option to use secure hosting services from Facebook to manage WhatsApp chats with their customers, answer questions, and send helpful information like purchase receipts," WhatsApp said in the post.

"Whether you communicate with a business by phone, email, or WhatsApp, it can see what you're saying and may use that information for its own marketing purposes, which may include advertising on Facebook."

- Tapping Telegram -

Encrypted messaging app Telegram has seen user ranks surge on the heels of the WhatsApp service terms announcement, said its Russia-born founder Pavel Durov.

Durov, 36, said on his Telegram channel Tuesday that the app had over 500 million monthly active users in the first weeks of January and "25 million new users joined Telegram in the last 72 hours alone."

WhatsApp boasts more than two billion users.

"People no longer want to exchange their privacy for free services," Durov said without directly referring to the rival app.

Encrypted messaging app Signal has also seen a huge surge in demand, helped by a tweeted recommendation by renowned serial entrepreneur Elon Musk.

In India, WhatsApp's biggest market with some 400 million users, the two apps gained around 4 million subscribers last week, financial daily Mint reported, citing data from research firm Sensor Tower.

WhatsApp has sought to reassure worried users in the South Asian country, running full-page adverts in Wednesday's newspapers, proclaiming that "respect for your privacy is coded into our DNA".

Telegram is a popular social media platform in a number of countries, particularly in the former Soviet Union and Iran, and is used both for private communications and sharing information and news.

Durov said Telegram has become a "refuge" for those seeking a private and secure communications platform and assured new users that his team "takes this responsibility very seriously."

Telegram was founded in 2013 by brothers Pavel and Nikolai Durov, who also founded Russia's social media network VKontakte.

Telegram refuses to cooperate with authorities and hand over encryption keys, which has resulted in bans in several countries, including Russia.

Last year, Russia announced that it would lift its ban on the messenger app after more than two years of unsuccessful attempts to block it.


Post-Backlash, WhatsApp Spells Out Privacy Policy Updates

13.1.2021  Social  Threatpost

WhatsApp aimed to clear the air about its updated privacy policy after reports of mandatory data sharing with Facebook drove users to Signal and Telegram in droves.

WhatsApp is making explicit clarifications around its updated privacy policy, after reports ran amok of the messaging app mandating all-encompassing data sharing with parent company Facebook.

The messaging app’s new privacy policy and terms of service, which will go into effect Feb. 8, will share certain data with Facebook, along with other Facebook products. However, the updates announced last week sparked widespread ire from users, who feared WhatsApp would mandate that private user data be shared with Facebook, and caused a mass exodus from the app to competitors including Telegram and Signal.

This week, WhatsApp in a new privacy policy FAQ posted to its website aimed to dispel myths that all user data – across the board – would be shared with Facebook. The updated privacy policies, it argued, are instead related to the data collection of WhatsApp users who message businesses on the platform.

“We want to be clear that the policy update does not affect the privacy of your messages with friends or family in any way,” according to WhatsApp. “Instead, this update includes changes related to messaging a business on WhatsApp, which is optional, and provides further transparency about how we collect and use data.”

WhatsApp also stressed that neither it nor Facebook can see users’ private messages or hear their calls. Similarly, neither WhatsApp nor Facebook keeps logs of who everyone is messaging or calling.

WhatsApp also said in its privacy policy FAQ that it can’t see shared location of users; however, in a more detailed look at its privacy policy (under “Location Information”), the company says: “We collect and use precise location information from your device with your permission when you choose to use location-related features, like when you decide to share your location with your contacts or view locations nearby or locations others have shared with you.”

Threatpost has reached out to WhatsApp for clarification regarding this discrepancy.

WhatsApp Business Privacy Policy
According to WhatsApp’s privacy policy, the data shared between WhatsApp and Facebook products aims to improve WhatsApp’s infrastructure and delivery systems; help Facebook understand how various services are used; and provide further integrations between products. As part of the new privacy policy, businesses that operate using WhatsApp as a communication method now have the option to utilize Facebook hosting services, said WhatsApp.

“Whether you communicate with a business by phone, email, or WhatsApp, it can see what you’re saying and may use that information for its own marketing purposes, which may include advertising on Facebook,” said WhatsApp.

Another new data-sharing policy involves Facebook’s commerce feature, Shops, which lets users buy or sell goods. Businesses can display their goods on WhatsApp using Shops, and if they do so, WhatsApp users’ shopping activity can be used to personalize ads on Facebook and Instagram.

“Features like this are optional and when you use them we will tell you in the app how your data is being shared with Facebook,” said WhatsApp.

Finally, WhatsApp said that interactions with Facebook ads featuring “message” buttons, which open a WhatsApp chat with the business, are shared with Facebook: “Facebook may use the way you interact with these ads to personalize the ads you see on Facebook,” said WhatsApp.

Beyond these newer updates, however, according to WhatsApp’s privacy policy webpage, it’s worth noting that WhatsApp currently shares “certain categories” of data with Facebook Companies. The Facebook Companies lineup includes Facebook Payments, Facebook-owned Israeli mobile web analytics company Onavo, Facebook Technologies LLC and Facebook Technologies Ireland Limited, and content delivery and social monitoring platform CrowdTangle.

“The information we share with the other Facebook Companies includes your account registration information (such as your phone number), transaction data, service-related information, information on how you interact with others (including businesses) when using our Services, mobile device information, your IP address, and may include other information identified in the Privacy Policy section entitled ‘Information We Collect’ or obtained upon notice to you or based on your consent,” according to WhatsApp.

Regardless of WhatsApp’s clarifications, the public reaction to WhatsApp’s change in data privacy policies is likely due to the mistrust people have of Facebook and its rocky track record when it comes to privacy, Hank Schless, senior manager of Security Solutions at Lookout, told Threatpost.

“WhatsApp is doing the right thing by explaining the policy changes in plain language and acknowledging the importance of transparent data sharing and app permission policy,” Schless told Threatpost. “It’s going to be a challenge for WhatsApp to win users back who have already made the decision to move to other messaging apps, but their transparency is the right first step.”

WhatsApp Privacy Policy
Beyond its data-sharing with other companies, WhatsApp’s privacy policy on its website breaks down the data that is automatically collected by the company.

These include the shared location information mentioned above. “Even if you do not use our location-related features, we use IP addresses and other information like phone number area codes to estimate your general location (e.g., city and country),” according to WhatsApp. “We also use your location information for diagnostics and troubleshooting purposes.”

WhatsApp also collects data about user activity on its services – including diagnostic and performance information. This includes the features that users utilize, including messaging, calling, Status, groups (including group name, group picture, group description), payments or business features.

“This includes information about your activity (including how you use our Services, your Services settings, how you interact with others using our Services (including when you interact with a business), and the time, frequency, and duration of your activities and interactions), log files, and diagnostic, crash, website, and performance logs and reports,” according to WhatsApp.

The app also uses cookies, as well as device and connection-specific information (including hardware model, operating system information, battery level, signal strength, app version, browser information and mobile network).

Above all, “this incident shows that data privacy is now top-of-mind for the general public,” said Schless. “It also illustrates the importance of understanding how mobile apps collect and use your data. Looking forward in 2021, increased awareness around data privacy will drive changes in how consumers and organizations alike think about data sharing within mobile apps.”


Data collection cheat sheet: how Parler, Twitter, Facebook, MeWe’s data policies compare
13.1.2021 
Social  Securityaffairs

CyberNews researchers analyzed data from multiple social platforms like Parler, Twitter, Facebook, MeWe’s to compare data policies.
Original Post at https://cybernews.com/privacy/how-parler-twitter-facebook-mewe-data-policies-compare/

Alternative social media platforms, also known as “alt” or alt-tech, were catapulted into the spotlight near the end of 2020 due to US President Donald Trump’s claims of election interference.

Twitter-alternative Parler in particular is in the spotlight after being banned from Google’s Play store and Apple’s App Store. Its hosting provider, Amazon Web Services, has also removed the platform from its services, meaning that at this moment, Parler’s platform is inaccessible.

To make matters even worse for the platform, a security researcher was able to collect more than 70 terabytes — 70,000 gigabytes — of Parler users’ messages, videos, audio and other activity. Given this breach, it will be important to see whether the promises made in Parler’s privacy policy hold up against the data it actually collected and maintained on its servers.

While these alt platforms largely position themselves as “free speech” alternatives, we at CyberNews were also interested in how these alt social platforms compare in terms of data collection.

Therefore, for this research, we aimed to see how the mainstream platforms compare to their logical alt pairings:

Twitter and Parler
Facebook and MeWe
YouTube and Rumble
Reddit and Voat (offline)
TikTok and Triller
As of this writing, Voat has been taken offline, apparently after an investor backed out in March, and Parler is inaccessible while it searches for hosting alternatives. However, our investigation will include their analysis as well.

The biggest takeaway? Mainstream social platforms collect more data at the moment than alt-social platforms, but that is likely because mainstream social platforms have already reached their stable monetization phase and are selling ads. Only one alt-social platform, MeWe, makes promises to never sell ads.

Highlights

Here are the biggest takeaways from analyzing these 10 social platforms:

Parler is the only platform that asks for a government-issued ID to verify its users’ general accounts (although unverified accounts can interact in limited ways on the platform). While most platforms state they will disclose personal information in response to legal requests, Parler will also disclose information “for the avoidance of doubt” if the user posts “objectionable content.”
Parler, Reddit, Voat, Triller and TikTok (US) do not provide clear data retention policies, including how long they retain data after it has been deleted by the user.
Triller is the only social platform that outsources all messaging functionality to a third-party service provider, Quickblox. Users would need to read both Triller’s and Quickblox’s privacy policies to get a good idea of how their data is being collected and processed.
Triller ignores Do Not Track requests, a practice it claims is common among “many websites and online services.”
Mainstream social platforms have data collection policies that average 6605 words in length, which would take roughly 50 minutes to read.
Alt-social platforms’ policies average 4420 words in length, taking roughly 34 minutes to read.
Facebook explicitly states that it collects data on users, including device and activity information, even if they don’t have an account.
The alt-social platforms don’t have an easy way for users to download all the data the platforms have on them. However, neither does TikTok, which tells users to send written “requests” to access their data.
Facebook’s and Twitter’s data collection policies do not have explicit sections or statements dedicated to security.
Along with the standard ways these platforms collect and use user data, both YouTube (Google) and TikTok also use publicly available information online to build a user’s profile on their platform.
TikTok makes 47 requests, the most of all platforms, when its Android app is launched, while Parler makes only 2.

How this data was collected and processed

In order to undertake this research, we analyzed all the data collection policies for a given platform. For most, we could get a comprehensive view of their data collection practices from the primary data collection document – their privacy policy.

However, others additionally required analyzing their respective Terms of Use/Service documents, and a few, such as YouTube (Google) and Facebook, required even more documents.

Besides analyzing the text, we also looked at word length for the given documents and the average reading time and difficulty of text. We also checked how many requests each platform’s app makes when it is launched.

A common framework

We took a common framework for analyzing privacy policies, which consists of the following sections (adapted for this research):

First party collection or use
Third party sharing or collection
User choice and control
User access, edit and delete
Data retention
Security
We then looked at each platform’s primary data collection document, its privacy policy. In cases where the privacy policy did not provide a good overview of data collection practices, we looked at supporting documents like the Terms of Use, and some platforms required even more document analysis.

When possible, we looked at the US versions of these data collection documents.

Keeping it simple

In order to keep the analysis clear, we assessed each practice based on a three-point scale:

Bad
OK
Good
Therefore, while cookie collection would get an “OK” in terms of first party collection, not having a clear data retention policy would get a “Bad.” Having a section dedicated to security would get a “Good” (unless the section is useless by containing no information at all).

There are two important considerations to make:

These privacy policies are assessed based on an average user having a “good idea” of the specific platform’s data collection policies, which in an optimal case means the average reader would need to read the policy only once.
Some privacy policies, like Voat’s, are extremely sparse. However, just because Voat does not state that it collects, for example, user-generated content does not mean that it does not collect that data. In cases like these, we have to use common sense and not rely merely on what’s stated in the data collection policies.
For ease of understanding the differences between mainstream and alt social platforms, we’ll analyze them in their most logical pairs:

Facebook and MeWe
Twitter and Parler
YouTube and Rumble
Reddit and Voat (offline)
Tiktok and Triller
Common sense analysis

When looking at the various sections, it’s important that we apply practical common sense to the analyses.

For “First party collection and use,” the less data collected, the better it is. However, it’s logical for any social media platform to collect the following data:

Account creation information
Engagement activity
User generated content (UGC) and metadata
Messaging (although optimally this would be end-to-end encrypted)
Feature-related data (related to camera, microphone, etc.)
Device information
The major difference then would be how much of the different types of data they collect, as well as any other interesting data collection practices.

For “Third party sharing,” the less data shared, the better it is. However, it is expected that platforms will have service providers, such as hosting, and marketing and statistics, such as Google Analytics. They will share data if legally required, and send payment information to a third party if payments occur on their sites.

For “User choice and control,” users should be able to control their account’s privacy settings, who gets to see their content, and have opt outs for ads or other tracking.

For “User access, edit and delete,” users should be able to easily edit, update, retrieve or delete their accounts. They should also be able to easily delete their UGC. Optimally, they will be able to easily download all their account data.

For “Data retention,” it is expected that data will not be deleted immediately. However, platforms should state how long data is stored after a delete request.

For “Security,” we are not assessing the security of the particular platform. We are only looking at whether a platform discusses security-related issues, such as security measures used or breach notifications.

Apple’s App Store privacy labels

Apple recently introduced privacy labels to its App Store, which help show what kind of data is being collected by apps. These come in three categories:

Data Linked to You
Data Used to Track You
Data Not Linked to You
We checked the data points being collected by the five mainstream and five alt social platforms by doing a simple count of the total number of data points. We were able to collect data on Parler before it was removed from the App Store:

App data collection according to the App Store
One thing that’s clear from this data: Facebook’s data collection eclipses most other mainstream social media platforms, and especially alt social platforms.

One important thing to note, however, is that this data is self-reported, and Apple explicitly states that it has not reviewed it:

Example for MeWe
This could explain some interesting findings, such as Rumble apparently collecting no data on its iOS users. Furthermore, some apps, like YouTube, have not yet reported their data handling:

Tedium at a glance: average lengths and times

We totaled the word counts of all documents that a user would have to read in order to get a “good idea” of a platform’s data collection policies. For some platforms, like Facebook, this includes three separate documents, while for most platforms this included only the privacy policy.

Some platforms, like TikTok, included multiple versions of the privacy policies within one document, so we only counted length and time for the US version of the privacy policy.

Average reading time was calculated using Grammarly’s Words to Time tool.
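
Grammarly doesn’t publish the exact reading rate its Words to Time tool assumes, but the averages cited above (6605 words at roughly 50 minutes, 4420 words at roughly 34 minutes) imply a rate of about 130 words per minute. A minimal sketch of that conversion, with the rate treated as an assumption:

```python
def reading_time_minutes(word_count: int, words_per_minute: float = 130.0) -> float:
    """Estimate reading time; 130 wpm is inferred from the cited averages,
    not Grammarly's documented rate."""
    return word_count / words_per_minute

# Approximate reading times for the average policy lengths:
print(f"mainstream: {reading_time_minutes(6605):.0f} min")
print(f"alt:        {reading_time_minutes(4420):.0f} min")
```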

As you can see, Facebook, YouTube and Triller had the longest documents and reading times. What is interesting, however, is that for Facebook and YouTube, this total is spread across multiple documents, while Triller’s word count and average reading time come from just one document.

With the exception of Triller, all alt social platforms had lower word counts and reading times.

Text difficulty: English vs Legalese

We measured the difficulty of the text using Flesch-Kincaid readability tests, which score difficulty from 0 (extremely difficult, understood by university graduates) to 100 (extremely easy, understood by an average 11-year-old), so that a text with a higher score is easier to read. For social platforms with more than one text, we took the average.
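
For context, the Flesch Reading Ease score underlying these tests is computed as 206.835 − 1.015 × (total words / total sentences) − 84.6 × (total syllables / total words). Below is a rough, self-contained sketch of the calculation; the naive vowel-group syllable counter is an assumption on our part, as production readability tools typically use pronunciation dictionaries:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels,
    # discounting a trailing silent "e".
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Split sentences on terminal punctuation, words on letter runs.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

On the standard Flesch scale, scores in the 30–50 band observed here correspond to “difficult” text, typically requiring college-level reading ability.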

We noticed that all of the social platforms, regardless of length, scored within the 30-50 range, meaning their policies are difficult to read and normally require a college degree to fully understand:

Rumble had the most challenging text, coming in at 36.6, and YouTube (Google) had the easiest text, coming in at 50.3.

Network requests for each platform’s app

Lastly, we checked the network requests that these platforms’ mobile apps made immediately when the app was first launched (with no further interaction). Generally, the more network requests an app makes, the more data is being sent from your device to the platform.
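
One way to reproduce this kind of measurement is to route the device’s traffic through an intercepting proxy such as mitmproxy and count requests as the app launches. A minimal addon sketch follows; the file name and run command are illustrative, and the `request` hook follows mitmproxy’s addon convention:

```python
# count_requests.py (hypothetical name) -- run with: mitmproxy -s count_requests.py
# then point the phone's proxy settings at the mitmproxy host and launch the app.

class RequestCounter:
    """Counts HTTP(S) requests observed while the app starts up."""

    def __init__(self):
        self.count = 0
        self.hosts = []

    def request(self, flow):
        # mitmproxy calls this hook once per client request.
        self.count += 1
        self.hosts.append(flow.request.pretty_host)
        print(f"request #{self.count}: {flow.request.pretty_host}")

addons = [RequestCounter()]  # mitmproxy loads addons from this list
```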

Note that Voat had no mobile app to analyze:

TikTok had the most network requests on app launch (47), while Parler had the least with 2. In general, alt social platforms had fewer requests than their mainstream counterparts.

Comparing the social platforms

We will compare each pair of social platforms (the mainstream version and the alt version) and highlight interesting or noteworthy aspects of their various data collection policies.

We rank each platform based on how well it performs in the specified categories, and at the end give a summary of the comparison and a final ranking.

Twitter and Parler

Parler is possibly the most popular alt social platform for conservatives and conspiracy theorists, with a look and style much like Twitter. Parler was said to have 10 million users (4 million active) as of November 2020.

A false image circulated showing US President Donald Trump officially moving to Parler after he was temporarily suspended from Facebook and Twitter following posts that incited the US Capitol riots. After the riots, Parler was removed from multiple online services, including Google’s and Apple’s app stores, Amazon’s hosting, Twilio’s authentication, and others. At the moment, the alt social platform is inaccessible.

                 Twitter  Parler
Documents              1       1
Words               5549    2157
Reading time        83.8    16.6
Reading ease        46.3    46.2
Network requests       9       2
First party collection and use

Twitter: OK
Parler: bad
Twitter, for the most part, collects the standard personal information, content and device information. It collects not only the search terms you submitted, but also the ones you didn’t (typed, but never hit ‘search’).

Interestingly, Twitter, unlike Facebook, allows and even supports users creating multiple accounts:

”You can also create and manage multiple Twitter accounts, for example to express different parts of your identity.”

Parler’s policy is a bit different. While other social platforms have some sort of verification, Parler’s verification, although optional, seems to be needed for basic platform features. For example, this FAQ suggests that users without a verified account will be unable to send private messages.

In order to get verified, users will need to provide scans of their government-issued photo IDs, plus a selfie. Parler promises that it deletes the front and back scans of these IDs when they are no longer needed, retaining a “hash corresponding to the information the identification document contains.” The platform also retains the selfie but claims to store it “securely, in encrypted form” without mentioning which encryption is used.

Additionally, Parler allows users to monetize their content through its “Influencer Network.” For that reason, they will “collect information on form W-9 as required by the IRS.”

Third party sharing and collection

Twitter: OK
Parler: OK
Twitter shares data with third parties:

Vendors (such as hosting) and analytics
Payment providers
Ad engagement (anonymized data)
Aggregated statistics for the platform (such as trending topics)
In response to legal requests
Parler’s documentation is less specific, but in general they share data with vendors and analytics, in response to legal requests, etc. It makes a point to “never rent, sell, or share information about you with nonaffiliated third parties for their direct marketing purposes unless we have your affirmative express consent.”

User options

Twitter: good
Parler: bad
Twitter users have many options through their privacy settings. They are able to opt-out of location sharing, targeted advertising, interest-based ads, etc.

Twitter allows its users to easily access or delete their content or accounts. Twitter users are also able to download all the data that Twitter has collected on them.

Parler’s documents don’t offer much in the way of user options. In terms of user choice and control, Parler users are only able to control limited aspects via their privacy settings. Users can also delete their accounts, but the platform doesn’t allow them to download all the data collected on them.

Data retention and security

Twitter: OK
Parler: OK
Twitter keeps log data for up to 18 months. It offers users a standard 30-day period to reactivate their accounts. However, it doesn’t offer the more specific information that Facebook does about how long it takes to delete content from its servers.

Parler, for its part, also doesn’t offer any specific information about its data retention practices. It only notes the aforementioned government ID deletion information, but again without any time frame.

While Twitter makes no mention of its security practices, Parler dedicates a two-sentence paragraph to platform security. However, these sentences provide no real information:

“We make reasonable efforts to protect your information by using physical and electronic safeguards designed to improve the security of the information we maintain. However, as our Services are hosted electronically, we can make no absolute guarantees as to the security or privacy of your information.”

Summary

Twitter: average
Parler: bad
Twitter is a better offering for users than Parler in terms of data collection and processing. Parler requires government-issued IDs for basic platform features and has limited user options.

Facebook and MeWe
MeWe is a privacy-focused, free speech platform that is often seen as a viable alternative to Facebook. It gained popularity after Facebook removed various QAnon and Stop the Steal groups at the end of 2020.

MeWe’s Android app has been installed more than 5 million times.

It is important to note that Facebook has a much larger surface, and many more apps and features in its ecosystem, than MeWe does.

                    Facebook      MeWe
Documents           [1],[2],[3]   [1],[2]
Words               10894         6157
Reading time (min)  83.8          47.3
Reading ease        46.3          46.4
Network requests    34            11
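The “Reading ease” figures in these tables appear to be Flesch reading-ease scores; the exact tooling used isn’t stated. A minimal sketch of the standard formula, with a naive vowel-group syllable counter, would be:

```python
# Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
# The vowel-group syllable counter is a rough heuristic, so scores are approximate.
import re

def count_syllables(word: str) -> int:
    # Treat each run of consecutive vowels as one syllable; floor at 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Higher scores mean easier text; on the standard Flesch scale, scores in the mid-40s (as both policies here receive) correspond to difficult, college-level reading.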
First party collection and use

Facebook: bad
MeWe: good
Facebook collects more data on its users than MeWe does. The first interesting point for Facebook is that it states it collects information about you even if you don’t have a Facebook account:

“Facebook uses cookies and receives information when you visit those sites and apps, including device information and information about your activity, without any further action from you. This occurs whether or not you have a Facebook account or are logged in.”

When a user agrees to import contacts, Facebook will collect not only the address book, but also a user’s call log and SMS log history:

“We also collect contact information if you choose to upload, sync or import it from a device (such as an address book or call log or SMS log history)…”

Another interesting point is that Facebook collects “device operations,” which includes “whether a window is foregrounded or backgrounded, or mouse movements.” It also collects device signals, including “Bluetooth signals, and information about nearby Wi-Fi access points, beacons, and cell towers.” Lastly, it collects network information about “other devices that are nearby or on your network.”

Furthermore, we found it worth noting that Facebook requires that users have only one account and provide “accurate information” about themselves, including using the name they use in their everyday lives.

Comparatively, MeWe’s first party collection is minimal: it collects the account creation information, UGC, engagement and usage, and log data that includes device information, IP address, OS, etc.

Third party sharing and collection

Facebook: OK
MeWe: good
Facebook shares a user’s data across its integrated products. It also provides aggregated data and insights to its partners and other businesses, for research and academic purposes, and provides anonymous engagement data for advertisers. Their Terms of Service make it clear that they don’t sell a user’s personal data or access to that personal data to advertisers:

“We don’t sell your personal data to advertisers, and we don’t share information that directly identifies you (such as your name, email address or other contact information) with advertisers unless you give us specific permission.”

MeWe also makes it clear what kind of data they share with third parties:

“We don’t track you to sell your data to third parties, and we don’t track you to manipulate your newsfeed and we don’t track you when you are not on MeWe.”

They also emphasize that they don’t use third-party cookies “to target” or “market” to their customers. They provide data to operating partners, as well as any payment-related data.

User options

Facebook: good
MeWe: OK
While MeWe has the more attractive offering overall, Facebook has a larger list of options for users to control, access, modify and delete data. Most options are included in the user’s privacy settings. Its cookie policy also provides options for users to control what kind of ads they see.

While MeWe has similar privacy settings, Facebook also allows users to download all their account data, or delete all of their content by deleting their account. MeWe does not provide a download option in its documentation, only stating that users have the “right to delete your account and take your content with you at any time” – without explicitly providing any mechanism to move that data.

Data retention and security

Facebook: bad
MeWe: OK
Facebook promises to delete user data within 90 days.

MeWe does not specifically state a maximum time frame until it deletes a user’s data, only stating that it will delete the data from its production servers “as soon as is technically possible.” MeWe does state that it incorporates a 30-day delay for deletion requests, and that it will delete a user’s data from its backups within 7 months.

It also states that it will delete Log Data, such as the username, IP address, or email address “after a maximum of 12 months.”

Facebook does not have a clear or dedicated section for security in its privacy policy, providing only a brief sentence in its ToS stating that it will “exercise professional diligence” to keep the service “a safe, secure and error-free environment.”

MeWe dedicates three sentences to its security, including encrypting personal information (but not saying what kind of encryption), and using HTTPS for “most, if not all” requests.

Summary
Facebook: OK
MeWe: good
MeWe is better in terms of data collection and processing since it has no ads and collects and processes less data. Facebook also shares more data with third parties, and doesn’t offer any information about the platform’s security. Facebook does, however, have better user options than MeWe.

YouTube and Rumble
Rumble is a video-sharing platform and YouTube alternative that is largely filled with conservative content, regularly related to debunked conspiracy theories.

Rumble’s Android app has been installed at least half a million times, and its website received 83 million visits in December, up from 1.5 million in August (according to SimilarWeb).

                    YouTube       Rumble
Documents           [1],[2],[3]   [1]
Words               9313          2987
Reading time (min)  71.6          23
Reading ease        50.3          36.6
Network requests    21            5
First party collection and use

YouTube: bad
Rumble: OK
Because YouTube is a Google product, all of the important data collection documents for YouTube actually cover Google at large. Perhaps for this reason the policy is much broader, and each document contains less specific information, since it is written to apply to so many Google products. However, unless YouTube is specified, we assume these data collection policies apply to all Google products.

YouTube’s personal information collection is similar to other platforms – account creation and any payment information – but it is distinguished in that publicly available information is also collected:

“In some circumstances, Google also collects information about you from publicly accessible sources.”

Naturally, this applies to Google’s search engine, but it raises the question of how much information is shared across Google’s products.

The UGC collected by YouTube is pretty standard, with the specification that YouTube users’ engagement activity offsite is also collected. Similarly, device information collected is pretty broad, covering Android-related analytics, log data, and location data – which includes GPS, IP address, device sensor data, plus wifi access points and Bluetooth-enabled devices near the user’s device.

Rumble in comparison collects much less. It collects standard account creation information, plus any information collected when a user creates an account using a third-party social platform.

Rumble doesn’t list collecting/processing UGC, and doesn’t directly state that the platform processes imported contacts. However, its “Changing or Deleting Your Information” section allows the user to delete “any imported contacts.”

Third party sharing and collection

YouTube: bad
Rumble: OK
YouTube’s (Google’s) third-party data sharing is largely confined to any account administrators that the user may have, Google’s business partners, anonymized ad reporting, and in response to legal requests.

It allows third parties to collect users’ browser or device information for advertising and measurement, using their own third-party cookies, beacons, etc.

Rumble’s sharing practices are pretty standard, but in practice less extensive than YouTube’s. It shares aggregate or non-identifying data with third parties for analysis, profiling, and other purposes. It also shares data with vendors, linked social media sites, and of course in response to legal requests.

User options
YouTube: good
Rumble: bad
YouTube (Google) allows users to control their privacy via their account/privacy settings. This includes ad settings and YouTube history settings. YouTube (Google) also allows users easy ways to manage, review and update their info, and delete their content or entire accounts. This includes the ability to download all collected account data.

Rumble offers limited choices, at least in its privacy policy. Users can opt out of emails, change cookie settings, and remove linked social accounts. There is no option to download all accumulated account data, but Rumble does allow users to “review, update, correct or delete the Personal Information” in their accounts.

Data retention and security

YouTube: good
Rumble: bad
YouTube (Google) gives good information on varying data retention periods. They specify that content data and activity information can be deleted whenever a user prefers, while advertising data is deleted or anonymized automatically at set periods of time.

In order to get a clearer picture, we had to go to Google’s designated data retention page. Here, Google claims to delete information immediately from public view when the user requests it, and then begins the process to remove it from their systems, which generally takes two months, plus the standard 30-day waiting period – but it does not provide a maximum allowed time.

Ad log data is anonymized by removing part of the IP address after 9 months and removing cookie information after 18 months. However, it appears that this data is never deleted.
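Google doesn’t publish exactly which bits it removes. A common “IP masking” approach, and a reasonable sketch of what removing part of an IP address can mean, is to zero the host portion (this is an assumption, not Google’s exact implementation):

```python
# Sketch of partial-IP anonymization: zero the low-order bits so the
# address identifies a network region, not a single device. The /24 and
# /48 prefixes are common masking choices, assumed here for illustration.
import ipaddress

def anonymize_ip(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48  # keep /24 (IPv4) or /48 (IPv6)
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)
```

With this scheme, 203.0.113.77 becomes 203.0.113.0: the log entry still supports coarse geographic analysis, but no longer pinpoints one host.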

YouTube (Google) has a dedicated security section, the most in-depth of all the platforms here.

Rumble has a less attractive data retention policy. It does not provide detailed information on the retention periods for various types of data, and it implies that not all information may be deleted. Its entire data retention policy amounts to a few sentences.

The privacy policy directs users to its Terms of Service page (actually titled “Terms & Conditions”) and a section called “Sharing Your Content,” apparently for more information on data retention. However, no such section exists on the Terms & Conditions page, and there is no further information on data retention.

Rumble does, at least, have a designated section for security, although the promises are sparse: it commits to “use commercially reasonable safeguards” to protect user data. It also includes a breach notification section, promising to notify users via email or a “conspicuous posting” on Rumble as soon as possible. None of the other platforms provide this information.

Summary
YouTube: OK
Rumble: bad
YouTube narrowly beats out Rumble in terms of its data collection and processing policies. YouTube (Google) collects and processes too much data, but it offers better user choices, offers data portability, and has clearer data retention policies. While Rumble collects less data, it doesn’t offer as many options for the user.

Reddit and Voat (R.I.P.)

Voat was a Reddit clone that allowed for “free speech” without moderation, except in extreme cases, and offered users the chance to share in ad revenue. Voat shut down its services on December 25, 2020, apparently after an investor backed out in March. It had about 3 million monthly visitors.

                    Reddit    Voat
Documents           [1]       [1],[2]
Words               4305      2173
Reading time (min)  33.1      16.7
Reading ease        39.5      39.8
Network requests    14        N/A
First party collection and use

Reddit: OK
Voat: bad
Reddit collects the standard personal information (account creation information, payment data and other information provided by the user), UGC and engagement activity, and device information (log and usage data, cookies, and IP address, Bluetooth or GPS location data).

Voat’s first party collection policy is non-standard, since it provides almost no real information. It claims to collect account creation information, log and usage data and cookies. But it doesn’t discuss the UGC or engagement data that a social platform normally collects.

Third party sharing and collection

Reddit: OK
Voat: bad
Reddit shares user data in a standard way. However, it also claims to share data with any “parents, affiliates, subsidiaries, and other companies under common control and ownership.” Beyond that, it interestingly notes that it will also share personal information in emergency situations “to prevent imminent and serious bodily harm to a person.”

While common sense would dictate that Voat has similar data sharing practices to other platforms here, it only admits to using Google Recaptcha:

“Voat uses Google Recaptcha in order to minimize spam. For more information about how Google handles recorded data, please consult the Google Privacy Policy.”

User choice

Reddit: good
Voat: bad
Reddit provides users with a detailed list of options, including editing and deleting information, removing linked services, changing cookie settings, opting out of ads and Do Not Track, mobile notifications and even location settings.

Reddit also provides information on how to delete content or the entire account, plus it allows users to submit a request to get all their account and activity data. However, it may take up to 30 days to process the request.

Unsurprisingly, Voat offers no information about any user choices to update settings or access, edit and delete their information.

Data retention and security
Reddit: bad
Voat: bad
Reddit’s data retention policy is very short and provides no practical information:

“We store the information we collect for as long as it is necessary for the purpose(s) for which we originally collected it. We may retain certain information for legitimate business purposes or as required by law.”

It does have a separate section for security, however, with information on HTTPS usage and access controls for its employees.

Voat, again unsurprisingly, has practically no information on its data retention practices. In its Terms & Conditions, it discusses its security with the following:

“Please don’t hack us 🙂 We support the responsible reporting of security vulnerabilities. To report a Voat security issue, please send an email to hello@voat.co.”

Summary
Reddit: OK
Voat: bad
Reddit is the clear winner here as Voat’s data collection documents are too short, vague and practically useless to give users a good idea of what data is collected and what happens to that data.

TikTok and Triller
Triller is a short-form video sharing platform similar to TikTok that was popularized when Trump first raised concerns about TikTok. Triller’s Android app has been installed more than 10 million times.

TikTok’s data collection policies can be found in its comprehensive privacy policy, which comes in three different versions for US, European and non-European/non-US users. It’s worth noting that the European version is 67% longer than the US version.

                    TikTok    Triller
Documents           [1]       [1]
Words               2964      8629
Reading time (min)  22.8      66.4
Reading ease        37.1      44.3
Network requests    54        32
First party collection and use
TikTok: bad
Triller: bad
TikTok’s data collection is for the most part standard – account creation and payment information for the personal information category. They also list that they collect “information to verify an account,” which is common for Parler, Facebook, and other platforms at certain points (for example, Facebook will ask for verification if there is a problem or some suspicion around your account, whereas Parler asks for verification information immediately when you join the platform).

Interestingly, they also claim to collect information about users “from other publicly available sources.” This is understandable for YouTube (Google), since Google runs a search engine, but less clear in TikTok’s case.

Content-wise, they collect uploaded contact information, UGC, engagement, etc. They also collect device information and location data (from the SIM card and/or IP address, or GPS with the user’s permission).

Triller seems to collect a similar amount of data. However, one aspect worth noting is that Triller doesn’t handle its own messaging. Instead, it outsources all messaging functionality to a third party known as Quickblox (even though Triller spells it “Quickblocks”). Triller still collects message-related data, including:

“Personal Information, in the context of composing, sending, or receiving messages to other Users (that means the content as well as information about when the message has been sent, received and/or read and the participants of the communication) through our Service’s messaging functionality.”

However, Triller’s privacy policy doesn’t state whether Quickblox collects and processes this data as well. When we approached Quickblox about this, a representative told CyberNews that “we no longer have a business relationship with Triller and we will be in contact with them to remove our mis-spelt name from their website.”

Third party sharing and collection
TikTok: OK
Triller: OK
TikTok has been accused of sharing user data with the Chinese government. However, there is nothing particularly alarming inside its privacy policies.

They share data with third-party vendors and analytics, payment processors, researchers, anonymized ad data, etc. They also share data in response to legal requests and, with consent, with linked social accounts.

Lastly, they claim to share user information with “a parent, subsidiary, or other affiliate of our corporate group.” While its parent company is Chinese, TikTok has repeatedly claimed to not share user data with the Chinese government, or even store data in China.

Triller has nearly the same data sharing policy, with the addition of allowing third-party tracking cookies and other technology from ad partners who “may collect Personal Information when you visit the Platform or other online websites and services.”

User options

TikTok: bad
Triller: bad
Users have a variety of choices on TikTok to control the amount of data being collected. This includes disabling cookies, opting out of ads, limiting location data, and accessing or editing account information; TikTok also respects Do Not Track requests.

However, TikTok doesn’t provide a way for users to download all their account data. Furthermore, there is no easy way to delete content besides doing so manually on a video-by-video basis or deleting the entire account. Even when deleting an account, it’s not clear if the account data is deleted from TikTok’s systems. Instead, they require users to send a request via email or physical post to view or delete all collected data:

“You may submit a request to access or delete the information we have collected about you by sending your request to us at the email or physical address provided in the Contact section at the bottom of this policy. We will respond to your request consistent with applicable law and subject to proper verification.”

At least, this is the US version of their privacy policy. The EU version is longer, but it doesn’t present much better options:

“You can ask us, free of charge, to confirm we process your personal data and for a copy of your personal data.”

In view of the other social platforms in this research, it is almost laughable that they advertise the ability to confirm or download all account data as “free of charge,” or that they expect users to send physical mail to do so.

Triller doesn’t fare much better. It allows users to change location settings, cookies, and access or edit their account information. However, it does not respect Do Not Track, instead claiming that “many websites and online services” follow the same practice.
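For context, Do Not Track is simply an HTTP request header (DNT: 1), so honoring it, as TikTok does and Triller does not, amounts to a one-line server-side check. A minimal sketch, where the plain dict stands in for whatever headers object a web framework provides:

```python
# Minimal sketch of honoring the Do Not Track header: skip analytics and
# ad tracking whenever the client sends "DNT: 1". The dict-based interface
# is illustrative; real frameworks expose headers on a request object.

def tracking_allowed(headers: dict) -> bool:
    # "1" means the user opted out of tracking; anything else permits it.
    return headers.get("DNT") != "1"
```

A platform that "does not respect Do Not Track" simply never consults this header before loading its trackers.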

When it comes to data portability, or allowing the account holder to view and get a copy of the accumulated account information, they have a pretty similar position as TikTok’s EU version.

In Triller’s version, data portability and data deletion requests are to be sent to an email address only, but the information that can be requested only covers “the past 12 months.”

Data retention and security

TikTok: bad
Triller: bad
TikTok and Triller also have similar approaches to data retention and security.

That is to say: they do not have any clear data retention policies, but they do have a separate section on security. TikTok’s security section is small, with only three sentences and no practical information.

While Triller’s security section is much larger at eight sentences, the information is only vaguely helpful, with promises of “generally accepted industry standards” for account security.

Summary
TikTok: bad
Triller: bad
Overall, both TikTok and Triller perform poorly: they request too much data, provide too few user options, lack clear data retention policies, and make data portability difficult.

For recommendations, see the original post at:
https://cybernews.com/privacy/how-parler-twitter-facebook-mewe-data-policies-compare/


Facebook Awards Big Bounties for Invisible Post and Account Takeover Vulnerabilities
13.1.2021 
Social  Securityweek

One researcher said he earned $30,000 from Facebook for finding a vulnerability that could have been exploited to create invisible posts on any page. The same amount was paid out to a different researcher for an account hijacking flaw.

Bug bounty hunter Pouya Darabi discovered in November that an attacker could have created invisible posts on any Facebook page, including verified pages, without having any permissions on the targeted page.

The researcher found the vulnerability while analyzing Creative Hub, a tool that allows Facebook users to create and preview ads for Facebook, Instagram or Messenger. Creative Hub enables users to collaborate on ad mockups and the ads can be previewed by creating an invisible post on the selected page.

These invisible posts have an ID and a link, but they are not visible on the page where they have been created — they can only be viewed by users who have the link.

Darabi discovered that changing the page_id parameter in a request sent when creating such an invisible post leads to the post being created on the Facebook page associated with the specified page_id. “All we need to do is to find the post_id that exists on any ad preview endpoints,” he explained.

However, when an invisible post is created to preview an ad, Facebook checks if the user has the permissions needed to post on the targeted page. The researcher bypassed this requirement by abusing the “Share” feature in Creative Hub, which creates a link that gives others access to the ad preview. The permission check was missing when this Share feature was used, enabling an attacker to create invisible posts on pages where they did not have any role.
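In other words, one code path enforced the page-role check and another did not. A hypothetical reconstruction of the fixed logic (the names and data structures here are illustrative, not Facebook’s actual API):

```python
# Toy model of the bug and its fix: the permission check must run on every
# path that creates an invisible post, including the Share-link path where
# it was originally missing. page_roles and the function name are invented.

page_roles = {("alice", "page_123"): "admin"}  # (user, page_id) -> role

def create_invisible_post(user: str, page_id: str, content: str,
                          via_share_link: bool = False) -> str:
    # Enforce the role check regardless of how the request arrived;
    # the vulnerable version skipped it when via_share_link was True.
    if page_roles.get((user, page_id)) is None:
        raise PermissionError(f"{user} has no role on {page_id}")
    return f"invisible_post_on_{page_id}"
```

The class of bug, an authorization check applied on one endpoint but missing on an equivalent one, is a common pattern in access-control vulnerabilities.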

This vulnerability could have been highly useful to malicious actors as it would have allowed them to create posts with any content — this includes scams and malicious links — on any Facebook page, making their posts more likely to be trusted by users. Darabi told SecurityWeek that an attacker could have easily shared the invisible post on Facebook groups, profiles and pages.

“These types of posts are not shown on the feed timeline but are accessible via a direct link,” Darabi explained in a blog post. “The main impact of these types of posts is that the page admins cannot view or delete them since they don't have any links.”

Darabi reported his findings to Facebook on November 6 and the social media giant implemented a fix within a week. However, the researcher managed to bypass the fix. He earned $15,000 for finding the vulnerability and another $15,000 for bypassing Facebook’s patch. The company said it had found no evidence of exploitation for malicious purposes.

Bug bounty hunter Youssef Sammouda also reported finding an interesting Facebook vulnerability recently. He also earned $30,000 from Facebook for a security flaw he reported to the company in November 2020.

Sammouda discovered a cross-site scripting (XSS) vulnerability on a subdomain for Facebook’s Oculus VR headsets, which ultimately allowed him to hijack both Oculus and Facebook accounts. The researcher published a blog post detailing his findings earlier this month.


Twitter has permanently suspended the account of President Donald Trump
10.1.2021 
Social  Securityaffairs

Twitter permanently suspended the account of President Donald Trump on Friday, due to the risk of further incitement of violence.
Twitter has permanently suspended President Donald Trump’s account fearing his tweets may trigger a new wave of violence.

In response to the attack on the U.S. Capitol, the President’s account was initially suspended for 12 hours on Wednesday; the social media platform said that its decision was caused by “severe violations of our Civic Integrity policy.”

“After close review of recent Tweets from the @realDonaldTrump account and the context around them — specifically how they are being received and interpreted on and off Twitter — we have permanently suspended the account due to the risk of further incitement of violence. ” the company announced in a tweet.

The company had threatened permanent suspension for repeated violations of its policies, but President Trump ignored the warning and posted the following messages:

“The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!”

Shortly thereafter, the President tweeted:

“To all of those who have asked, I will not be going to the Inauguration on January 20th.”

Twitter finally decided to suspend the President’s account after assessing the above tweets under its Glorification of Violence policy, which aims to prevent the glorification of violence that could inspire violent acts.

Below is an excerpt of Twitter’s interpretation of the messages:

“The second Tweet may also serve as encouragement to those potentially considering violent acts that the Inauguration would be a “safe” target, as he will not be attending.” states the company.

“The use of the words “American Patriots” to describe some of his supporters is also being interpreted as support for those committing violent acts at the US Capitol.”

“Plans for future armed protests have already begun proliferating on and off-Twitter, including a proposed secondary attack on the US Capitol and state capitol buildings on January 17, 2021.”



Facebook’s Mandatory Data-Sharing Rules for WhatsApp Spark Ire

8.1.2021  Social  Threatpost

The messaging platform will update its privacy policy on Feb. 8 to integrate further with its parent company, prompting users to cry foul over privacy issues.

WhatsApp is asking users to accept a new privacy policy that will share all of their data with Facebook beginning Feb. 8, a move that has users sounding an alarm once again about the privacy of their information in the hands of the social media giant.

The Facebook-owned messaging service already has sent ultimatum-like pop-up messages to users in some regions, including India, asking them to accept the new privacy regulations or risk losing their accounts, according to reports.

“WhatApp is updating its terms and privacy policy,” the notification said, according to one report. “The new update makes it mandatory for the users to accept the terms and conditions in order to retain their WhatsApp account information.”

WhatsApp chose a rather curious time to begin informing users of the change, as it comes just a few days after the company updated its Privacy Policy and Terms of Service, reiterating its commitment to privacy for its users and their messages.

“We’ve built privacy, end-to-end encryption, and other security features into WhatsApp,” the company assured users in the update. “We don’t store your messages once they’ve been delivered. When they are end-to-end encrypted, we and third parties can’t read them.”

The move also comes at a time when Facebook is embroiled in twin antitrust suits filed by dozens of states and the federal government that call for the tech giant to be broken up due to exactly this type of activity. The lawsuits allege that the company has abused its dominance in the digital marketplace and engaged in anticompetitive behavior.

WhatsApp already shares “certain categories of information” with Facebook, which purchased the messaging service in February 2014 and has been gradually integrating the two platforms and what data is shared between them more closely.

This information disclosed to the Facebook Companies already adds up to a fair bit of data, including users’ account registration information, such as phone number; transaction data; service-related information; data on how users interact with others, including businesses; mobile device information; IP address; as well as other info identified as information users have given the service consent to collect, according to WhatsApp.

The expansion in data sharing between the two platforms will now ask users to provide payment account and transaction information to WhatsApp, according to one report. If they don’t want to do this, they can choose instead to delete their account, according to the notification WhatsApp sent to users.

The move to give users no choice in the matter of whether they support an expansion of data sharing between the two platforms is ostensibly a way for Facebook to provide targeted advertising so the parent company can further monetize its messaging asset, according to reports.

Users, for their part, are less than pleased with the situation. The increasingly tight relationship between Facebook and WhatsApp already has seen a migration of users to other messaging services, including Telegram and Signal.

Some users predictably took to Twitter to grumble publicly about the upcoming change in WhatsApp’s privacy policy.

“These updates in the terms and privacy policy of #WhatsApp makes it even worse than they were before,” tweeted the Free Software Foundation (FSF) in Tamil Nadu, India. “The data of individual users and people in their contacts has been shared with n number of 3rd parties and Facebook and its services.” FSF is a nonprofit organization for the creation and distribution of free software programs and applications.

Another user said the move seems to be an attempt for Facebook to keep users interested in the platform, as public opinion of the social network has inspired users to abandon it or use it less frequently.

“Woke up to Whatsapp’s (Facebook’s?) take it or leave it styled #PrivacyPolicy update,” tweeted Abhiram, a “technology enthusiast” and WhatsApp user based in Chennai, India. “Desperate times call for moves to keep Facebook alive and relevant.”

Facebook is no stranger to accusations of violating users’ privacy, problems that have cost the company billions in fines and, perhaps more importantly, fueled mistrust among users.

Last year the company started reporting its privacy practices to a newly formed, independent Privacy Committee as part of a 2019 settlement with the Federal Trade Commission (FTC) over data privacy violations stemming from the Cambridge Analytica debacle.


WhatsApp will share your data with Facebook and its companies
7.1.2021 
Social  Securityaffairs

WhatsApp is notifying users that starting February 8, 2021, they will be obliged to share their data with Facebook, leaving them no choice.
This is bad news for WhatsApp users and their privacy: starting on that date, they will be required to share their data with the Facebook companies or stop using the service.

Curiously, the announcement comes just a few days after the company updated its Privacy Policy and Terms of Service.
“Respect for your privacy is coded into our DNA,” states WhatsApp’s privacy policy. “Since we started WhatsApp, we’ve aspired to build our Services with a set of strong privacy principles in mind.”

According to Facebook, the move aims at improving the user experience through targeted advertising. WhatsApp currently shares specific information with Facebook companies, including account registration data, transaction data, and service-related information.

“WhatsApp currently shares certain categories of information with Facebook Companies. The information we share with the other Facebook Companies includes your account registration information (such as your phone number), transaction data, service-related information, information on how you interact with others (including businesses) when using our Services, mobile device information, your IP address, and may include other information identified in the Privacy Policy section entitled ‘Information We Collect’ or obtained upon notice to you or based on your consent.” states WhatsApp.

The new policy increases the type of information that users will provide to the company, including payment account and transaction information.

WhatsApp will share data with Facebook Companies, including Facebook, Facebook Payments, Onavo, Facebook Technologies, and CrowdTangle.

Users who do not agree to the updated policy will no longer be able to access their accounts and can instead choose to delete them.

“By tapping AGREE, you accept the new terms and privacy policy, which take effect on February 8, 2021,” reads the notification sent to the WhatsApp users.

“After this date, you’ll need to accept these updates to continue using WhatsApp. You can also visit the Help Center if you would prefer to delete your account and would like more information.”

WhatsApp’s updated policy details how the company automatically collects information about user activity:

“We collect information about your activity on our Services, like service-related, diagnostic, and performance information. This includes information about your activity (including how you use our Services, your Services settings, how you interact with others using our Services (including when you interact with a business), and the time, frequency, and duration of your activities and interactions), log files, and diagnostic, crash, website, and performance logs and reports. This also includes information about when you registered to use our Services; the features you use like our messaging, calling, Status, groups (including group name, group picture, group description), payments or business features; profile photo, “about” information; whether you are online, when you last used our Services (your “last seen”); and when you last updated your “about” information.”

Privacy advocates are also concerned about the huge trove of metadata WhatsApp collects.


Telegram Triangulation Pinpoints Users’ Exact Locations

6.1.2021  Social  Threatpost

The “People Nearby” feature in the secure messaging app can be abused to unmask a user’s precise location, a researcher said.

A feature that allows Telegram users to see who’s nearby can be misused to pinpoint another user’s exact location by spoofing one’s own latitude and longitude.

According to bug-hunter Ahmed Hassan, the “People Nearby” feature could allow an attacker to triangulate the location of unsuspecting Telegram users. The feature is disabled by default, but as Hassan pointed out, “Users who enable this feature are not aware they are basically publishing their precise location.”


The feature lists exactly how far people are from one’s location (1.3 miles and so on). This isn’t an issue as long as that number remains a radius. But it’s possible to spoof one’s location for three different points, and then use the resulting three distances to precisely pinpoint where a target is, the researcher found.

Courtesy: Ahmed’s Notes.

To spoof a GPS location, an adversary has various options, but the easiest method, Hassan noted in a Monday blog, is to “just walk around the area, collect the GPS latitude and longitude of yourself, and how far the target person is from you (super easy).”

Another option is to use a GPS-spoofing app.

“There is an app in the [Google Play] store called GPS spoof; download it and install it,” he noted. “After [that]…spoof the location near the user within a seven-mile radius limit. That’s the limit Telegram has in place…then collect how far that person is from that point. Repeat three times.”

Armed with the three locations, an attacker can then open Google Earth Pro, plug in the spoofed locations, and use a ruler to find the middle point between the three.

“The intersection of the three circles is the location of the user,” Hassan explained. “To verify this, I added one of the users and asked them if they live near the point. I was able to get that user’s exact home address.”
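The geometry Hassan describes, intersecting three distance circles, can also be solved numerically rather than eyeballed in Google Earth Pro. A minimal sketch in Python, assuming flat-plane coordinates (a reasonable approximation over a few miles); the function name and example points are illustrative, not from the research:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Find the point whose distances to p1, p2, p3 are d1, d2, d3.

    Subtracting the three circle equations pairwise cancels the x^2 and y^2
    terms, leaving a 2x2 linear system solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linearized system: A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        # All three observation points on one line: no unique intersection
        raise ValueError("observation points are collinear; pick a third spot")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three spoofed observation points and the distances the app reports:
print(trilaterate((0, 0), 5.0, (6, 0), 5.0, (0, 8), 5.0))  # -> (3.0, 4.0)
```

This is why three measurements suffice: two circles intersect in up to two points, and the third distance disambiguates them.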

For Telegram’s part, the company said it doesn’t regard the issue as a bug, and rejected Hassan’s security report.

Triangulation. Courtesy: Ahmed’s Notes.

“Users in the People Nearby section intentionally share their location, and this feature is disabled by default,” was Telegram’s response, according to the researcher. “It’s expected that determining the exact location is possible under certain conditions. Unfortunately, this case is not covered by our bug-bounty program.”

To fix it, the company could round user locations to the nearest mile “and add a static random noise,” Hassan said. “Tinder had the same issue and they fixed it by creating buckets.”
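The bucketing-plus-noise mitigation Hassan suggests could look something like the following sketch. This is a hypothetical helper, not Telegram's or Tinder's actual code; the key idea is that the noise must be static per user pair (here via a fixed seed), because fresh random noise on every query could be averaged away by repeated sampling:

```python
import random

def reported_distance(true_miles, bucket_miles=1.0, noise_miles=0.3, seed=None):
    """Return a privacy-preserving distance for display.

    Snaps the true distance to the nearest bucket, then adds static random
    noise so the displayed value cannot be trilaterated back to an exact
    location. `seed` should be derived from the user pair so the noise does
    not change between queries.
    """
    rng = random.Random(seed)
    noise = rng.uniform(-noise_miles, noise_miles)
    bucketed = round(true_miles / bucket_miles) * bucket_miles
    return max(0.0, bucketed + noise)
```

With mile-wide buckets, every target inside the same ring reports the same distance, so the three-circle intersection degrades from a point to an area roughly a mile across.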

Telegram did not immediately return a request for comment.


WhatsApp Will Delete Your Account If You Don't Agree Sharing Data With Facebook
6.1.2021 
Social  Thehackernews
"Respect for your privacy is coded into our DNA," opens WhatsApp's privacy policy. "Since we started WhatsApp, we've aspired to build our Services with a set of strong privacy principles in mind."

But come February 8, 2021, this opening statement will no longer find a place in the policy.

The Facebook-owned messaging service is alerting users in India of an update to its terms of service and privacy policy that's expected to go into effect next month.

The "key updates" concern how it processes user data, "how businesses can use Facebook hosted services to store and manage their WhatsApp chats," and "how we partner with Facebook to offer integrations across the Facebook Company Products."

Users failing to agree to the revised terms by the cut-off date will have their accounts deleted, the company said in the notification.

WhatsApp's Terms of Service was last updated on January 28, 2020, while its current Privacy Policy was enforced on July 20, 2020.

Facebook Company Products refers to the social media giant's family of services, including its flagship Facebook app, Messenger, Instagram, Boomerang, Threads, Portal-branded devices, Oculus VR headsets (when using a Facebook account), Facebook Shops, Spark AR Studio, Audience Network, and NPE Team apps.

It, however, doesn't include Workplace, Free Basics, Messenger Kids, and Oculus Products that are tied to Oculus accounts.

What's Changed in its Privacy Policy?
In its updated policy, the company expanded on the "Information You Provide" section with specifics about payment account and transaction information collected during purchases made using the app and has replaced the "Affiliated Companies" section with a new "How We Work With Other Facebook Companies" that goes into detail about how it uses and shares the information gathered from WhatsApp with other Facebook products or third-parties.

This encompasses promoting safety, security, and integrity, providing Portal and Facebook Pay integrations, and last but not least, "improving their services and your experiences using them, such as making suggestions for you (for example, of friends or group connections, or of interesting content), personalizing features and content, helping you complete purchases and transactions, and showing relevant offers and ads across the Facebook Company Products."

One section that's received a major rewrite is "Automatically Collected Information," which covers "Usage and log Information," "Device And Connection Information," and "Location Information."

"We collect information about your activity on our Services, like service-related, diagnostic, and performance information. This includes information about your activity (including how you use our Services, your Services settings, how you interact with others using our Services (including when you interact with a business), and the time, frequency, and duration of your activities and interactions), log files, and diagnostic, crash, website, and performance logs and reports. This also includes information about when you registered to use our Services; the features you use like our messaging, calling, Status, groups (including group name, group picture, group description), payments or business features; profile photo, "about" information; whether you are online, when you last used our Services (your "last seen"); and when you last updated your "about" information."

With regards to the device and connection data, WhatsApp spelled out the pieces of information it gathers: hardware model, operating system information, battery level, signal strength, app version, browser information, mobile network, connection information (including phone number, mobile operator or ISP), language and time zone, IP address, device operations information, and identifiers (including identifiers unique to Facebook Company Products associated with the same device or account).

"Even if you do not use our location-related features, we use IP addresses and other information like phone number area codes to estimate your general location (e.g., city and country)," WhatsApp's updated policy reads.

Concerns About Metadata Collection
While WhatsApp is end-to-end encrypted, its privacy policy offers an insight into the scale and wealth of metadata that's amassed in the name of improving and supporting the service. Even worse, all of this data is linked to a user's identity.

Apple's response to this unchecked metadata collection is privacy labels, now live for first- and third-party apps distributed via the App Store, that aim to help users better understand an app's privacy practices and "learn about some of the data types an app may collect, and whether that data is linked to them or used to track them."

The rollout forced WhatsApp to issue a statement last month. "We must collect some information to provide a reliable global communications service," it said, adding "we minimize the categories of data that we collect" and "we take measures to restrict access to that information."

In stark contrast, Signal collects no metadata, whereas Apple's iMessage makes use of only email address (or phone number), search history, and a device ID to attribute a user uniquely.

There's no denying that privacy policies and terms of service agreements are often long, boring, and mired in obtuse legalese as if deliberately designed with an intention to confuse users. But updates like this are the reason it's essential to read them instead of blindly consenting without really knowing what you are signing up for. After all, it is your data.


Facebook ads used to steal 615,000+ credentials in a phishing campaign
2.1.2021 
Social  Securityaffairs

Cybercriminals are abusing Facebook ads in a large-scale phishing scam aimed at stealing victims’ login credentials.
Researchers from security firm ThreatNix spotted a new large-scale campaign abusing Facebook ads. Threat actors are using Facebook ads to redirect users to Github accounts hosting phishing pages used to steal victims’ login credentials.

The campaign targeted more than 615,000 users in multiple countries including Egypt, the Philippines, Pakistan, and Nepal.

The landing pages are phishing pages that impersonate legitimate companies. Once victims provide their credentials, they are sent to the attackers through a Firestore database and a domain hosted on GoDaddy.

“Our researchers first came across the campaign through a sponsored Facebook post that was offering 3 GB mobile data from Nepal Telecom and redirecting to a phishing site hosted on GitHub pages.” reads the post published by ThreatNix.

The campaign appears well orchestrated: threat actors used localized Facebook posts and pages that mimic legitimate organizations, with ads targeted at specific countries.
The scammers used an intriguing trick to avoid detection: shortened URLs that initially pointed to a benign page and were changed to point to the phishing domain only after the ads were approved.

“While Facebook takes measures to make sure that such phishing pages are not approved for ads, in this case the scammers were using Bitly link’s which initially must have pointed to a benign page and once the ad was approved, was modified to point to the phishing domain.” continues the post.

Attackers behind this campaign used at least 500 GitHub repositories hosting phishing pages, some of which are already inactive. The first phishing page was created on GitHub five months ago.

“Following some digging we were able to gain access to those phished credentials. At the time of writing this post there appears to be more than 615,000+ entries and the list is growing at a rapid pace of more than a 100 entries per minute.” concludes the post.
Experts are working with relevant authorities to take down the phishing infrastructure used in this campaign.

In October, Facebook detailed an ad-fraud cyberattack that had been ongoing since 2016: crooks are using malware tracked as SilentFade (short for “Silently running Facebook Ads with Exploits”) to steal Facebook credentials and browser cookies.

The social network giant revealed that the malware has a Chinese origin and allowed hackers to siphon $4 million from users’ advertising accounts.

Threat actors initially compromised Facebook accounts, then used them to steal browser cookies and carry out malicious activities, including the promotion of malicious ads.