

Hackers Abusing GitHub to Evade Detection and Control Compromised Hosts
19.12.23  Security  The Hacker News


Threat actors are increasingly making use of GitHub for malicious purposes through novel methods, including abusing secret Gists and issuing malicious commands via git commit messages.

"Malware authors occasionally place their samples in services like Dropbox, Google Drive, OneDrive, and Discord to host second stage malware and sidestep detection tools," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News.

"But lately, we have observed the increasing use of the GitHub open-source development platform for hosting malware."

Legitimate public services are known to be used by threat actors for hosting malware and acting as dead drop resolvers to fetch the actual command-and-control (C2) address.

This technique is sneaky as it allows threat actors to blend their malicious network traffic with genuine communications within a compromised network, making it challenging to detect and respond to threats effectively. As a result, an infected endpoint communicating with a GitHub repository is less likely to be flagged as suspicious.

The abuse of GitHub gists points to an evolution of this trend. Gists, which are themselves Git repositories, offer an easy way for developers to share code snippets with others.

It's worth noting at this stage that public gists show up in GitHub's Discover feed, while secret gists, although not surfaced in Discover, can still be shared with others via their URLs.

"However, if someone you don't know discovers the URL, they'll also be able to see your gist," GitHub notes in its documentation. "If you need to keep your code away from prying eyes, you may want to create a private repository instead."

Another interesting aspect of secret gists is that they are not displayed in the GitHub profile page of the author, enabling threat actors to leverage them as some sort of a pastebin service.

ReversingLabs said it identified several PyPI packages – namely, httprequesthub, pyhttpproxifier, libsock, libproxy, and libsocks5 – that masqueraded as libraries for handling network proxying, but contained a Base64-encoded URL pointing to a secret gist hosted in a throwaway GitHub account without any public-facing projects.

The gist, for its part, features Base64-encoded commands that are parsed and executed in a new process through malicious code present in the setup.py file of the counterfeit packages.
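The dead-drop pattern described above can be illustrated with a minimal sketch; the gist URL below is hypothetical, standing in for the Base64-encoded values the actual packages embedded in their setup.py files:

```python
import base64

def decode_dead_drop_url(blob: str) -> str:
    """Decode a Base64-encoded C2/gist URL of the kind hidden in the packages."""
    return base64.b64decode(blob).decode("utf-8")

# Hypothetical example value; the real packages carried their own encoded URLs.
encoded = base64.b64encode(b"https://gist.github.com/attacker/abc123").decode()
print(decode_dead_drop_url(encoded))
```

Because the string in the package source is just opaque Base64, a casual review of the code sees no URL at all until the decode runs at install time.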

The use of secret gists to deliver malicious commands to compromised hosts was previously highlighted by Trend Micro in 2019 as part of a campaign distributing a backdoor called SLUB (short for SLack and githUB).

A second technique observed by the software supply chain security firm entails the exploitation of version control system features, relying on git commit messages to extract commands for execution on the system.

The PyPI package, named easyhttprequest, incorporates malicious code that "clones a specific git repository from GitHub and checks if the 'head' commit of this repository contains a commit message that starts with a specific string," Zanki said.

"If it does, it strips that magic string and decodes the rest of the Base64-encoded commit message, executing it as a Python command in a new process." The GitHub repository that gets cloned is a fork of a seemingly legitimate PySocks project, and it does not have any malicious git commit messages.

All the fraudulent packages have now been taken down from the Python Package Index (PyPI) repository.

"Using GitHub as C2 infrastructure isn't new on its own, but abuse of features like Git Gists and commit messages for command delivery are novel approaches used by malicious actors," Zanki said.


Microsoft Expands Cloud Logging to Counter Rising Nation-State Cyber Threats
20.7.23  Security  The Hacker News
Microsoft on Wednesday announced that it's expanding cloud logging capabilities to help organizations investigate cybersecurity incidents and gain more visibility after facing criticism in the wake of a recent espionage attack campaign aimed at its email infrastructure.

The tech giant said it's making the change in direct response to the increasing frequency and evolving sophistication of nation-state cyber threats. The expanded logging is expected to roll out starting in September 2023 to all government and commercial customers.

"Over the coming months, we will include access to wider cloud security logs for our worldwide customers at no additional cost," Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft, said. "As these changes take effect, customers can use Microsoft Purview Audit to centrally visualize more types of cloud log data generated across their enterprise."

As part of this change, users are expected to receive access to detailed logs of email access and more than 30 other types of log data previously only available at the Microsoft Purview Audit (Premium) subscription level. On top of that, the Windows maker said it's extending the default retention period for Audit Standard customers from 90 days to 180 days.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) welcomed the move, stating "having access to key logging data is important to quickly mitigating cyber intrusions" and that it's "a significant step forward toward advancing security by design principles."

The development comes in the aftermath of disclosures that a threat actor operating out of China, dubbed Storm-0558, breached 25 organizations by exploiting a validation error in the Microsoft Exchange environment.

The U.S. State Department, which was one among the affected entities, said it was able to detect the malicious mailbox activity in June 2023 due to enhanced logging in Microsoft Purview Audit, specifically using the MailItemsAccessed mailbox-auditing action, prompting Microsoft to investigate the incident.

But other impacted organizations said they were unable to detect that they were breached because they were not subscribers of E5/A5/G5 licenses, which come with elevated access to various kinds of logs that would be crucial to investigate the hack.
Attacks mounted by the actor are said to have commenced on May 15, 2023, although Redmond said that the adversary has displayed a propensity for OAuth applications, token theft, and token replay attacks against Microsoft accounts since at least August 2021.

Microsoft, in the meanwhile, is continuing to probe the intrusions, but to date the company hasn't explained how the hackers were able to acquire an inactive Microsoft account (MSA) consumer signing key to forge authentication tokens and obtain illicit access to customer email accounts using Outlook Web Access in Exchange Online (OWA) and Outlook.com.

"The objective of most Storm-0558 campaigns is to obtain unauthorized access to email accounts belonging to employees of targeted organizations," Microsoft revealed last week.

"Once Storm-0558 has access to the desired user credentials, the actor signs into the compromised user's cloud email account with the valid account credentials. The actor then collects information from the email account over the web service."


New Mozilla Feature Blocks Risky Add-Ons on Specific Websites to Safeguard User Security
10.7.23  Security  The Hacker News
Firefox Quarantined Domains
Mozilla has announced that some add-ons may be blocked from running on certain sites as part of a new feature called Quarantined Domains.

"We have introduced a new back-end feature to only allow some extensions monitored by Mozilla to run on specific websites for various reasons, including security concerns," the company said in its Release Notes for Firefox 115.0 released last week.

The company said the openness afforded by the add-on ecosystem could be exploited by malicious actors to their advantage.

"This feature allows us to prevent attacks by malicious actors targeting specific domains when we have reason to believe there may be malicious add-ons we have not yet discovered," Mozilla said in a separate support document.

Users are expected to gain more control over the setting for each add-on starting with Firefox version 116. In the meantime, the feature can be disabled by loading "about:config" in the address bar and setting "extensions.quarantinedDomains.enabled" to false.

The development adds to Mozilla's existing capability to remotely disable individual extensions that pose a risk to user privacy and security.

It's worth noting that, in the current implementation, the warning appears in the Extensions popup rather than on the Extensions icon, meaning no alert is displayed when an add-on is pinned to the toolbar.

"It turns out that when you pin an extension to the toolbar, it no longer appears in the Extensions popup!" security researcher and add-on developer Jeff Johnson noted.

"Consequently, the quarantined domains warning no longer appears in the Extensions popup either. In fact, there's no longer an Extensions popup: clicking the Extensions toolbar icon simply opens the about:addons page, which doesn't show the quarantined domains warning anywhere."
"This is a terrible user interface design for the new so-called 'security' feature, silently disabling extensions while hiding the warning from the user," Johnson added.

Mozilla has said that it intends to improve the user experience in future releases, although it did not give a definitive timeline.

The change also comes as Mozilla decried a browser-based website blocking proposal put forth by France that would require browser vendors to establish mechanisms to mandatorily block websites present on a government-provided list to tackle online fraud.

"Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools," the company said.


CAPTCHA-Breaking Services with Human Solvers Helping Cybercriminals Defeat Security
30.5.23  Security  The Hacker News
CAPTCHA
Cybersecurity researchers are warning about CAPTCHA-breaking services that are being offered for sale to bypass systems designed to distinguish legitimate users from bot traffic.

"Because cybercriminals are keen on breaking CAPTCHAs accurately, several services that are primarily geared toward this market demand have been created," Trend Micro said in a report published last week.

"These CAPTCHA-solving services don't use [optical character recognition] techniques or advanced machine learning methods; instead, they break CAPTCHAs by farming out CAPTCHA-breaking tasks to actual human solvers."

CAPTCHA – short for Completely Automated Public Turing test to tell Computers and Humans Apart – is a tool for differentiating real human users from automated users with the goal of combating spam and restricting fake account creation.

While CAPTCHA mechanisms can be a disruptive user experience, they are seen as an effective means to counter attacks from bot-originating web traffic.

The illicit CAPTCHA-solving services work by funneling requests sent by customers and delegating them to their human solvers, who work out the solution and submit the results back to the users.

This, in turn, is achieved by calling an API to submit the CAPTCHA and invoking a second API to get the results.

"This makes it easy for the customers of CAPTCHA-breaking services to develop automated tools against online web services," security researcher Joey Costoya said. "And because actual humans are solving CAPTCHAs, the purpose of filtering out automated bot traffic through these tests is rendered ineffective."

That's not all. Threat actors have been observed purchasing CAPTCHA-breaking services and combining them with proxyware offerings to obscure the originating IP address and evade antibot barriers.
Proxyware, although marketed as a utility to share a user's unused internet bandwidth with other parties in return for a "passive income," essentially turns the devices running them into residential proxies.

In one instance of a CAPTCHA-breaking service targeting popular social commerce marketplace Poshmark, the task requests emanating from a bot are routed via a proxyware network.

"CAPTCHAs are common tools used to prevent spam and bot abuse, but the increasing use of CAPTCHA-breaking services has made CAPTCHAs less effective," Costoya said. "While online web services can block abusers' originating IPs, the rise of proxyware adoption renders this method as toothless as CAPTCHAs."

To mitigate such risks, online web services are recommended to supplement CAPTCHAs and IP blocklisting with other anti-abuse tools.


PyPI Implements Mandatory Two-Factor Authentication for Project Owners
30.5.23  Security  The Hacker News
The Python Package Index (PyPI) announced last week that every account that maintains a project on the official third-party software repository will be required to turn on two-factor authentication (2FA) by the end of the year.

"Between now and the end of the year, PyPI will begin gating access to certain site functionality based on 2FA usage," PyPI administrator Donald Stufft said. "In addition, we may begin selecting certain users or projects for early enforcement."

The enforcement also includes organization maintainers, but does not extend to every single user of the service.

The goal is to neutralize the threats posed by account takeover attacks, which an attacker can leverage to distribute trojanized versions of popular packages to poison the software supply chain and deploy malware on a large scale.

PyPI, like other open source repositories such as npm, has witnessed innumerable instances of malware and package impersonation.
Earlier this month, Fortinet FortiGuard Labs discovered over 30 Python libraries that incorporated various features to connect to arbitrary remote URLs and steal sensitive data from compromised machines.

The development comes nearly a year after PyPI made 2FA mandatory for critical project maintainers. The registry is home to 457,125 projects and 704,458 users.

According to cloud monitoring service provider Datadog, 9,580 users and 4,541 projects have been identified as critical, with 2FA enabled in total for 38,248 users to date.


GitHub Extends Push Protection to Prevent Accidental Leaks of Keys and Other Secrets
12.5.23  Security  The Hacker News
GitHub Push Protection
GitHub has announced the general availability of a new security feature called push protection, which aims to prevent developers from inadvertently leaking keys and other secrets in their code.

The Microsoft-owned cloud-based repository hosting platform, which began testing the feature a year ago, said it's also extending push protection to all public repositories at no extra cost.

The functionality is designed to work hand-in-hand with the existing secret scanning feature, which scans repositories for known secret formats to prevent their fraudulent use and avert potentially serious consequences.

"Push protection prevents secret leaks without compromising the developer experience by scanning for highly identifiable secrets before they are committed," GitHub said earlier this week.

"When a secret is detected in code, developers are prompted directly in their IDE or command line interface with remediation guidance to ensure that the secret is never exposed."
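Pattern-based secret scanning of this kind can be approximated in a few lines; the regexes below are illustrative stand-ins for "highly identifiable" token formats, not GitHub's actual detection rules:

```python
import re

# Illustrative patterns only; real scanners maintain hundreds of provider formats.
SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),   # classic GitHub token shape
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
}

def scan(diff_text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_secret) pairs found in the pushed diff."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(diff_text):
            findings.append((name, match.group()))
    return findings

print(scan("token = 'ghp_" + "a" * 36 + "'"))
```

A push-protection hook would run a scan like this before accepting the push and reject it (with remediation guidance) whenever the findings list is non-empty.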

While push protection can be bypassed by providing a reason (e.g., testing, false positive, or acceptable risk), repository and organization administrators and security managers will be notified of such events via email.

To enable the option, users can head to Settings > "Code security and analysis" and turn on both "Secret scanning" and "Push protection."

Since going live as a beta in April 2022, push protection is estimated to have prevented 17,000 accidental secret leaks, saving more than 95,000 hours that would otherwise have been spent revoking, rotating, and remediating compromised secrets, the company added.

The development comes nearly five months after GitHub made Secret scanning free for all public repositories, enabling users to be notified about leaked secrets in their repositories.


Google Announces New Privacy, Safety, and Security Features Across Its Services
12.5.23  Security  The Hacker News
Google unveiled a slew of new privacy, safety, and security features today at its annual developer conference, Google I/O. The tech giant's latest initiatives are aimed at protecting its users from cyber threats, including phishing attacks and malicious websites, while providing more control and transparency over their personal data.

Here is a short list of the newly introduced features -

Improved data control and transparency
Gmail Dark Web Scan Report
Effortlessly Delete Maps Search History
AI-Powered Safe Browsing
Content Safety API Expansion
About this Image
Spam View in Google Drive
Among the newly introduced features, the first on the list is improved data control and transparency. Google has unveiled an update for its Android operating system that allows users to better control location sharing through apps installed on their devices.

"Starting with location data, you will be informed in permission requests when an app shares your information with third parties for advertising purposes," Jen Fitzpatrick, senior vice president of core systems and experiences, said.

"You can use this information to decide if you want to approve or decline location sharing for each app so you're always in control."

Android 14, besides providing granular control over the media that apps can access, brings with it a new API that allows developers to limit accessibility services from interacting with their applications and ensure that only Google Play Protect-validated applications have access to users' data.

"This adds more protection from side-loaded apps that may get installed and are trying to access sensitive data," Google's Ronnie Falcon said.

In addition, the company said it's expanding dark web reports to all users with a Gmail account in the U.S. to alert if their sensitive data is circulating on sites not indexed by search engines.

The feature, which was initially made available to Google One subscribers in March 2023, makes it possible to scan the dark web for personally identifiable information such as names, addresses, emails, phone numbers, and Social Security numbers, and seek appropriate guidance.

A third privacy-focused option launched by the tech giant is the ability to delete recent searches from Maps with a single tap as opposed to removing the Maps search history from Web & App Activity.

Other notable features include a new Safe Browsing API and a Spam view in Google Drive that's analogous to Gmail and automatically segregates potentially harmful files or abusive content, which can then be reviewed by users.

The upgrade to Safe Browsing entails a real-time API that alerts of fast-emerging low-reputation and malicious sites, thereby thwarting potential phishing attempts from threat actors who set up short-lived pages to sidestep blocklist-based checks.

The search behemoth further said it's expanding its Content Safety API to flag child sexual abuse material (CSAM) in video content, alongside debuting an "About This Image" tool that offers users more context to ensure reliable access to trustworthy information.

"'About this Image' provides you with important context like when an image or similar images were first indexed by Google, where it may have first appeared, and where else it's been seen online like a news, social or fact checking site," Fitzpatrick said.

The updates come a week after Google enabled passwordless sign-ins using passkeys across Google Accounts on all platforms.

Last month, the tech giant also enacted a new data deletion policy that requires app developers to offer a "readily discoverable option" to users from both within an app and outside of it.

(The story has been updated after publication to highlight additional privacy and security features introduced by Google in Android 14.)


Google Introduces Passwordless Secure Sign-In with Passkeys for Google Accounts
4.5.23  Security  The Hacker News
Almost five months after Google added support for passkeys to its Chrome browser, the tech giant has begun rolling out the passwordless solution across Google Accounts on all platforms.

Passkeys, backed by the FIDO Alliance, are a more secure way to sign in to apps and websites without having to use a traditional password. Instead, users authenticate by simply unlocking their computer or mobile device with biometrics (e.g., fingerprint or facial recognition) or a local PIN.

"And, unlike passwords, passkeys are resistant to online attacks like phishing, making them more secure than things like SMS one-time codes," Google noted.

Passkeys, once created, are locally stored on the device, and are not shared with any other party. This also obviates the need for setting up two-factor authentication, as it proves that "you have access to your device and are able to unlock it."

Users also have the choice of creating passkeys for every device they use to log in to their Google Account. That said, a passkey created on one device will be synced to all of the user's other devices running the same operating system platform (i.e., Android, iOS/macOS, or Windows), provided they are signed in to the same account. Viewed in that light, passkeys are not truly interoperable.

It's worth pointing out that both Google Password Manager and iCloud Keychain use end-to-end encryption to keep the passkeys private, while syncing prevents users from getting locked out should they lose access to their devices and makes it easier to upgrade from one device to another.

Passwordless Secure Sign-In with Passkeys
Additionally, users can sign in on a new device or temporarily use a different device by selecting the option to "use a passkey from another device," which then uses the phone's screen lock and proximity to approve a one-time sign-in.

"The device then verifies that your phone is in proximity using a small anonymous Bluetooth message and sets up an end-to-end encrypted connection to the phone through the internet," the company explained.

"The phone uses this connection to deliver your one-time passkey signature, which requires your approval and the biometric or screen lock step on the phone. Neither the passkey itself nor the screen lock information is sent to the new device."

While this may be the "beginning of the end of the password," the company said it intends to continue to support existing login methods like passwords and two-factor authentication for the foreseeable future.

Google is also recommending that users do not create passkeys on devices that are shared with others, a move that could effectively undermine all its security protections.


Apple and Google Join Forces to Stop Unauthorized Tracking Alert System
3.5.23  Security  The Hacker News
Tracking Alert System
Apple and Google have teamed up to work on a draft industry-wide specification that's designed to tackle safety risks and alert users when they are being tracked without their knowledge or permission using devices like AirTags.

"The first-of-its-kind specification will allow Bluetooth location-tracking devices to be compatible with unauthorized tracking detection and alerts across Android and iOS platforms," the companies said in a joint statement.

While these trackers are primarily designed to keep tabs on personal belongings like keys, wallets, luggage, and other items, such devices have also been abused by bad actors for criminal or nefarious purposes, including instances of stalking, harassment, and theft.

The goal is to standardize the alerting mechanisms and minimize opportunities for misuse across Bluetooth location-tracking devices from different vendors. To that end, Samsung, Tile, Chipolo, eufy Security, and Pebblebee have all come on board.

In doing so, tracking devices manufactured by the companies are required to adhere to a set of instructions and recommendations as well as notify users of any unauthorized tracking on iOS and Android devices.

"Formalizing a set of best practices for manufacturers will allow for scalable compatibility with unwanted tracking detection technologies on various smartphone platforms and improve privacy and security for individuals," according to the spec.

"Unwanted tracking detection can both detect and alert individuals that a location tracker separated from the owner's device is traveling with them, as well as provide means to find and disable the tracker."

A crucial aspect of the proposed specification is the use of a pairing registry, which contains verifiable (but obfuscated) identity information of the owner of an accessory (e.g., phone number or email address) along with the serial number of the accessory.

Besides retaining the data for a minimum of 25 days after the device has been unpaired (at which point it's deleted), the pairing registry is made available to law enforcement upon submission of a valid request.

In addition, the specification mandates that trackers transition from a "near-owner" mode to a "separated" mode should they no longer be near an owner's paired device for more than 30 minutes.

The companies are soliciting feedback from interested parties, following which a production implementation of the specification for unwanted tracking alerts is expected to be released sometime by the end of the year on both mobile ecosystems.

The last time Apple and Google came together, it was to devise a system-level platform that utilizes Bluetooth low energy (BLE) beacons to allow for contact tracing during the COVID-19 pandemic without using location data.


ChatGPT is Back in Italy After Addressing Data Privacy Concerns
30.4.23  Security  The Hacker News
ChatGPT
OpenAI, the company behind ChatGPT, has officially made a return to Italy after meeting the data protection authority's demands ahead of the April 30, 2023, deadline.

The development was first reported by the Associated Press. OpenAI's CEO, Sam Altman, tweeted, "we're excited ChatGPT is available in [Italy] again!"

The reinstatement comes following Garante's decision to temporarily block access to the popular AI chatbot service in Italy on March 31, 2023, over concerns that its practices are in violation of data protection laws in the region.

Generative AI systems like ChatGPT and Google Bard primarily rely on huge amounts of information freely available on the internet, as well as the data their users provide over the course of their interactions.

OpenAI, which published a new FAQ, said it filters and removes information such as hate speech, adult content, sites that primarily aggregate personal information, and spam.

It also emphasized that it doesn't "actively seek out personal information to train our models" and that it "will not use any personal information in training data to build profiles about people, to contact them, to advertise to them, to try to sell them anything, or to sell the information itself."

That said, the company acknowledged that ChatGPT responses may include personal information about public figures and other individuals whose details are accessible on the public internet.

European users who wish to object to such processing of their personal information can do so by filling out an online form, and even exercise their right to correct, restrict, delete, or transfer their personal information contained within its training dataset.
The Garante, in a related announcement, said OpenAI also agreed to include an option to verify users' ages to confirm they are above 18 prior to gaining access to ChatGPT, or, alternatively, have obtained the consent of parents or guardians if aged between 13 and 18.

OpenAI is further expected to implement a more robust age verification system to screen minors from accessing the service, with the watchdog noting that it will continue its "fact-finding activities regarding OpenAI" as part of a task force set up by the European Data Protection Board (EDPB).

The move also follows OpenAI's introduction of a new privacy setting that allows users to turn off chat history as well as an export option to access the kinds of information stored by ChatGPT.


Google Cloud Introduces Security AI Workbench for Faster Threat Detection and Analysis
25.4.23  Security  The Hacker News
Threat Detection and Analysis
Google's cloud division is following in the footsteps of Microsoft with the launch of Security AI Workbench, a suite that leverages generative AI models to gain better visibility into the threat landscape.

Powering the cybersecurity suite is Sec-PaLM, a specialized large language model (LLM) that's "fine-tuned for security use cases."

The idea is to take advantage of the latest advances in AI to augment point-in-time incident analysis, threat detection, and analytics to counter and prevent new infections by delivering intelligence that's trusted, relevant, and actionable.

To that end, the Security AI Workbench spans a wide range of new AI-powered tools, including VirusTotal Code Insight and Mandiant Breach Analytics for Chronicle, to analyze potentially malicious scripts and alert customers of active breaches in their environments.

Users, like with Microsoft's GPT-4-based Security Copilot, can "conversationally search, analyze, and investigate security data" with an aim to reduce mean time-to-respond as well as quickly determine the full scope of events.

On the other hand, the Code Insight feature in VirusTotal is designed to generate natural language summaries of code snippets so as to detect and mitigate potential threats. It can also be used to flag false negatives and clear false positives.

Another key offering is Security Command Center AI, which utilizes Sec-PaLM to provide operators with "near-instant analysis of findings and possible attack paths" as well as impacted assets and recommended mitigations.

Google is also making use of machine learning models to detect and respond to API abuse and business logic attacks, wherein an adversary weaponizes a legitimate functionality to achieve a nefarious goal without triggering a security alert.
"Because Security AI Workbench is built on Google Cloud's Vertex AI infrastructure, customers control their data with enterprise-grade capabilities such as data isolation, data protection, sovereignty, and compliance support," Google Cloud's Sunil Potti said.

The development comes days after Google announced the creation of a new unit called Google DeepMind that brings together its AI research groups from DeepMind and the Brain team from Google Research to "build more capable systems more safely and responsibly."

News of Google's Security AI Workbench also follows GitLab's plans to integrate AI into its platform to help prevent developers from leaking access tokens and to avoid false positives during security testing.


Google Authenticator App Gets Cloud Backup Feature for TOTP Codes
25.4.23  Security  The Hacker News
Google Authenticator
Search giant Google on Monday unveiled a major update to its 12-year-old Authenticator app for Android and iOS with an account synchronization option that allows users to back up their time-based one-time passwords (TOTPs) to the cloud.

"This change means users are better protected from lockout and that services can rely on users retaining access, increasing both convenience and security," Google's Christiaan Brand said.

The update, which also brings a new icon to the two-factor authentication (2FA) app, finally brings it in line with Apple's iCloud Keychain and addresses a long-standing complaint that the app is tied to the device on which it's installed, making it a hassle when switching between phones.

Even worse, as Google puts it, users who lose access to their devices completely "lost their ability to sign in to any service on which they'd set up 2FA using Authenticator."

The cloud sync feature is optional, meaning users can opt to use the Authenticator app without linking it to a Google account.

That said, it's always worth keeping in mind the pitfalls associated with cloud backups, as a malicious actor with access to a Google account could leverage it to break into other online services.
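For context, the codes the app backs up are standard time-based one-time passwords as defined in RFC 6238. A minimal sketch of the algorithm follows; the secret used in the example is the RFC's published test value, not anything Google-specific:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, period=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // period                # number of 30-second time steps
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59, 8 digits.
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # "94287082"
```

Because the code is derived solely from the shared secret and the current time, backing up the secret to the cloud is all that's needed to regenerate codes on a new device, which is also why an attacker with account access gains the same ability.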

The development comes days after Swiss privacy-focused company Proton, which surpassed 100 million active accounts last week, unveiled an end-to-end encrypted password manager solution called Proton Pass.
The open source and publicly auditable tool, which makes use of the bcrypt password hashing function and a hardened version of the Secure Remote Password (SRP) protocol for authentication, also comes with 2FA integration.
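bcrypt itself is not in Python's standard library, but the salted password-hashing idea it embodies can be sketched with the stdlib's scrypt KDF instead. To be clear, this is a stand-in illustration with made-up parameters, not Proton's implementation:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a salted digest of `password`.

    scrypt stands in for bcrypt here, since bcrypt needs a third-party
    package; the cost parameters below are illustrative only.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest
```

Verification recomputes the digest with the stored salt and compares: the same password and salt always yield the same digest, while the salt ensures identical passwords hash differently across accounts.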


Google Launches New Cybersecurity Initiatives to Strengthen Vulnerability Management
14.4.23  Security  The Hacker News
Google on Thursday outlined a set of initiatives aimed at improving the vulnerability management ecosystem and establishing greater transparency measures around exploitation.

"While the notoriety of zero-day vulnerabilities typically makes headlines, risks remain even after they're known and fixed, which is the real story," the company said in an announcement. "Those risks span everything from lag time in OEM adoption, patch testing pain points, end user update issues and more."

Security threats also stem from incomplete patches applied by vendors, with a chunk of the zero-days exploited in the wild turning out to be variants of previously patched vulnerabilities.

Mitigating such risks requires addressing the root cause of the vulnerabilities and prioritizing modern secure software development practices to eliminate entire classes of threats and block potential attack avenues.

Taking these factors into consideration, Google said it's forming a Hacking Policy Council along with Bugcrowd, HackerOne, Intel, Intigriti, and Luta Security to "ensure new policies and regulations support best practices for vulnerability management and disclosure."

The company further emphasized that it's committing to publicly disclose incidents when it finds evidence of active exploitation of vulnerabilities across its product portfolio.

Lastly, the tech giant said it's instituting a Security Research Legal Defense Fund to provide seed funding for legal representation for individuals engaging in good-faith research to find and report vulnerabilities in a manner that advances cybersecurity.

The goal, the company noted, is to escape the "doom loop" of vulnerability patching and threat mitigation by "focusing on the fundamentals of secure software development, good patch hygiene, and designing for security and ease of patching from the start."

Google's latest security push speaks to the need to look beyond zero-days by making exploitation difficult in the first place, driving timely patch adoption for known vulnerabilities, setting up policies to address product life cycles, and alerting users when products are under active exploitation.

It also serves to highlight the importance of applying secure-by-design principles during all phases of the software development lifecycle.

The disclosure comes as Google launched a free service, the deps.dev API, in a bid to secure the software supply chain by providing access to security metadata and dependency information for over 50 million versions of five million open source packages across the Go, Maven, PyPI, npm, and Cargo ecosystems.
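The service is exposed as a plain HTTPS API; the sketch below shows how a request URL for a package's metadata might be formed. The v3 path shape reflects the public documentation at the time of writing, but should be verified against the current reference before use:

```python
from urllib.parse import quote

API_ROOT = "https://api.deps.dev/v3"  # endpoint shape per the public docs

def package_url(system, name):
    """Build the deps.dev endpoint for a package's metadata.

    `system` is one of the supported ecosystems (go, maven, pypi, npm,
    cargo); scoped names such as "@colors/colors" must be percent-encoded.
    """
    return f"{API_ROOT}/systems/{system}/packages/{quote(name, safe='')}"

package_url("npm", "@colors/colors")
# -> "https://api.deps.dev/v3/systems/npm/packages/%40colors%2Fcolors"
```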

In a related development, Google's cloud division has also announced the general availability of the Assured Open Source Software (Assured OSS) service for Java and Python ecosystems.


ChatGPT Security: OpenAI's Bug Bounty Program Offers Up to $20,000 Prizes
14.4.23  Security  The Hacker News
OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are "safe and secure."

To that end, it has partnered with the crowdsourced security platform Bugcrowd for independent researchers to report vulnerabilities discovered in its product in exchange for rewards ranging from "$200 for low-severity findings to up to $20,000 for exceptional discoveries."

It's worth noting that the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that "addressing these issues often involves substantial research and a broader approach."

Other prohibited categories are denial-of-service (DoS) attacks, brute-forcing OpenAI APIs, and demonstrations that aim to destroy data or gain unauthorized access to sensitive information beyond what's necessary to highlight the problem.

"Please note that authorized testing does not exempt you from all of OpenAI's terms of service," the company cautioned. "Abusing the service may result in rate limiting, blocking, or banning."

What's in scope, however, are defects in OpenAI APIs, ChatGPT (including plugins), third-party integrations, public exposure of OpenAI API keys, and any of the domains operated by the company.

The development comes in response to OpenAI patching account takeover and data exposure flaws in the platform, which prompted Italian data protection regulators to take a closer look at the service.

Italian Data Protection Authority Proposes Measures to Lift ChatGPT Ban#
The Garante, which imposed a temporary ban on ChatGPT on March 31, 2023, has since outlined a set of measures the Microsoft-backed firm will have to agree to implement by the end of the month in order for the suspension to be lifted.

"OpenAI will have to draft and make available, on its website, an information notice describing the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects," the Garante said.

Additionally, the information notice should be readily available to Italian users before they sign up for the service. Users will also be required to declare they are over the age of 18.

OpenAI has also been ordered to implement an age verification system by September 30, 2023, to filter out users aged below 13 and have provisions in place to seek parental consent for users aged 13 to 18. The company has been given until May 31 to submit a plan for the age-gating system.

As part of efforts to exercise data rights, both users and non-users of the service should be able to request "rectification of their personal data" in cases where it's incorrectly generated by the service, or alternatively, erasure of the data if corrections are technically infeasible.

Non-users, per the Garante, should further be provided with easily accessible tools to object to their personal data being processed by OpenAI's algorithms. The company is also expected to run an advertising campaign by May 15, 2023, to "inform individuals on use of their personal data for training algorithms."

Update: Spain Opens Probe into OpenAI ChatGPT#
The Spanish Data Protection Authority (AEPD), on April 13, 2023, said it has initiated a preliminary investigation into OpenAI's ChatGPT service for suspected breaches of E.U. data protection laws.

The European Data Protection Board (EDPB), in a related announcement, said it's launching a "dedicated task force to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities."


Microsoft Tightens OneNote Security by Auto-Blocking 120 Risky File Extensions
4.4.23  Security  The Hacker News
Microsoft has announced plans to automatically block embedded files with "dangerous extensions" in OneNote following reports that the note-taking service is being increasingly abused for malware delivery.

Up until now, users were shown a dialog warning them that opening such attachments could harm their computer and data, but it was possible to dismiss the prompt and open the files.

That's about to change. Microsoft said it intends to prevent users from directly opening an embedded file with a dangerous extension, instead displaying the message: "Your administrator has blocked your ability to open this file type in OneNote."

The update is expected to start rolling out with Version 2304 later this month and only impacts OneNote for Microsoft 365 on devices running Windows. It does not affect OneNote on other platforms (macOS, Android, and iOS) or the versions available on the web and for Windows 10.

"By default, OneNote blocks the same extensions that Outlook, Word, Excel, and PowerPoint do," Microsoft said. "Malicious scripts and executables can cause harm if clicked by the user. If extensions are added to this allow list, they can make OneNote and other applications, such as Word and Excel, less secure."

The list of 120 extensions is as follows -

.ade, .adp, .app, .application, .appref-ms, .asp, .aspx, .asx, .bas, .bat, .bgi, .cab, .cer, .chm, .cmd, .cnt, .com, .cpl, .crt, .csh, .der, .diagcab, .exe, .fxp, .gadget, .grp, .hlp, .hpj, .hta, .htc, .inf, .ins, .iso, .isp, .its, .jar, .jnlp, .js, .jse, .ksh, .lnk, .mad, .maf, .mag, .mam, .maq, .mar, .mas, .mat, .mau, .mav, .maw, .mcf, .mda, .mdb, .mde, .mdt, .mdw, .mdz, .msc, .msh, .msh1, .msh2, .mshxml, .msh1xml, .msh2xml, .msi, .msp, .mst, .msu, .ops, .osd, .pcd, .pif, .pl, .plg, .prf, .prg, .printerexport, .ps1, .ps1xml, .ps2, .ps2xml, .psc1, .psc2, .psd1, .psdm1, .pst, .py, .pyc, .pyo, .pyw, .pyz, .pyzw, .reg, .scf, .scr, .sct, .shb, .shs, .theme, .tmp, .url, .vb, .vbe, .vbp, .vbs, .vhd, .vhdx, .vsmacros, .vsw, .webpnp, .website, .ws, .wsc, .wsf, .wsh, .xbap, .xll, and .xnk

Users who opt to still open the embedded file can do so by first saving the file locally to their device and then opening it from there.
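Enforcing a blocklist like this typically amounts to a case-insensitive check of the file's final extension. A minimal sketch follows; the `BLOCKED` set is truncated for brevity, and this is an illustration of the technique, not Microsoft's implementation:

```python
from pathlib import PurePosixPath

# Truncated sample of the 120 blocked extensions listed above.
BLOCKED = {".exe", ".bat", ".cmd", ".js", ".vbs",
           ".hta", ".iso", ".lnk", ".chm", ".ps1"}

def is_blocked(filename):
    """Case-insensitive check of the file's final extension against the list."""
    return PurePosixPath(filename.lower()).suffix in BLOCKED

is_blocked("invoice.Exe")  # True
is_blocked("notes.txt")    # False
```

Note that such checks key off the final extension only, which is one reason defenders also inspect file content rather than trusting names alone.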

The development comes as Microsoft's decision to block macros by default in Office files downloaded from the internet spurred threat actors to switch to OneNote attachments to deliver malware via phishing attacks.

According to cybersecurity firm Trellix, the number of malicious OneNote samples has been gradually increasing since December 2022, before ramping up in February 2023.


Microsoft Introduces GPT-4 AI-Powered Security Copilot Tool to Empower Defenders
29.3.23  Security  The Hacker News
Security Copilot Tool
Microsoft on Tuesday unveiled Security Copilot in limited preview, marking its continued quest to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale."

Powered by OpenAI's GPT-4 generative AI and its own security-specific model, it's billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals, and assess risk exposure.

To that end, it collates insights and data from various products like Microsoft Sentinel, Defender, and Intune to help security teams better understand their environment; determine if they are susceptible to known vulnerabilities and exploits; identify ongoing attacks, their scale, and receive remediation instructions; and summarize incidents.

Users, for instance, can ask Security Copilot about suspicious user logins over a specific time period, or even employ it to create a PowerPoint presentation outlining an incident and its attack chain. It can also accept files, URLs, and code snippets for analysis.

Redmond said its proprietary security-specific model is informed by more than 65 trillion daily signals, emphasizing that the tool is privacy-compliant and customer data "is not used to train the foundation AI models."

"Today the odds remain stacked against cybersecurity professionals," Vasu Jakkal, Microsoft's corporate vice president of Security, Compliance, Identity, and Management, pointed out.

"Too often, they fight an asymmetric battle against prolific, relentless and sophisticated attackers. To protect their organizations, defenders must respond to threats that are often hidden among noise."

Security Copilot is the latest AI push from Microsoft, which has been steadily integrating generative AI features into its software offerings over the past two months, including Bing, Edge browser, GitHub, LinkedIn, and Skype.

The development also comes weeks after the tech giant launched Microsoft 365 Copilot, integrating AI capabilities within its suite of productivity and enterprise apps such as Office, Outlook, and Teams.