U.S., U.K., and Global Partners Release Secure AI System Development Guidelines
27.11.23  AI  The Hacker News

The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems.

"The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said.

The goal is to raise the level of cyber security in AI and help ensure that the technology is designed, developed, and deployed in a secure manner, the National Cyber Security Centre (NCSC) added.

The guidelines also build upon the U.S. government's ongoing efforts to manage the risks posed by AI by ensuring that new tools are adequately tested before public release, that guardrails are in place to address societal harms such as bias, discrimination, and privacy concerns, and that robust methods are established for consumers to identify AI-generated material.

The commitments also require companies to facilitate third-party discovery and reporting of vulnerabilities in their AI systems through a bug bounty program so that flaws can be found and fixed swiftly.

The latest guidelines "help developers ensure that cyber security is both an essential precondition of AI system safety and integral to the development process from the outset and throughout, known as a 'secure by design' approach," NCSC said.

This encompasses secure design, secure development, secure deployment, and secure operation and maintenance, covering all significant areas within the AI system development life cycle. Organizations are required to model the threats to their systems and to safeguard their supply chains and infrastructure.

The aim, the agencies noted, is also to combat adversarial attacks targeting AI and machine learning (ML) systems that seek to cause unintended behavior in various ways, including affecting a model's classification, allowing users to perform unauthorized actions, and extracting sensitive information.

"There are many ways to achieve these effects, such as prompt injection attacks in the large language model (LLM) domain, or deliberately corrupting the training data or user feedback (known as 'data poisoning')," NCSC noted.


New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks
26.7.23  AI  The Hacker News
Following in the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels.

"This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan said in a report published Tuesday.

The cybersecurity firm said the offering has been circulating since at least July 22, 2023, for a subscription cost of $200 a month (or $1,000 for six months and $1,700 for a year).

"If your [sic] looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries then look no further!," claims the actor, who goes by the online alias CanadianKingpin.

The author also states that the tool could be used to write malicious code, create undetectable malware, find leaks and vulnerabilities, and that there have been more than 3,000 confirmed sales and reviews. The exact large language model (LLM) used to develop the system is currently not known.

The development comes as threat actors are increasingly capitalizing on the advent of OpenAI ChatGPT-like AI tools to concoct new adversarial variants that are explicitly engineered to promote all kinds of cybercriminal activity without any restrictions.
Such tools could act as a launchpad for novice actors looking to mount convincing phishing and business email compromise (BEC) attacks at scale, leading to the theft of sensitive information and unauthorized wire payments.

"While organizations can create ChatGPT (and other tools) with ethical safeguards, it isn't a difficult feat to reimplement the same technology without those safeguards," Krishnan noted.

"Implementing a defense-in-depth strategy with all the security telemetry available for fast analytics has become all the more essential to finding these fast-moving threats before a phishing email can turn into ransomware or data exfiltration."