| DATE | NAME | Info | CATEG. | WEB |
| 7.2.26 | AI Agent Identity Management: A New Security Control Plane for CISOs | Autonomous AI agents are creating a new identity blind spot as they operate outside traditional IAM controls. Token Security shows why managing the full lifecycle of AI agent identities is becoming a critical CISO priority. | AI | |
| 7.2.26 | UK privacy watchdog probes Grok over AI-generated sexual images | The United Kingdom's data protection authority launched a formal investigation into X and its Irish subsidiary over reports that the Grok AI assistant was used to generate nonconsensual sexual images. | AI | |
| 7.2.26 | French prosecutors raid X offices, summon Musk over Grok deepfakes | French prosecutors raided X's offices in Paris on Tuesday as part of a criminal investigation into the platform's Grok AI tool, widely used to generate sexually explicit images. | AI | |
| 7.2.26 | Malicious MoltBot skills used to push password-stealing malware | More than 230 malicious packages for the personal AI assistant OpenClaw (formerly known as Moltbot and ClawdBot) have been published in less than a week on the tool's official registry and on GitHub. | AI | |
| 7.2.26 | U.S. convicts ex-Google engineer for sending AI tech data to China | A U.S. federal jury has convicted Linwei Ding, a former software engineer at Google, for stealing AI supercomputer data from his employer and secretly sharing it with Chinese tech firms. | AI | |
| 6.2.26 | Claude Opus 4.6 Finds 500+ High-Severity Flaws Across Major Open-Source Libraries | Artificial intelligence (AI) company Anthropic revealed that its latest large language model (LLM), Claude Opus 4.6, has found more than 500 previously unknown high-severity flaws across major open-source libraries. | AI | The Hacker News |
| 3.2.26 | Viral Moltbot AI assistant raises concerns over data security | Security researchers are warning of insecure deployments in enterprise environments of the Moltbot (formerly Clawdbot) AI assistant, which can lead to leaking API keys, OAuth tokens, conversation history, and credentials. | AI | |
| 3.2.26 | AI Is Rewriting Compliance Controls and CISOs Must Take Notice | AI agents are now executing regulated actions, reshaping how compliance controls actually work. Token Security explains why CISOs must rethink identity, access, and auditability as AI becomes a digital employee. | AI | |
| 3.2.26 | Hackers hijack exposed LLM endpoints in Bizarre Bazaar operation | A malicious campaign is actively targeting exposed LLM (Large Language Model) service endpoints to commercialize unauthorized access to AI infrastructure. | AI | |
| 3.2.26 | Mozilla Adds One-Click Option to Disable Generative AI Features in Firefox | Mozilla on Monday announced a new controls section in its Firefox desktop browser settings that allows users to completely turn off generative artificial intelligence (AI) features. | AI | The Hacker News |
| 31.1.26 | Researchers Uncover Chrome Extensions Abusing Affiliate Links and Stealing ChatGPT Access | Cybersecurity researchers have discovered malicious Google Chrome extensions that come with capabilities to hijack affiliate links, steal data, and hijack ChatGPT access. | AI | The Hacker News |
| 30.1.26 | Researchers Find 175,000 Publicly Exposed Ollama AI Servers Across 130 Countries | A new joint investigation by SentinelOne's SentinelLABS and Censys has revealed more than 175,000 publicly exposed servers running the open-source artificial intelligence (AI) deployment tool Ollama across 130 countries. | AI | The Hacker News |
| 29.1.26 | Fake Moltbot AI Coding Assistant on VS Code Marketplace Drops Malware | Cybersecurity researchers have flagged a new malicious Microsoft Visual Studio Code (VS Code) extension for Moltbot (formerly Clawdbot) on the VS Code Marketplace that drops malware. | AI | The Hacker News |
| 27.1.26 | Malicious VS Code AI Extensions with 1.5 Million Installs Steal Developer Source Code | Cybersecurity researchers have discovered two malicious Microsoft Visual Studio Code (VS Code) extensions that are advertised as artificial intelligence (AI) tools but steal developer source code. | AI | The Hacker News |
| 26.1.26 | Winning Against AI-Based Attacks Requires a Combined Defensive Approach | If there's a constant in cybersecurity, it's that adversaries are always innovating. The rise of offensive AI is transforming attack strategies and demands a combined defensive approach in response. | AI | The Hacker News |
| 26.1.26 | Konni Hackers Deploy AI-Generated PowerShell Backdoor Against Blockchain Developers | The North Korean threat actor known as Konni has been observed using PowerShell malware generated using artificial intelligence (AI) tools to target blockchain developers. | AI | The Hacker News |
| 25.1.26 | Malicious AI extensions on VSCode Marketplace steal developer data | Two malicious extensions in Microsoft's Visual Studio Code (VSCode) Marketplace, collectively installed 1.5 million times, exfiltrate developer data to China-based servers. | AI | |
| 25.1.26 | What an AI-Written Honeypot Taught Us About Trusting Machines | AI-generated code can introduce subtle security flaws when teams over-trust automated output. Intruder shows how an AI-written honeypot introduced hidden vulnerabilities that were exploited in attacks. | AI | |
| 25.1.26 | Curl ending bug bounty program after flood of AI slop reports | The developer of the popular curl command-line utility and library announced that the project will end its HackerOne security bug bounty program at the end of this month, after being overwhelmed by low-quality AI-generated vulnerability reports. | AI | |
| 25.1.26 | Microsoft updates Notepad and Paint with more AI features | Microsoft is rolling out new artificial intelligence features with the latest updates to the Notepad and Paint apps for Windows 11 Insiders. | AI | |
| 25.1.26 | Chainlit AI framework bugs let hackers breach cloud environments | Two high-severity vulnerabilities in Chainlit, a popular open-source framework for building conversational AI applications, allow attackers to read any file on the server and leak sensitive information. | AI | |
| 25.1.26 | Gemini AI assistant tricked into leaking Google Calendar data | Using only natural language instructions, researchers were able to bypass Google Gemini's defenses against malicious prompt injection and create misleading events to leak private Calendar data. | AI | |
| 22.1.26 | Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs | Security vulnerabilities were uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal data via file read and SSRF bugs. | AI | The Hacker News |
| 22.1.26 | Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution | A set of three security vulnerabilities has been disclosed in mcp-server-git, the official Git Model Context Protocol (MCP) server maintained by Anthropic, enabling file access and code execution. | AI | The Hacker News |
| 20.1.26 | Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites | Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to expose private Calendar data via malicious invites. | AI | The Hacker News |
| 18.1.26 | OpenAI to Show Ads in ChatGPT for Logged-In U.S. Adults on Free and Go Plans | OpenAI on Friday said it would start showing ads in ChatGPT to logged-in adult U.S. users in both the free and ChatGPT Go tiers in the coming weeks. | AI | The Hacker News |
| 14.1.26 | ServiceNow Patches Critical AI Platform Flaw Allowing Unauthenticated User Impersonation | ServiceNow has disclosed details of a now-patched critical security flaw impacting its ServiceNow artificial intelligence (AI) Platform that could enable an unauthenticated user to impersonate other users. | AI | The Hacker News |
| 11.1.26 | Hackers target misconfigured proxies to access paid LLM services | Threat actors are systematically hunting for misconfigured proxy servers that could provide access to commercial large language model (LLM) services. | AI | |
| 10.1.26 | New GoBruteforcer attack wave targets crypto, blockchain projects | A new wave of GoBruteforcer botnet malware attacks is targeting databases of cryptocurrency and blockchain projects on exposed servers believed to be configured using AI-generated examples. | AI | |
| 10.1.26 | In 2026, Hackers Want AI: Threat Intel on Vibe Hacking & HackGPT | Cybercriminals are increasingly using AI to lower the barrier to entry for fraud and hacking, shifting from skill-based to AI-assisted attacks known as "vibe hacking." Flare examines how underground forums promote AI tools, jailbreak techniques, and so-called "Hacking-GPT" services that promise ease rather than technical mastery. | AI | |
| 9.1.26 | How generative AI accelerates identity attacks against Active Directory | Generative AI is accelerating password attacks against Active Directory, making credential abuse faster and more effective. Specops Software explains how AI-driven cracking techniques exploit weak and predictable AD passwords. | AI | |
| 9.1.26 | Are Copilot prompt injection flaws vulnerabilities or AI limits? | Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues raised by a security engineer in its Copilot AI assistant constitute security vulnerabilities. The development highlights a growing divide between how vendors and researchers define risk in generative AI systems. | AI | |
| 9.1.26 | Agentic AI Is an Identity Problem and CISOs Will Be Accountable for the Outcome | As agentic AI adoption accelerates, identity is emerging as the primary security challenge. Token Security explains why AI agents behave like a new class of identity and why CISOs must manage their access, lifecycle, and risk. | AI | |
| 8.1.26 | OpenAI Launches ChatGPT Health with Isolated, Encrypted Health Data Controls | Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space that allows users to have conversations with the chatbot about their health, with isolated, encrypted health data controls. | AI | The Hacker News |
| 7.1.26 | Two Chrome Extensions Caught Stealing ChatGPT and DeepSeek Chats from 900,000 Users | Cybersecurity researchers have discovered two new malicious extensions on the Chrome Web Store that are designed to exfiltrate OpenAI ChatGPT and DeepSeek conversations from some 900,000 users. | AI | The Hacker News |
| 3.1.26 | The Real-World Attacks Behind OWASP Agentic AI Top 10 | OWASP's new Agentic AI Top 10 highlights real-world attacks already targeting autonomous AI systems, from goal hijacking to malicious MCP servers. Koi Security breaks down real-world incidents behind multiple categories, including two cases cited by OWASP, showing how agent tools and runtime behavior are being abused. | AI | |