AI blog - 2026  2025  2024


DATE | NAME | Info | CATEG. | WEB

1.2.26 Generative AI and cybersecurity: What Sophos experts expect in 2026 AI has dominated cybersecurity headlines for years, but as we enter 2026, the conversation is shifting from hype to hard realities. Across incident response, threat intelligence, and security operations, Sophos experts see clearer signals of where AI is truly making an impact. For IT teams already stretched thin, this isn’t theoretical — it’s reshaping daily decisions. AI blog SOPHOS
1.2.26 The Next Frontier of Runtime Assembly Attacks: Leveraging LLMs to Generate Phishing JavaScript in Real Time Imagine visiting a webpage that looks perfectly safe. It has no malicious code, no suspicious links. Yet, within seconds, it transforms into a personalized phishing page. AI blog Palo Alto
1.2.26 Children and chatbots: What parents should know As children turn to AI chatbots for answers, advice, and companionship, questions emerge about their safety, privacy, and emotional development AI blog Eset
24.1.26 Watering Hole Attack Targets EmEditor Users with Information-Stealing Malware TrendAI™ Research provides a technical analysis of a compromised EmEditor installer used to deliver multistage malware that performs a range of malicious actions. AI blog Trend Micro
24.1.26 Introducing ÆSIR: Finding Zero-Day Vulnerabilities at the Speed of AI TrendAI™’s ÆSIR platform combines AI automation with expert oversight to discover zero-day vulnerabilities in AI infrastructure – 21 CVEs across NVIDIA, Tencent, and MLflow since mid-2025. AI blog Trend Micro
24.1.26 KONNI Adopts AI to Generate PowerShell Backdoors Check Point Research (CPR) is tracking a phishing campaign linked to a North Korea–aligned threat actor known as KONNI. AI blog CHECKPOINT

17.1.26 Remote Code Execution With Modern AI/ML Formats and Libraries We identified vulnerabilities in three open-source artificial intelligence/machine learning (AI/ML) Python libraries published by Apple, Salesforce and NVIDIA on their GitHub repositories. Vulnerable versions of these libraries allow for remote code execution (RCE) when a model file with malicious metadata is loaded. AI blog Palo Alto
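The advisory above concerns specific (unnamed here) deserialization flaws in those three libraries; as a generic, hedged illustration of the bug class, Python's pickle format, historically common for ML checkpoints, executes attacker-chosen callables during deserialization. The `record` helper and the payload string below are invented for the demo:

```python
import pickle

executed = []

def record(msg):
    # Stand-in for attacker code; a real payload would call os.system.
    executed.append(msg)

class MaliciousMeta:
    # __reduce__ tells pickle how to rebuild the object on load:
    # "call record(...)". The callable runs the moment the file is loaded.
    def __reduce__(self):
        return (record, ("attacker code ran during model load",))

# Attacker crafts a "model file" whose metadata carries the payload.
model_blob = pickle.dumps({"weights": [0.1, 0.2], "meta": MaliciousMeta()})

# The victim merely loads the model file...
pickle.loads(model_blob)
print(executed)  # ['attacker code ran during model load']
```

This is why loading untrusted model files is treated as code execution, and why formats that store only tensors (e.g. safetensors) are preferred for sharing weights.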
17.1.26 When AI Gets Bullied: How Agentic Attacks Are Replaying Human Social Engineering December closed out 2025 with a clear signal that AI risk, capability, and governance are evolving faster than ever. Updated CASI and ARS leaderboards showed a notable shift at the top, with GPT-5.2 delivering an 11-point security improvement over GPT-5.1, while NVIDIA’s latest model demonstrated that strong performance and efficiency are increasingly attainable outside the traditional hyperscaler ecosystem. AI blog F5
10.1.26 Winning the AI War: Why Preemptive Cyber Defense is the Only Viable Countermeasure for CISOs The escalation of AI-driven cyber threats has fundamentally broken the traditional security lifecycle. For decades, the industry has operated on a reactive cadence: an attack occurs, indicators are gathered, and defenses are updated. This model assumes that defenders have time to react. AI blog Silent Push
10.1.26 The Truman Show Scam: Trapped in an AI-Generated Reality Executive Summary The OPCOPRO “Truman Show” operation is a fully synthetic, AI‑powered investment scam that ... AI blog CHECKPOINT
10.1.26 Securing Vibe Coding Tools: Scaling Productivity Without Scaling Risk The promise of AI-assisted development, or “vibe coding,” is undeniable: unprecedented speed and productivity for development teams. In a landscape defined by complex cloud-native architectures and intense demand for new software, this force multiplier is rapidly becoming standard practice. AI blog Palo Alto

22.12.24 Link Trap: GenAI Prompt Injection Attack Prompt injection exploits vulnerabilities in generative AI to manipulate its behavior, even without extensive permissions. This attack can expose sensitive data, making awareness and preventive measures essential. Learn how it works and how to stay protected. AI blog Trend Micro

21.12.24 Philip Torr: AI to the people | Starmus Highlights We’re on the cusp of a technological revolution that is poised to transform our lives – and we hold the power to shape its impact AI blog Eset

2.11.24 Deceptive Delight: Jailbreak LLMs Through Camouflage and Distraction This article introduces a simple and straightforward technique for jailbreaking that we call Deceptive Delight. Deceptive Delight is a multi-turn technique that engages large language models (LLMs) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content. AI blog Palo Alto

2.11.24 How LLMs could help defenders write better and faster detection Can LLM tools actually help defenders in the cybersecurity industry write more effective detection content? Read the full research. AI blog Cisco Blog

28.9.24 Evolved Exploits Call for AI-Driven ASRM + XDR AI-driven insights for managing emerging threats and minimizing organizational risk AI blog Trend Micro

21.9.24 Identifying Rogue AI This is the third blog in an ongoing series on Rogue AI. Keep following for more technical guidance, case studies, and insights. AI blog Trend Micro

21.9.24 AI security bubble already springing leaks Artificial intelligence is just a spoke in the wheel of security – an important spoke but, alas, only one AI blog Eset

31.8.24 AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI This issue of AI Pulse is all about agentic AI: what it is, how it works, and why security needs to be baked in from the start to prevent agentic AI systems from going rogue once they’re deployed. AI blog Trend Micro

31.8.24 Unmasking ViperSoftX: In-Depth Defense Strategies Against AutoIt-Powered Threats Explore in-depth defense strategies against ViperSoftX with the Trellix suite, and unpack why AutoIt is an increasingly popular tool for malware authors. AI blog Trellix

24.8.24 Confidence in GenAI: The Zero Trust Approach Enterprises have gone all-in on GenAI, but the more they depend on AI models, the more risks they face. Trend Vision One™ – Zero Trust Secure Access (ZTSA) – AI Service Access bridges the gap between access control and GenAI services to protect the user journey. AI blog Trend Micro

24.8.24 Securing the Power of AI, Wherever You Need It Explore how generative AI is transforming cybersecurity and enterprise resilience AI blog Trend Micro

24.8.24 Rogue AI is the Future of Cyber Threats This is the first blog in a series on Rogue AI. Later articles will include technical guidance, case studies, and more. AI blog Trend Micro

17.8.24 Harnessing LLMs for Automating BOLA Detection This post presents our research on a methodology we call BOLABuster, which uses large language models (LLMs) to detect broken object level authorization (BOLA) vulnerabilities. By automating BOLA detection at scale, we show promising results in identifying these vulnerabilities in open-source projects. AI blog Palo Alto
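For readers new to the bug class BOLABuster hunts for, here is a hedged, minimal sketch of a BOLA flaw (the handler names, route shape, and data are invented for illustration, not taken from the research): the vulnerable handler fetches any object by id without checking who owns it.

```python
# Toy data store: invoices keyed by id, each with an owner.
invoices = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob",   "total": 999},
}

def get_invoice_vulnerable(user, invoice_id):
    # BOLA: authentication happened upstream, but there is no
    # object-level check, so any user can read any invoice by id.
    return invoices[invoice_id]

def get_invoice_fixed(user, invoice_id):
    invoice = invoices[invoice_id]
    # Object-level authorization: the caller must own the requested object.
    if invoice["owner"] != user:
        raise PermissionError("not your invoice")
    return invoice

print(get_invoice_vulnerable("alice", 2)["owner"])  # bob - alice reads bob's data
```

Because the flaw lives in business logic rather than in a signature-matchable pattern, it resists static scanners, which is why the research turns to LLMs to reason about endpoint semantics.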

3.8.24 AI and automation reducing breach costs – Week in security with Tony Anscombe Organizations that leveraged AI and automation in security prevention cut the cost of a data breach by US$2.22 million compared to those that didn't deploy these technologies, according to IBM AI blog Eset

3.8.24 Beware of fake AI tools masking very real malware threats Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants AI blog Eset

27.7.24 Vulnerabilities in LangChain Gen AI Researchers from Palo Alto Networks have identified two vulnerabilities in LangChain, a popular open-source generative AI framework with over 81,000 stars on GitHub. AI blog Palo Alto

13.7.24 Declare your AIndependence: block AI bots, scrapers and crawlers with a single click To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier... AI blog Cloudflare

13.7.24 The Top 10 AI Security Risks Every Business Should Know With every week bringing news of another AI advance, it’s becoming increasingly important for organizations to understand the risks before adopting AI tools. This look at 10 key areas of concern identified by the Open Worldwide Application Security Project (OWASP) flags risks enterprises should keep in mind through the back half of the year. AI blog Trend Micro

13.7.24 The Contrastive Credibility Propagation Algorithm in Action: Improving ML-powered Data Loss Prevention The Contrastive Credibility Propagation (CCP) algorithm is a novel approach to semi-supervised learning (SSL) developed by AI researchers at Palo Alto Networks to improve model task performance with imbalanced and noisy labeled and unlabeled data. AI blog Palo Alto

6.7.24 AI in the workplace: The good, the bad, and the algorithmic While AI can liberate us from tedious tasks and even eliminate human error, it's crucial to remember its weaknesses and the unique capabilities that humans bring to the table AI blog Eset
29.6.24 ICO Scams Leverage 2024 Olympics to Lure Victims, Use AI for Fake Sites In this blog we uncover threat actors using the 2024 Olympics to lure victims into investing in an initial coin offering (ICO). Similar schemes have been found to use AI-generated images for their fake ICO websites. AI blog Trend Micro
29.6.24 AI Coding Companions 2024: AWS, GitHub, Tabnine + More AI coding companions are keeping pace with the high-speed evolution of generative AI overall, continually refining and augmenting their capabilities to make software development faster and easier than ever before. This blog looks at how the landscape is changing and key features of market-leading solutions from companies like AWS, GitHub, and Tabnine. AI blog Trend Micro
15.6.24 Explore AI-Driven Cybersecurity with Trend Micro, Using NVIDIA NIM Discover Trend Micro's integration of NVIDIA NIM to deliver an AI-driven cybersecurity solution for next-generation data centers. Engage with experts, explore demos, and learn strategies for securing AI data centers and optimizing cloud performance. AI blog Trend Micro

1.6.24 AI in HR: Is artificial intelligence changing how we hire employees forever? Much digital ink has been spilled on artificial intelligence taking over jobs, but what about AI shaking up the hiring process in the meantime? AI blog Eset

1.6.24 ESET World 2024: Big on prevention, even bigger on AI What is the state of artificial intelligence in 2024 and how can AI level up your cybersecurity game? These hot topics and pressing questions surrounding AI were front and center at the annual conference. AI blog Eset

25.5.24 What happens when AI goes rogue (and how to stop it) As AI gets closer to the ability to cause physical harm and impact the real world, “it’s complicated” is no longer a satisfying response AI blog Eset

11.5.24 RSA Conference 2024: AI hype overload Can AI effortlessly thwart all sorts of cyberattacks? Let’s cut through the hyperbole surrounding the tech and look at its actual strengths and limitations. AI blog Eset
6.4.24 Beyond Imagining – How AI Is Actively Used in Election Campaigns Around the World Deepfake materials (convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates) are often disseminated shortly before election dates to limit the opportunity for fact-checkers to respond. Regulations that ban political discussion on mainstream media in the hours leading up to elections allow unchallenged fake news to dominate the airwaves. AI blog Checkpoint
2.3.24 Deceptive AI content and 2024 elections – Week in security with Tony Anscombe As the specter of AI-generated disinformation looms large, tech giants vow to crack down on fabricated content that could sway voters and disrupt elections taking place around the world this year AI blog Eset
18.2.24 All eyes on AI | Unlocked 403: A cybersecurity podcast Artificial intelligence is on everybody’s lips these days, but there are also many misconceptions about what AI actually is and isn’t. We unpack the basics and examine AI's broader implications. AI blog Eset
4.2.24 Break the fake: The race is on to stop AI voice cloning scams As AI-powered voice cloning turbocharges imposter scams, we sit down with ESET’s Jake Moore to discuss how to hang up on ‘hi-fi’ scam calls – and what the future holds for deepfake detection AI blog Eset

14.1.24 Love is in the AI: Finding love online takes on a whole new meaning Is AI companionship the future of not-so-human connection – and even the cure for loneliness? AI blog Eset