| DATE | NAME | Info | CATEG. | WEB |
| 8.11.25 | Introduction Over the past few months, we identified an emerging online threat that combines fraud, ... | AI blog | CHECKPOINT |
| 8.11.25 | Insiders, AI, and data sprawl converge: essential insights from the 2025 Data Security Landscape report | Data security is at a critical inflection point. Organizations today are struggling with explosive data growth, sprawling IT environments, persistent insider risks, and the adoption of generative AI (GenAI). What’s more, the rapid emergence of AI agents is giving rise to a new, more complex agentic workspace, where both humans and agents interact with sensitive data. | AI blog | PROOFPOINT |
| 8.11.25 | SesameOp: Novel backdoor uses OpenAI Assistants API for command and control | Microsoft Incident Response – Detection and Response Team (DART) researchers uncovered a new backdoor that is notable for its novel use of the OpenAI Assistants Application Programming Interface (API) as a mechanism for command-and-control (C2) communications. | AI blog | Microsoft blog |
| 8.11.25 | Beating XLoader at Speed: Generative AI as a Force Multiplier for Reverse Engineering | XLoader remains one of the most challenging malware families to analyze. Its code decrypts only at runtime and is protected by multiple layers of encryption, each locked with a different key hidden somewhere else in the binary. Even sandboxes are no help: evasions block malicious branches, and the real C2 (command and control) domains are buried among dozens of fakes. With new versions released faster than researchers can investigate, analysis is almost always a (losing) race against time. | AI blog | CHECKPOINT |
| 8.11.25 | Do robots dream of secure networking? Teaching cybersecurity to AI systems | This blog demonstrates a proof of concept using LangChain and OpenAI, integrated with Cisco Umbrella API, to provide AI agents with real-time threat intelligence for evaluating domain dispositions. | AI blog | CISCO TALOS |
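The Talos proof of concept above wires a threat-intelligence lookup into an agent as a callable tool. A minimal sketch of that pattern, with a stub reputation table standing in for a live Cisco Umbrella Investigate API call (the function name, table, and domains are illustrative, not from the post):

```python
# Sketch of the tool pattern described above: wrapping a domain-disposition
# lookup so an AI agent can call it. STUB_REPUTATION stands in for a real
# threat-intelligence API; names and data are illustrative only.
from urllib.parse import urlparse

# Stand-in for live threat-intelligence data.
STUB_REPUTATION = {
    "example.com": "benign",
    "malware-delivery.test": "malicious",
}

def domain_disposition(url_or_domain: str) -> str:
    """Return 'benign', 'malicious', or 'unknown' for a domain.

    An agent framework (e.g., LangChain) would register this function as a
    tool; the LLM then decides when to call it during a conversation.
    """
    host = urlparse(url_or_domain).hostname or url_or_domain
    return STUB_REPUTATION.get(host.lower(), "unknown")

print(domain_disposition("https://malware-delivery.test/payload.exe"))  # malicious
print(domain_disposition("newly-seen.example.org"))                     # unknown
```

In a real deployment the dictionary lookup would be replaced by an authenticated API request, and the agent would receive the disposition as tool output it can reason over.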
| 1.11.25 | From Human-Led to AI-Driven: Why Agentic AI Is Redefining Cybersecurity Strategy | Agentic AI marks the next leap in cybersecurity—autonomous systems that detect, decide, and act in real time, transforming how organizations defend against threats. | AI blog | Cyble |
| 1.11.25 | AI Security: NVIDIA BlueField Now with Vision One™ | Launching at NVIDIA GTC 2025 - Transforming AI Security with Trend Vision One™ on NVIDIA BlueField | AI blog | Trend Micro |
| 1.11.25 | When AI Agents Go Rogue: Agent Session Smuggling Attack in A2A Systems | We discovered a new attack technique, which we call agent session smuggling. This technique allows a malicious AI agent to exploit an established cross-agent communication session to send covert instructions to a victim agent. | AI blog | Palo Alto |
| 18.10.25 | Crystal Ball Series: Consolidated Instalments | In this instalment of the Crystal Ball series, we explore AI advancements for 2025 and beyond: digital twin cybersecurity, neurosymbolic AI, and deepfakes: a new era. | AI blog | Cyfirma |
| 18.10.25 | AI-aided malvertising: Exploiting a chatbot to spread scams | Cybercriminals have tricked X’s AI chatbot into promoting phishing scams in a technique that has been nicknamed “Grokking”. Here’s what to know about it. | AI blog | Eset |
| 11.10.25 | Block ransomware proliferation and easily restore files with AI in Google Drive | Ransomware remains one of the most damaging cyber threats facing organizations today. These attacks can lead to substantial financial losses, operational downtime, and data compromise, impacting organizations of all sizes and industries, including healthcare, retail, education, manufacturing, and government. | AI blog | Google Threat Intelligence |
| 11.10.25 | Operations with Untamed LLMs | Starting in June 2025, Volexity detected a series of spear phishing campaigns targeting several customers and their users in North America, Asia, and Europe. The initially observed campaigns were tailored | AI blog | VOLEXITY |
| 11.10.25 | How Your AI Chatbot Can Become a Backdoor | In this post of THE AI BREACH, learn how your chatbot can become a backdoor. | AI blog | Trend Micro |
| 11.10.25 | Weaponized AI Assistants & Credential Thieves | Learn about the state of AI and the NPM ecosystem in light of the recent 's1ngularity' attack, which weaponized AI for credential theft. | AI blog | Trend Micro |
| 11.10.25 | When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory | This article presents a proof of concept (PoC) that demonstrates how adversaries can use indirect prompt injection to silently poison the long-term memory of an AI Agent. We use Amazon Bedrock Agent for this demonstration. | AI blog | Palo Alto |
| 27.9.25 | AI-Powered App Exposes User Data, Creates Risk of Supply Chain Attacks | Trend™ Research’s analysis of Wondershare RepairIt reveals how the AI-driven app exposed sensitive user data due to unsecure cloud storage practices and hardcoded credentials, creating risks of model tampering and supply chain attacks. | AI blog | Trend Micro |
| 27.9.25 | Domino Effect: How One Vendor's AI App Breach Toppled Giants | A single AI chatbot breach at Salesloft-Drift exposed data from 700+ companies, including security leaders. The attack shows how AI integrations expand risk, and why controls like IP allow-listing, token security, and monitoring are critical. | AI blog | Trend Micro |
| 27.9.25 | This Is How Your LLM Gets Compromised | Poisoned data. Malicious LoRAs. Trojan model files. AI attacks are stealthier than ever—often invisible until it’s too late. Here’s how to catch them before they catch you. | AI blog | Trend Micro |
| 27.9.25 | DeceptiveDevelopment: From primitive crypto theft to sophisticated AI-based deception | Malware operators collaborate with covert North Korean IT workers, posing a threat to both headhunters and job seekers | AI blog | Eset |
| 20.9.25 | EvilAI Operators Use AI-Generated Code and Fake Apps for Far-Reaching Attacks | Combining AI-generated code and social engineering, EvilAI operators are executing a rapidly expanding campaign, disguising their malware as legitimate applications to bypass security, steal credentials, and persistently compromise organizations worldwide. | AI blog | Trend Micro |
| 20.9.25 | How AI-Native Development Platforms Enable Fake Captcha Pages | Cybercriminals are abusing AI-native platforms like Vercel, Netlify, and Lovable to host fake captcha pages that deceive users, bypass detection, and drive phishing campaigns. | AI blog | Trend Micro |
| 13.9.25 | EchoLeak: Send a prompt, extract secrets from Copilot AI! (CVE-2025-32711) | Introduction: What if your AI assistant wasn’t just helping you – but quietly helping someone else too? A recent zero-click exploit known as EchoLeak revealed how Microsoft 365 Copilot could be manipulated to exfiltrate sensitive information – without the... | AI blog | Seqrite |
| 6.9.25 | Hexstrike-AI: When LLMs Meet Zero-Day Exploitation | Key Findings: A newly released framework called Hexstrike-AI provides threat actors with an orchestration “brain” that ... | AI blog | Checkpoint |
| 6.9.25 | PromptLock: The First AI-Powered Ransomware & How It Works | Introduction AI-powered malware has become quite a trend now. We have always been discussing how threat actors could perform attacks by leveraging AI models, and here we have a PoC demonstrating exactly that. Although it has not yet been | AI blog | Seqrite |
| 30.8.25 | Malicious Screen Connect Campaign Abuses AI-Themed Lures for Xworm Delivery | During a recent Advanced Continual Threat Hunt (ACTH) investigation, the Trustwave SpiderLabs Threat Hunt team identified a deceptive campaign that abused fake AI-themed content to lure users into executing a malicious, pre-configured ScreenConnect installer. | AI blog | TRUSTWAVE |
| 30.8.25 | LLM Security: Risks, Best Practices, Solutions | Large language models (LLMs), such as ChatGPT, Claude, and Gemini, are transforming industries by enabling faster workflows, deeper insights, and smarter tools. Their capabilities are reshaping how we work, communicate, and innovate. | AI blog | PROOFPOINT |
| 30.8.25 | First known AI-powered ransomware uncovered by ESET Research | The discovery of PromptLock shows how malicious use of AI models could supercharge ransomware and other threats | AI blog | Eset |
| 23.8.25 | Cybercriminals Abuse AI Website Creation App For Phishing | We are often asked about the impact of AI on the threat landscape. While we have observed that large language model (LLM) generated emails or scripts have so far had little impact, some AI tools are lowering the barrier for entry for digital crime. Take, for example, services that can create websites in minutes with the help of AI. | AI blog | PROOFPOINT |
| 23.8.25 | Investors beware: AI-powered financial scams swamp social media | Can you tell the difference between legitimate marketing and deepfake scam ads? It’s not always as easy as you may think. | AI blog | Eset |
| 17.8.25 | What the White House’s AI Action Plan Means for Infrastructure and Cybersecurity Leaders | The White House’s AI Action Plan, titled “Winning the AI Race”, marks a strategic shift in how the U.S. government aims to lead in artificial intelligence while securing its technological foundations. | AI blog | Eclypsium |
| 16.8.25 | AI wrote my code and all I got was this broken prototype | Can AI really write safer code? Martin dusts off his software engineering skills to put it to the test. Find out what AI-generated code failed at, and what it was surprisingly good at. We also discuss new research on how LLMs can be used to assist in the reverse engineering of malware. | AI blog | CISCO TALOS |
| 26.7.25 | Sophos X-Ops explores why larger isn’t always better when it comes to solving security challenges with AI | | AI blog | SOPHOS |
| 26.7.25 | Revisiting Bare Metal Server Security in the Age of AI | The adoption of bare metal cloud services for AI workloads has accelerated significantly, driven by performance requirements that virtualized environments struggle to meet. | AI blog | Eclypsium |
| 19.7.25 | SophosAI at Black Hat USA ’25: Anomaly detection betrayed us, so we gave it | Sophos’ Ben Gelman and Sean Bergeron will present their research on enhancing command line classification with benign anomalous data at Las Vegas | AI blog | SOPHOS |
| 19.7.25 | Old Miner, New Tricks | FortiCNAPP Labs uncovers Lcrypt0rx, a likely AI-generated ransomware variant used in updated H2Miner campaigns targeting cloud resources for Monero mining. | AI blog | FORTINET |
| 19.7.25 | Preventing Zero-Click AI Threats: Insights from EchoLeak | A zero-click exploit called EchoLeak reveals how AI assistants like Microsoft 365 Copilot can be manipulated to leak sensitive data without user interaction. This entry breaks down how the attack works, why it matters, and what defenses are available to proactively mitigate this emerging AI-native threat. | AI blog | Trend Micro |
| 12.7.25 | Black Hat SEO Poisoning Search Engine Results For AI | Zscaler ThreatLabz researchers recently uncovered AI-themed websites designed to spread malware. The threat actors behind these attacks are exploiting the popularity of AI tools like ChatGPT and Luma AI. | AI blog | ZSCALER |
| 12.7.25 | Catching Smarter Mice with Even Smarter Cats | Explore how AI is changing the cat-and-mouse dynamic of cybersecurity, from cracking obfuscation and legacy languages to challenging new malware built with Flutter, Rust, and Delphi. | AI blog | FORTINET |
| 5.7.25 | AI Dilemma: Emerging Tech as Cyber Risk Escalates | As AI adoption accelerates, businesses face mounting cyber threats—and urgent choices about secure implementation | AI blog | Trend Micro |
| 2.7.25 | Okta observes v0 AI tool used to build phishing sites | Okta Threat Intelligence has observed threat actors abusing v0, a breakthrough Generative Artificial Intelligence (GenAI) tool created by Vercel, to develop phishing sites that impersonate legitimate sign-in webpages. | AI blog | OKTA |
| 28.6.25 | Check Point Research discovered the first known case of malware designed to trick AI-based security tools | | AI blog | Checkpoint |
| 14.6.25 | AI is Critical Infrastructure: Securing the Foundation of the Global Future | AI data centers are critical infrastructure now. The U.S. investment in AI is nearing a trillion dollars, and new agreements between global superpowers and hyperscaler companies are turning AI into what recent congressional testimony from the Center for Strategic and International Studies described as “the defining competition of the 21st century.” | AI blog | Eclypsium |
| 7.6.25 | How Good Are the LLM Guardrails on the Market? A Comparative Study on the Effectiveness of LLM Content Filtering Across Major GenAI Platforms | We conducted a comparative study of the built-in guardrails offered by three major cloud-based large language model (LLM) platforms. We examined how each platform's guardrails handle a broad range of prompts, from benign queries to malicious instructions. | AI blog | Palo Alto |
| 7.6.25 | Lost in Resolution: Azure OpenAI's DNS Resolution Issue | In late 2024, Unit 42 researchers discovered an issue with Azure OpenAI’s Domain Name System (DNS) resolution logic that could have enabled cross-tenant data leaks and meddler-in-the-middle (MitM) attacks. This issue stemmed from a misconfiguration in how the Azure OpenAI API handled domain assignments, versus how the user interface (UI) handled them. | AI blog | Palo Alto |
| 1.6.25 | Trend Micro Leading the Fight to Secure AI | New MITRE ATLAS submission helps strengthen organizations’ cyber resilience | AI blog | Trend Micro |
| 24.5.25 | Trend Secures AI Infrastructure with NVIDIA | Organizations worldwide are racing to implement agentic AI solutions to drive innovation and competitive advantage. However, this revolution introduces security challenges—particularly for organizations in highly regulated industries that require data sovereignty and strict compliance. | AI blog | Trend Micro |
| 24.5.25 | Using Agentic AI & Digital Twin for Cyber Resilience | Learn how Trend is combining agentic AI and digital twin to transform the way organizations protect themselves from cyber threats. | AI blog | Trend Micro |
| 24.5.25 | The Sting of Fake Kling: Facebook Malvertising Lures Victims to Fake AI Generation Website | In early 2025, Check Point Research (cp<r>) started tracking a threat campaign that abuses the growing popularity of AI content generation platforms by impersonating Kling AI, a legitimate AI-powered image and video synthesis tool. Promoted through Facebook advertisements, the campaign directs users to a convincing spoof of Kling AI’s website, where visitors are invited to create AI-generated images or videos directly in the browser. | AI blog | Checkpoint |
| 17.5.25 | Trend Micro Puts a Spotlight on AI at Pwn2Own Berlin | Get a sneak peek into how Trend Micro's Pwn2Own Berlin 2025 is breaking new ground, focusing on AI infrastructure and finding the bugs to proactively safeguard the future of computing. | AI blog | Trend Micro |
| 10.5.25 | Exploring PLeak: An Algorithmic Method for System Prompt Leakage | What is PLeak, and what are the risks associated with it? We explored this algorithmic technique and how it can be used to jailbreak LLMs, which could be leveraged by threat actors to manipulate systems and steal sensitive data. | AI blog | Trend Micro |
| 10.5.25 | AI Agents Are Here. So Are the Threats. | Agentic applications are programs that leverage AI agents — software designed to autonomously collect data and take actions toward specific objectives — to drive their functionality. | AI blog | Palo Alto |
| 25.4.25 | Deepfake 'doctors' take to TikTok to peddle bogus cures | Look out for AI-generated 'TikDocs' who exploit the public's trust in the medical profession to drive sales of sketchy supplements | AI blog | Eset |
| 25.4.25 | Will super-smart AI be attacking us anytime soon? | What practical AI attacks exist today? “More than zero” is the answer – and they’re getting better. | AI blog | |
| 19.4.25 | Top 10 for LLM & Gen AI Project Ranked by OWASP | Trend Micro has become a Gold sponsor of the OWASP Top 10 for LLM and Gen AI Project, merging cybersecurity expertise with OWASP's collaborative efforts to address emerging AI security risks. This partnership underscores Trend Micro's unwavering commitment to advancing AI security, ensuring a secure foundation for the transformative power of AI. | AI blog | Trend Micro |
| 19.4.25 | Care what you share | In this week’s newsletter, Thorsten muses on how search engines and AI quietly gather your data while trying to influence your buying choices. Explore privacy-friendly alternatives and get the scoop on why it's important to question the platforms you interact with online. | AI blog | Palo Alto |
| 19.4.25 | CapCut copycats are on the prowl | Cybercriminals lure content creators with promises of cutting-edge AI wizardry, only to attempt to steal their data or hijack their devices instead | AI blog | Eset |
| 12.4.25 | Incomplete NVIDIA Patch to CVE-2024-0132 Exposes AI Infrastructure and Data to Critical Risks | A previously disclosed vulnerability in NVIDIA Container Toolkit has an incomplete patch, which, if exploited, could put a wide range of AI infrastructure and sensitive data at risk. | AI blog | |
| 12.4.25 | GTC 2025: AI, Security & The New Blueprint | From quantum leaps to AI factories, GTC 2025 proved one thing: the future runs on secure foundations. | AI blog | |
| 12.4.25 | How Prompt Attacks Exploit GenAI and How to Fight Back | Palo Alto Networks has released “Securing GenAI: A Comprehensive Report on Prompt Attacks: Taxonomy, Risks, and Solutions,” which surveys emerging prompt-based attacks on AI applications and AI agents. While generative AI (GenAI) has many valid applications for enterprise productivity, there is also potential for critical security vulnerabilities in AI applications and AI agents. | AI blog | Palo Alto |
| 5.4.25 | The good, the bad and the unknown of AI: A Q&A with Mária Bieliková | The computer scientist and AI researcher shares her thoughts on the technology’s potential and pitfalls – and what may lie ahead for us | AI blog | Eset |
| 22.3.25 | AI's biggest surprises of 2024 | Unlocked 403 cybersecurity podcast (S2E1) | Here's what's been hot on the AI scene over the past 12 months, how it's changing the face of warfare, and how you can fight AI-powered scams | AI blog |
| 15.3.25 | AI-Assisted Fake GitHub Repositories Fuel SmartLoader and LummaStealer Distribution | In this blog entry, we uncovered a campaign that uses fake GitHub repositories to distribute SmartLoader, which is then used to deliver Lumma Stealer and other malicious payloads. The campaign leverages GitHub’s trusted reputation to evade detection, using AI-generated content to make fake repositories appear legitimate. | AI blog | |
| 15.3.25 | Malicious use of AI is reshaping the fraud landscape, creating major new risks for businesses | | AI blog | |
| 8.3.25 | Exploiting DeepSeek-R1: Breaking Down Chain of Thought Security | DeepSeek-R1 uses Chain of Thought (CoT) reasoning, explicitly sharing its step-by-step thought process, which we found was exploitable for prompt attacks. | AI blog | Trend Micro |
| 8.3.25 | Martin Rees: Post-human intelligence – a cosmic perspective | Starmus highlights | Take a moment to think beyond our current capabilities and consider what might come next in the grand story of evolution | AI blog | |
| 1.3.25 | Bernhard Schölkopf: Is AI intelligent? | Starmus highlights | | AI blog | Eset |
| 22.2.25 | Neil Lawrence: What makes us unique in the age of AI | Starmus highlights | | AI blog | |
| 22.2.25 | Roeland Nusselder: AI will eat all our energy, unless we make it tiny | Starmus highlights | | AI blog | |
| 22.2.25 | This month in security with Tony Anscombe – January 2025 edition | | AI blog | |
| 22.2.25 | Investigating LLM Jailbreaking of Popular Generative AI Web Products | This article summarizes our investigation into jailbreaking 17 of the most popular generative AI (GenAI) web products that offer text generation or chatbot services. | AI blog | |
| 18.1.25 | Cybersecurity and AI: What does 2025 have in store? | In the hands of malicious actors, AI tools can enhance the scale and severity of all manner of scams, disinformation campaigns and other threats | AI blog |
| 11.1.25 | AI moves to your PC with its own special hardware | Seeking to keep sensitive data private and accelerate AI workloads? Look no further than AI PCs powered by Intel Core Ultra processors with a built-in NPU. | AI blog |
| 4.1.25 | AI Pulse: Top AI Trends from 2024 - A Look Back | In this edition of AI Pulse, let's look back at top AI trends from 2024 in the rear view so we can more clearly predict AI trends for 2025 and beyond. | AI blog |
| 22.12.24 | Link Trap: GenAI Prompt Injection Attack | Prompt injection exploits vulnerabilities in generative AI to manipulate its behavior, even without extensive permissions. This attack can expose sensitive data, making awareness and preventive measures essential. Learn how it works and how to stay protected. | AI blog | Trend Micro |
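The "link trap" class of prompt injection works by having the model emit a clickable link whose query string smuggles out conversation data. One defensive check is to scan model output for links carrying unusually long query values before rendering them. A minimal sketch (the regex, threshold, and domains are illustrative, not Trend Micro's method):

```python
# Sketch of a defensive check against link-based exfiltration: flag links
# in model output whose query parameters carry suspiciously long values.
# The length threshold and patterns here are illustrative only.
import re
from urllib.parse import urlparse, parse_qs

MARKDOWN_LINK = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def suspicious_links(model_output: str, max_param_len: int = 40) -> list[str]:
    """Return markdown links whose query values exceed a length threshold."""
    flagged = []
    for url in MARKDOWN_LINK.findall(model_output):
        params = parse_qs(urlparse(url).query)
        if any(len(v) > max_param_len for vals in params.values() for v in vals):
            flagged.append(url)
    return flagged

# An injected prompt could make the model embed stolen text in a parameter:
out = "Click [here](https://attacker.test/c?d=" + "A" * 80 + ") for details."
print(suspicious_links(out))
```

A renderer could strip or defang flagged links rather than block the whole response; length alone is a coarse signal, so real filters would also check destination allow-lists.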
| 21.12.24 | Philip Torr: AI to the people | Starmus Highlights | We’re on the cusp of a technological revolution that is poised to transform our lives – and we hold the power to shape its impact | AI blog |
| 2.11.24 | Deceptive Delight: Jailbreak LLMs Through Camouflage and Distraction | This article introduces a simple and straightforward technique for jailbreaking that we call Deceptive Delight. Deceptive Delight is a multi-turn technique that engages large language models (LLM) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content. | AI blog | Palo Alto |
| 2.11.24 | How LLMs could help defenders write better and faster detection | Can LLM tools actually help defenders in the cybersecurity industry write more effective detection content? Read the full research | AI blog | Cisco Blog |
| 28.9.24 | Evolved Exploits Call for AI-Driven ASRM + XDR | AI-driven insights for managing emerging threats and minimizing organizational risk | AI blog |
| 21.9.24 | Identifying Rogue AI | This is the third blog in an ongoing series on Rogue AI. Keep following for more technical guidance, case studies, and insights | AI blog |
| 21.9.24 | AI security bubble already springing leaks | Artificial intelligence is just a spoke in the wheel of security – an important spoke but, alas, only one | AI blog |
| 31.8.24 | AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI | This issue of AI Pulse is all about agentic AI: what it is, how it works, and why security needs to be baked in from the start to prevent agentic AI systems from going rogue once they’re deployed. | AI blog |
| 31.8.24 | Unmasking ViperSoftX: In-Depth Defense Strategies Against AutoIt-Powered Threats | Explore in-depth defense strategies against ViperSoftX with the Trellix suite, and unpack why AutoIt is an increasingly popular tool for malware authors | |
| 24.8.24 | Confidence in GenAI: The Zero Trust Approach | Enterprises have gone all-in on GenAI, but the more they depend on AI models, the more risks they face. Trend Vision One™ – Zero Trust Secure Access (ZTSA) – AI Service Access bridges the gap between access control and GenAI services to protect the user journey. | AI blog |
| 24.8.24 | Securing the Power of AI, Wherever You Need It | Explore how generative AI is transforming cybersecurity and enterprise resilience | AI blog |
| 24.8.24 | Rogue AI is the Future of Cyber Threats | This is the first blog in a series on Rogue AI. Later articles will include technical guidance, case studies and more. | AI blog |
| 17.8.24 | Harnessing LLMs for Automating BOLA Detection | This post presents our research on a methodology we call BOLABuster, which uses large language models (LLMs) to detect broken object level authorization (BOLA) vulnerabilities. By automating BOLA detection at scale, we will show promising results in identifying these vulnerabilities in open-source projects. | AI blog | Palo Alto |
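The vulnerability class BOLABuster hunts for is easy to state in code: an endpoint trusts a client-supplied object ID and never checks ownership. A minimal illustration (the data and function names are hypothetical; the actual research analyzes real API implementations at scale):

```python
# Minimal illustration of broken object level authorization (BOLA), the
# vulnerability class targeted by the research above. Data and names are
# hypothetical.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's tax records"},
    2: {"owner": "bob", "body": "bob's medical file"},
}

def get_document_vulnerable(requesting_user: str, doc_id: int) -> str:
    # BOLA: the object ID from the request is trusted outright;
    # ownership is never checked, so any user can read any document.
    return DOCUMENTS[doc_id]["body"]

def get_document_fixed(requesting_user: str, doc_id: int) -> str:
    # Fixed: authorize the requester against the object's owner.
    doc = DOCUMENTS[doc_id]
    if doc["owner"] != requesting_user:
        raise PermissionError("not the object owner")
    return doc["body"]

print(get_document_vulnerable("alice", 2))  # leaks bob's data
```

BOLA is OWASP API Security Top 10 item API1; the bug is invisible to schema validation because the request is syntactically valid, which is why behavioral analysis of endpoints is needed to find it.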
| 3.8.24 | AI and automation reducing breach costs – Week in security with Tony Anscombe | Organizations that leveraged AI and automation in security prevention cut the cost of a data breach by US$2.22 million compared to those that didn't deploy these technologies, according to IBM | AI blog |
| 3.8.24 | Beware of fake AI tools masking very real malware threats | Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants | AI blog |
| 27.7.24 | Researchers from Palo Alto Networks have identified two vulnerabilities in LangChain, a popular open source generative AI framework with over 81,000 stars on GitHub: | |
| 13.7.24 | Declare your AIndependence: block AI bots, scrapers and crawlers with a single click | To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier... | AI blog | Cloudflare |
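Cloudflare's "easy button" applies blocking at their edge; a self-hosted analogue is a User-Agent check in your own request handler. A sketch (the substring list is illustrative and incomplete, and User-Agent strings are trivially spoofed, so production blocking also needs IP and behavioral checks):

```python
# Self-hosted analogue of the AI-bot "easy button" described above: deny
# requests whose User-Agent matches known AI crawlers. The block list is
# illustrative and incomplete; User-Agent strings can be spoofed.
AI_BOT_SUBSTRINGS = ("gptbot", "ccbot", "claudebot", "google-extended")

def is_ai_bot(user_agent: str) -> bool:
    """Case-insensitive substring match against known AI crawler names."""
    ua = user_agent.lower()
    return any(bot in ua for bot in AI_BOT_SUBSTRINGS)

print(is_ai_bot("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(is_ai_bot("Mozilla/5.0 (Windows NT 10.0)"))         # False
```

In a web framework this check would run in middleware, returning 403 for matches; pairing it with `robots.txt` disallow rules covers crawlers that honor the protocol.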
13.7.24 | The Top 10 AI Security Risks Every Business Should Know | With every week bringing news of another AI advance, it’s becoming increasingly important for organizations to understand the risks before adopting AI tools. This look at 10 key areas of concern identified by the Open Worldwide Application Security Project (OWASP) flags risks enterprises should keep in mind through the back half of the year. | AI blog | Trend Micro |
13.7.24 | The Contrastive Credibility Propagation Algorithm in Action: Improving ML-powered Data Loss Prevention | The Contrastive Credibility Propagation (CCP) algorithm is a novel approach to semi-supervised learning (SSL) developed by AI researchers at Palo Alto Networks to improve model task performance with imbalanced and noisy labeled and unlabeled data. | AI blog | Palo Alto |
6.7.24 | AI in the workplace: The good, the bad, and the algorithmic | While AI can liberate us from tedious tasks and even eliminate human error, it's crucial to remember its weaknesses and the unique capabilities that humans bring to the table | AI blog | Eset |
| 29.6.24 | ICO Scams Leverage 2024 Olympics to Lure Victims, Use AI for Fake Sites | In this blog we uncover threat actors using the 2024 Olympics to lure victims into investing in an initial coin offering (ICO). Similar schemes have been found to use AI-generated images for their fake ICO websites. | AI blog | Trend Micro |
| 29.6.24 | AI Coding Companions 2024: AWS, GitHub, Tabnine + More | AI coding companions are keeping pace with the high-speed evolution of generative AI overall, continually refining and augmenting their capabilities to make software development faster and easier than ever before. This blog looks at how the landscape is changing and key features of market-leading solutions from companies like AWS, GitHub, and Tabnine. | AI blog | Trend Micro |
| 15.6.24 | Explore AI-Driven Cybersecurity with Trend Micro, Using NVIDIA NIM | Discover Trend Micro's integration of NVIDIA NIM to deliver an AI-driven cybersecurity solution for next-generation data centers. Engage with experts, explore demos, and learn strategies for securing AI data centers and optimizing cloud performance. | AI blog | Trend Micro |
| 1.6.24 | AI in HR: Is artificial intelligence changing how we hire employees forever? | Much digital ink has been spilled on artificial intelligence taking over jobs, but what about AI shaking up the hiring process in the meantime? | AI blog | Eset |
| 1.6.24 | ESET World 2024: Big on prevention, even bigger on AI | What is the state of artificial intelligence in 2024 and how can AI level up your cybersecurity game? These hot topics and pressing questions surrounding AI were front and center at the annual conference. | AI blog | Eset |
| 25.5.24 | What happens when AI goes rogue (and how to stop it) | As AI gets closer to the ability to cause physical harm and impact the real world, “it’s complicated” is no longer a satisfying response | AI blog | Eset |
| 11.5.24 | RSA Conference 2024: AI hype overload | Can AI effortlessly thwart all sorts of cyberattacks? Let’s cut through the hyperbole surrounding the tech and look at its actual strengths and limitations. | AI blog | Eset |
| 6.4.24 | BEYOND IMAGINING – HOW AI IS ACTIVELY USED IN ELECTION CAMPAIGNS AROUND THE WORLD | Deepfake materials (convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates) are often disseminated shortly before election dates to limit the opportunity for fact-checkers to respond. Regulations which ban political discussion on mainstream media in the hours leading up to elections, allow unchallenged fake news to dominate the airwaves. | AI blog | Checkpoint |
| 2.3.24 | Deceptive AI content and 2024 elections – Week in security with Tony Anscombe | As the specter of AI-generated disinformation looms large, tech giants vow to crack down on fabricated content that could sway voters and disrupt elections taking place around the world this year | AI blog | Eset |
| 18.2.24 | All eyes on AI | Unlocked 403: A cybersecurity podcast | Artificial intelligence is on everybody’s lips these days, but there are also many misconceptions about what AI actually is and isn’t. We unpack the basics and examine AI's broader implications. | AI blog | Eset |
| 4.2.24 | Break the fake: The race is on to stop AI voice cloning scams | As AI-powered voice cloning turbocharges imposter scams, we sit down with ESET’s Jake Moore to discuss how to hang up on ‘hi-fi’ scam calls – and what the future holds for deepfake detection | AI blog | Eset |
| 14.1.24 | Love is in the AI: Finding love online takes on a whole new meaning | Is AI companionship the future of not-so-human connection – and even the cure for loneliness? | AI blog | Eset |