AI blog 2024

DATE | NAME | Info | CATEG. | WEB

28.9.24 | Evolved Exploits Call for AI-Driven ASRM + XDR | AI-driven insights for managing emerging threats and minimizing organizational risk | AI blog | Trend Micro

21.9.24 | Identifying Rogue AI | This is the third blog in an ongoing series on Rogue AI. Keep following for more technical guidance, case studies, and insights | AI blog | Trend Micro

21.9.24 | AI security bubble already springing leaks | Artificial intelligence is just a spoke in the wheel of security – an important spoke but, alas, only one | AI blog | Eset

31.8.24 | AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI | This issue of AI Pulse is all about agentic AI: what it is, how it works, and why security needs to be baked in from the start to prevent agentic AI systems from going rogue once they’re deployed. | AI blog | Trend Micro

31.8.24 | Unmasking ViperSoftX: In-Depth Defense Strategies Against AutoIt-Powered Threats | Explore in-depth defense strategies against ViperSoftX with the Trellix suite, and unpack why AutoIt is an increasingly popular tool for malware authors | AI blog | Trellix

24.8.24 | Confidence in GenAI: The Zero Trust Approach | Enterprises have gone all-in on GenAI, but the more they depend on AI models, the more risks they face. Trend Vision One™ – Zero Trust Secure Access (ZTSA) – AI Service Access bridges the gap between access control and GenAI services to protect the user journey. | AI blog | Trend Micro

24.8.24 | Securing the Power of AI, Wherever You Need It | Explore how generative AI is transforming cybersecurity and enterprise resilience | AI blog | Trend Micro

24.8.24 | Rogue AI is the Future of Cyber Threats | This is the first blog in a series on Rogue AI. Later articles will include technical guidance, case studies and more. | AI blog | Trend Micro

17.8.24 | Harnessing LLMs for Automating BOLA Detection | This post presents our research on a methodology we call BOLABuster, which uses large language models (LLMs) to detect broken object level authorization (BOLA) vulnerabilities. By automating BOLA detection at scale, we will show promising results in identifying these vulnerabilities in open-source projects. | AI blog | Palo Alto

3.8.24 | AI and automation reducing breach costs – Week in security with Tony Anscombe | Organizations that leveraged AI and automation in security prevention cut the cost of a data breach by US$2.22 million compared to those that didn't deploy these technologies, according to IBM | AI blog | Eset

3.8.24 | Beware of fake AI tools masking very real malware threats | Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants | AI blog | Eset

27.7.24 | Vulnerabilities in LangChain Gen AI | Researchers from Palo Alto Networks have identified two vulnerabilities in LangChain, a popular open source generative AI framework with over 81,000 stars on GitHub. | AI blog | Palo Alto

13.7.24 | Declare your AIndependence: block AI bots, scrapers and crawlers with a single click | To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier... | AI blog | Cloudflare

13.7.24 | The Top 10 AI Security Risks Every Business Should Know | With every week bringing news of another AI advance, it’s becoming increasingly important for organizations to understand the risks before adopting AI tools. This look at 10 key areas of concern identified by the Open Worldwide Application Security Project (OWASP) flags risks enterprises should keep in mind through the back half of the year. | AI blog | Trend Micro

13.7.24 | The Contrastive Credibility Propagation Algorithm in Action: Improving ML-powered Data Loss Prevention | The Contrastive Credibility Propagation (CCP) algorithm is a novel approach to semi-supervised learning (SSL) developed by AI researchers at Palo Alto Networks to improve model task performance with imbalanced and noisy labeled and unlabeled data. | AI blog | Palo Alto

6.7.24 | AI in the workplace: The good, the bad, and the algorithmic | While AI can liberate us from tedious tasks and even eliminate human error, it's crucial to remember its weaknesses and the unique capabilities that humans bring to the table | AI blog | Eset
29.6.24 | ICO Scams Leverage 2024 Olympics to Lure Victims, Use AI for Fake Sites | In this blog we uncover threat actors using the 2024 Olympics to lure victims into investing in an initial coin offering (ICO). Similar schemes have been found to use AI-generated images for their fake ICO websites. | AI blog | Trend Micro
29.6.24 | AI Coding Companions 2024: AWS, GitHub, Tabnine + More | AI coding companions are keeping pace with the high-speed evolution of generative AI overall, continually refining and augmenting their capabilities to make software development faster and easier than ever before. This blog looks at how the landscape is changing and key features of market-leading solutions from companies like AWS, GitHub, and Tabnine. | AI blog | Trend Micro
15.6.24 | Explore AI-Driven Cybersecurity with Trend Micro, Using NVIDIA NIM | Discover Trend Micro's integration of NVIDIA NIM to deliver an AI-driven cybersecurity solution for next-generation data centers. Engage with experts, explore demos, and learn strategies for securing AI data centers and optimizing cloud performance. | AI blog | Trend Micro

1.6.24 | AI in HR: Is artificial intelligence changing how we hire employees forever? | Much digital ink has been spilled on artificial intelligence taking over jobs, but what about AI shaking up the hiring process in the meantime? | AI blog | Eset

1.6.24 | ESET World 2024: Big on prevention, even bigger on AI | What is the state of artificial intelligence in 2024 and how can AI level up your cybersecurity game? These hot topics and pressing questions surrounding AI were front and center at the annual conference. | AI blog | Eset

25.5.24 | What happens when AI goes rogue (and how to stop it) | As AI gets closer to the ability to cause physical harm and impact the real world, “it’s complicated” is no longer a satisfying response | AI blog | Eset

11.5.24 | RSA Conference 2024: AI hype overload | Can AI effortlessly thwart all sorts of cyberattacks? Let’s cut through the hyperbole surrounding the tech and look at its actual strengths and limitations. | AI blog | Eset
6.4.24 | Beyond Imagining – How AI Is Actively Used in Election Campaigns Around the World | Deepfake materials (convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates) are often disseminated shortly before election dates to limit the opportunity for fact-checkers to respond. Regulations which ban political discussion on mainstream media in the hours leading up to elections allow unchallenged fake news to dominate the airwaves. | AI blog | Checkpoint
2.3.24 | Deceptive AI content and 2024 elections – Week in security with Tony Anscombe | As the specter of AI-generated disinformation looms large, tech giants vow to crack down on fabricated content that could sway voters and disrupt elections taking place around the world this year | AI blog | Eset
18.2.24 | All eyes on AI | Unlocked 403: A cybersecurity podcast | Artificial intelligence is on everybody’s lips these days, but there are also many misconceptions about what AI actually is and isn’t. We unpack the basics and examine AI's broader implications. | AI blog | Eset
4.2.24 | Break the fake: The race is on to stop AI voice cloning scams | As AI-powered voice cloning turbocharges imposter scams, we sit down with ESET’s Jake Moore to discuss how to hang up on ‘hi-fi’ scam calls – and what the future holds for deepfake detection | AI blog | Eset

14.1.24 | Love is in the AI: Finding love online takes on a whole new meaning | Is AI companionship the future of not-so-human connection – and even the cure for loneliness? | AI blog | Eset