Making headlines everywhere is the CopyFail Linux kernel vulnerability, which allows local privilege escalation (LPE) from any user to root privileges on most kernels and distributions. Local ...
Google warns prompt injection attacks are 32% up as hackers target GitHub Copilot, Claude and AI agents with $5,000 PayPal ...
Google's security team scanned billions of web pages and found real payloads designed to trick AI agents into sending money, ...
Google has analyzed AI indirect prompt injection attempts involving sites on the public web and noticed an increase in ...
Security researchers have discovered 10 new indirect prompt injection (IPI) payloads targeting AI agents with malicious ...
Researchers say a prompt injection bug in Google's Antigravity AI coding tool could have let attackers run commands, despite ...
A prompt injection flaw in Google’s Antigravity IDE turns a file search tool into a remote code execution vector, bypassing ...
Antigravity Strict Mode bypass disclosed Jan 7, 2026, patched Feb 28, enables arbitrary code execution via fd -X flag.
Cybersecurity researchers have discovered a critical "by design" weakness in the Model Context Protocol's (MCP) architecture ...
Microsoft assigned CVE-2026-21520, a CVSS 7.5 indirect prompt injection vulnerability, to Copilot Studio. Capsule Security discovered the flaw, coordinated disclosure with Microsoft, and the patch was ...
Prompt injection flaws in Microsoft Copilot Studio and Salesforce Agentforce let attackers weaponize form inputs to override agents' behavior and exfiltrate sensitive customer and business data.
Security leaders must adapt large language model controls such as input validation, output filtering and least-privilege access for artificial intelligence systems to prevent prompt injection attacks.