Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more A new algorithm developed by researchers ...
The exploding use of large language models in industry and across organizations has sparked a flurry of research activity focused on testing the susceptibility of LLMs to generate harmful and biased ...
Update, March 21, 2025: This story, originally published March 19, has been updated with highlights from a new report into the AI threat landscape as well as a statement from OpenAI regarding the LLM ...
TEL AVIV, Israel, March 18, 2025 /PRNewswire/ -- Cato Networks, the SASE leader, today published the 2025 Cato CTRL™ Threat Report, which reveals how a Cato CTRL threat intelligence researcher with no ...
Security researchers took a mere 24 hours after the release of GPT-5 to jailbreak the large language model (LLM), prompting it to produce directions for building a homemade bomb, colloquially known as ...
Cybercriminals are hijacking mainstream LLM APIs like Grok and Mixtral with jailbreak prompts to relaunch WormGPT as potent phishing and malware tools. Two new variants of WormGPT, the malicious large ...
Security researchers find way to abuse Meta's Llama LLM for remote code execution Meta addressed the problem in early October 2024 The problem was using pickle as a serialization format for socket ...
AI frameworks, including Meta’s Llama, are prone to automatic Python deserialization by pickle that could lead to remote code execution. Meta’s large language model (LLM) framework, Llama, suffers a ...
PandasAI, an open source project by SinaptikAI, has been found vulnerable to Prompt Injection attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, ...