‘LLM hijacking’ of cloud infrastructure uncovered by researchers – Go Health Pro

‘LLM hijacking’ of cloud infrastructure uncovered by researchers – Go Health Pro

“LLM hijacking” of cloud infrastructure for generative AI has been leveraged by attackers to run rogue chatbot services at the expense of victims, Permiso researchers reported Thursday. Attacks on AWS Bedrock environments, which support access to foundational large language models (LLMs) such as Anthropic’s Claude, were outlined in a Permiso blog post, with a honeypot … Read more

Gemini for Workspace susceptible to indirect prompt injection, researchers say – Go Health Pro

Gemini for Workspace susceptible to indirect prompt injection, researchers say – Go Health Pro

Google’s Gemini for Workspace, which integrates its Gemini large-language model (LLM) assistant across its Workspace suite of tools, is susceptible to indirect prompt injection, HiddenLayer researchers said in a blog post Wednesday. Indirect prompt injection is a method of manipulating an AI model’s output by inserting malicious instructions into a data source the AI relies … Read more