New LLM jailbreak uses models’ evaluation skills against them – Go Health Pro

A new jailbreak method for large language models (LLMs) exploits models’ ability to identify and score harmful content, tricking them into generating content related to malware, illegal activity, harassment and more. The “Bad Likert Judge” multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and …

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models – Go Health Pro

Oct 29, 2024Ravie LakshmananAI Security / Vulnerability A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, have been reported as part … Read more

Choosing the best AI models for your business – Go Health Pro

From digital assistants to code generation, many small to medium businesses (SMBs) harbor lofty ambitions for generative AI (GenAI). The next step, however, is just as significant: whether to build their AI initiatives from scratch or simply secure a quick win with an existing AI tool. For a resource-strapped business, this decision comes with a …

Chinese AI groups get creative to drive down cost of models – Go Health Pro

Chinese artificial intelligence companies are driving down costs to create competitive models, as they contend with US chip restrictions and smaller budgets than their Western counterparts. Start-ups such as 01.ai and DeepSeek have reduced prices …

WitnessAI is building guardrails for generative AI models – Go Health Pro

Generative AI makes stuff up. It can be biased. Sometimes it spits out toxic text. So can it be “safe”? Rick Caccia, the CEO of WitnessAI, believes it can. “Securing AI models is a real problem, and it’s one that’s especially shiny for AI researchers, but it’s different from securing use,” Caccia, formerly SVP of …
