Subverting LLM Coders – Schneier on Security – Go Health Pro

Subverting LLM Coders – Schneier on Security – Go Health Pro

Subverting LLM Coders Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection“: Abstract: Large Language Models (LLMs) have transformed code com-pletion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can … Read more

“학습만큼 망각이 필요”··· IBM이 강조하는 ‘LLM 언러닝’ – Go Health Pro

“학습만큼 망각이 필요”··· IBM이 강조하는 ‘LLM 언러닝’ – Go Health Pro

IBM 리서치의 사이언스 라이터(Science Writer)인 킴 마티노(Kim Martineau)가 ‘LLM에게 잊어버리라고 가르치는 이유’라는 블로그 콘텐츠를 통해 ‘대규모 언어 모델의 언러닝(large language model unlearning)’의 필요성과 중요성을 설명했다. 다음은 이를 요약한 내용이다. 머신 언러닝(Machine Unlearning)은 머신러닝(Machine Learning)의 반대 개념이다. 머신러닝이 다양한 데이터로 인공지능을 학습시켜 사람의 뇌처럼 기억하고 생각할 수 있도록 하는 기반을 만드는 것이라면, 머신 언러닝은 이러한 학습 … Read more

LLM attacks take just 42 seconds on average, 20% of jailbreaks succeed – Go Health Pro

LLM attacks take just 42 seconds on average, 20% of jailbreaks succeed – Go Health Pro

Attacks on large language models (LLMs) take less than a minute to complete on average, and leak sensitive data 90% of the time when successful, according to Pillar Security. Pillar’s State of Attacks on GenAI report, published Wednesday, revealed new insights on LLM attacks and jailbreaks, based on telemetry data and real-life attack examples from … Read more

The Link Between Free Will and LLM Denial – Go Health Pro

The Link Between Free Will and LLM Denial – Go Health Pro

I think a hidden tendency towards a belief in Libertarian free will is at the root of people’s opinion that LLMs aren’t capable of reasoning. I think it’s an emotional and unconscious argument that humans are special, and that by extension—LLMs cannot possibly be doing anything like we are doing. But if you remember that … Read more

‘LLM hijacking’ of cloud infrastructure uncovered by researchers – Go Health Pro

‘LLM hijacking’ of cloud infrastructure uncovered by researchers – Go Health Pro

“LLM hijacking” of cloud infrastructure for generative AI has been leveraged by attackers to run rogue chatbot services at the expense of victims, Permiso researchers reported Thursday. Attacks on AWS Bedrock environments, which support access to foundational large language models (LLMs) such as Anthropic’s Claude, were outlined in a Permiso blog post, with a honeypot … Read more

x