xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs – Krebs on Security – Go Health Pro

xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs – Krebs on Security – Go Health Pro

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned. … Read more

‘Vibe coding’ using LLMs susceptible to most common security flaws – Go Health Pro

‘Vibe coding’ using LLMs susceptible to most common security flaws – Go Health Pro

“Vibe coding,” a recent trend of using large language models (LLMs) to generate code based on plain-language prompts, can yield code that is vulnerable to up to nine out of the top 10 weaknesses in the Common Weakness Enumeration (CWE), according to Backslash Security.Vibe coding, while only gaining popularity within the last few months, is … Read more

“Emergent Misalignment” in LLMs – Schneier on Security – Go Health Pro

“Emergent Misalignment” in LLMs Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs“: Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are … Read more

Race Condition Attacks against LLMs – Go Health Pro

Race Condition Attacks against LLMs These are two attacks against the system components surrounding LLMs: We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user inputs … Read more