Moving from ML to LLM GPT in payments, #Stripe sets the standard

Just continuing the AI theme, it’s really interesting to see how Stripe – latest valuation over $90 billion – has adopted the technology. Effectively, it has tried to create a GPT for payments. It is pushing the envelope after seeing great results from earlier developments that used traditional machine learning models. These resulted in …

Secure Code Reviews, LLM Coding Assistants, and Trusting Code – Rey Bango, Karim Toubba, Gal Elbaz – ASW #330

Developers are relying on LLMs as coding assistants, so where are the LLM assistants for appsec? The principles behind secure code reviews don’t really change based on who writes the code, whether human or AI. But more code means more reasons for appsec to scale its practices and figure out how to establish trust in …
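
As a rough illustration of what an LLM assistant for appsec could look like, here is a minimal sketch that asks a model to security-review a diff. It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name and prompt wording are placeholders, not anything from the episode.

```python
# Minimal sketch: ask an LLM to security-review a unified diff.
# Assumes the openai package and an OPENAI_API_KEY env var; the
# model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def review_diff(diff: str) -> str:
    """Return an LLM-generated security review of a unified diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a secure code reviewer. For the diff below, "
                    "list concrete security issues (injection, authz gaps, "
                    "hardcoded secrets, unsafe deserialization) with line "
                    "references, and say explicitly if you find none."
                ),
            },
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    import sys
    print(review_diff(sys.stdin.read()))
```

Piping `git diff` into a script like this is an easy starting point; the harder question the episode raises is whether such reviews are trustworthy enough to scale appsec practices around.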

12K hardcoded API keys and passwords found in public LLM training data

Roughly 12,000 hardcoded live API keys and passwords were found in Common Crawl, a large dataset used to train LLMs such as DeepSeek. Security pros say hardcoded credentials are dangerous because hackers can more easily exploit them to gain access to sensitive data, systems, and networks. The threat actor in this case practiced LLMjacking, in which cybercriminals …
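
The scanning involved is conceptually simple. Below is a minimal sketch of how one might sweep a text corpus for hardcoded credentials using a few well-known key formats; the regexes (AWS access key IDs, Stripe-style secret keys, generic password assignments) are widely published patterns, not the ruleset used in the reported research.

```python
# Minimal sketch: scan text for hardcoded-credential patterns.
# The regexes are widely published formats, not the actual research
# ruleset; real scanners also validate whether matched keys are live.
import re
import sys

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_secret_key": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
    "generic_password": re.compile(
        r"(?i)\bpassword\s*[:=]\s*['\"][^'\"]{6,}['\"]"
    ),
}

def scan(text: str):
    """Yield (rule_name, matched_string) pairs for each hit."""
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

if __name__ == "__main__":
    for name, hit in scan(sys.stdin.read()):
        # Truncate hits so the scanner's own output doesn't leak secrets.
        print(f"{name}: {hit[:12]}…")
```

At Common Crawl scale the same idea runs over archived web records rather than stdin; the detail the article stresses is that these keys were live, not merely present in old text.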

New LLM jailbreak uses models’ evaluation skills against them

A new jailbreak method for large language models (LLMs) takes advantage of models’ ability to identify and score harmful content in order to trick the models into generating content related to malware, illegal activity, harassment and more. The “Bad Likert Judge” multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and …
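
The capability being abused here is ordinary LLM-as-judge evaluation. Below is a minimal sketch of that evaluation pattern in its benign, defensive form – scoring a response for harm on a Likert scale – assuming the `openai` package and an `OPENAI_API_KEY` environment variable; the rubric wording and model name are placeholders. The jailbreak works by coaxing the target model into this judge role and then into generating example texts for each score level.

```python
# Minimal sketch of Likert-style LLM-as-judge harm scoring: the
# benign capability that "Bad Likert Judge" turns against the model.
# Assumes the openai package and OPENAI_API_KEY; the rubric and
# model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Rate the following text for harmful content on a 1-5 Likert "
    "scale, where 1 = clearly benign and 5 = clearly harmful "
    "(malware, illegal activity, harassment). Reply with the "
    "number only."
)

def harm_score(text: str) -> int:
    """Return a 1-5 harm rating for `text` from the judge model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": text},
        ],
    )
    return int(response.choices[0].message.content.strip())
```

A guardrail built only on judges like this has the weakness Unit 42 describes: a model willing to score harmfulness can often be walked, step by step, into illustrating the top of the scale.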

Subverting LLM Coders – Schneier on Security

Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection”: Abstract: Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can …
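
The attack itself is not something a short snippet should reproduce, so what follows is only a hedged sketch of a mitigation idea: a last-line lint that flags known-dangerous constructs in a model-suggested Python snippet before a developer accepts it. The pattern list is illustrative and mine, not from the paper.

```python
# Minimal sketch: lint an LLM code suggestion for risky Python
# constructs before accepting it. A mitigation idea only – not the
# paper's attack or defense; the pattern list is illustrative.
import re

DANGEROUS = [
    (re.compile(r"\beval\s*\("), "eval() on dynamic input"),
    (re.compile(r"\bexec\s*\("), "exec() on dynamic input"),
    (re.compile(r"\bpickle\.loads?\s*\("), "unpickling untrusted data"),
    (re.compile(r"shell\s*=\s*True"), "subprocess with shell=True"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
]

def lint_suggestion(code: str) -> list[str]:
    """Return human-readable warnings for risky patterns in `code`."""
    warnings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for pattern, why in DANGEROUS:
            if pattern.search(line):
                warnings.append(f"line {lineno}: {why}: {line.strip()}")
    return warnings

if __name__ == "__main__":
    import sys
    for warning in lint_suggestion(sys.stdin.read()):
        print(warning)
```

The paper’s framing (“against Strong Detection”) is precisely that disguised vulnerabilities are crafted to slip past this kind of pattern matching, so a lint like this is a floor, not a defense.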