New VPN Backdoor – Schneier on Security – Go Health Pro

New VPN Backdoor A newly discovered VPN backdoor uses some interesting tactics to avoid detection: When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can’t be leveraged by competing groups or detected by defenders. One countermeasure is to equip the backdoor with a … Read more

Ultralytics Supply-Chain Attack – Schneier on Security – Go Health Pro

Ultralytics Supply-Chain Attack Last week, we saw a supply-chain attack against the Ultralytics AI library on GitHub. A quick summary: On December 4, a malicious version 8.3.41 of the popular AI library ultralytics ­—which has almost 60 million downloads—was published to the Python Package Index (PyPI) package repository. The package contained downloader code that was … Read more

Trust Issues in AI – Schneier on Security – Go Health Pro

Trust Issues in AI For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from … Read more

AIs Discovering Vulnerabilities – Schneier on Security – Go Health Pro

AIs Discovering Vulnerabilities – Schneier on Security – Go Health Pro

AIs Discovering Vulnerabilities I’ve been writing about the possibility of AIs automatically discovering code vulnerabilities since at least 2018. This is an ongoing area of research: AIs doing source code scanning, AIs finding zero-days in the wild, and everything in between. The AIs aren’t very good at it yet, but they’re getting better. Here’s some … Read more

Subverting LLM Coders – Schneier on Security – Go Health Pro

AIs Discovering Vulnerabilities – Schneier on Security – Go Health Pro

Subverting LLM Coders Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection“: Abstract: Large Language Models (LLMs) have transformed code com-pletion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can … Read more

x