COMMENTARY: Microsoft Security's artificial intelligence (AI) security team recently shared its findings from a multi-year study that involved red teaming 100 generative AI (GenAI) products. The findings from the report are telling. The most important lesson is a point of consensus: GenAI systems create and amplify security risks, and humans, not machines, are central to improving and securing AI.

This finding affects everyone: organizations developing and releasing software, those purchasing that software, and those using AI tooling for their IT and other business operations. The risks posed to this ecosystem have never been greater.

Because of the growing interconnectedness between AI and enterprise software development and consumption, AI and the software supply chain are now one and the same. Organizations that either produce or consume enterprise software need to prepare for the risks AI presents.

Here's how AI writes code, embeds itself in our software ecosystem, and launches a new generation of software supply chain attacks.

AI is the developer

The analyst firm Gartner predicts that by 2028, 75% of enterprise software engineers will use AI-code assistants, a sign of growing reliance on AI to automate software engineering tasks. The astonishing growth of the AI-coding tools market, projected to grow from $4.3 billion in 2024 to $12.6 billion by 2028, highlights this new reality.

While this growth will undoubtedly breed innovation, it will also bring major cybersecurity risks for software-producing and software-consuming organizations. That's because the AI tools themselves lack the experience, context, and awareness that a human software engineer possesses. Human software teams are essential for discerning high-quality from low-quality code and for identifying unsafe components.

AI-coding assistants train on code that contains known and patched vulnerabilities, deprecated encryption algorithms, and outdated open-source components. Even worse, these assistants can introduce new software supply chain security risks that traditional application security testing (AST) tools can't easily detect. Take, for example, data poisoning, which may corrupt the learning models an AI-coding tool relies on. Human oversight, context, and review have been critical in addressing these issues.

The rising reliance on these tools reinforces Microsoft's warning that GenAI systems can create and amplify security risks.

AI is the model

Machine learning (ML) models are powerful, and the use of large language models (LLMs) and GenAI in AI systems and tools has exploded. We see this growth on Hugging Face, a leading ML model-sharing platform, which in September 2024 hit 1 million ML models, up from 300,000 in 2023. Platforms like Hugging Face are essential in that they offer tools, services, and an online community that facilitate the development, modification, and deployment of ML models. These models are becoming more pervasive, and their use is not always disclosed.

However, just like any other software-based product or component, ML models and the platforms they reside on are at risk of being compromised or manipulated by malicious actors. Attackers can exploit this technology to execute malicious commands, steal or corrupt sensitive data, perform espionage, and compromise an organization's systems.
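To make that risk concrete, consider model serialization. The following is a minimal sketch, not drawn from any of the research cited above: many ML model formats are built on Python's pickle, which executes code during deserialization, so loading an untrusted model file can hand an attacker command execution.

```python
import os
import pickle

class MaliciousModel:
    # __reduce__ tells pickle how to rebuild this object; an attacker can
    # make "rebuilding" mean "run an arbitrary shell command" instead.
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran at model-load time",))

# The attacker uploads this payload as a "pretrained model."
payload = pickle.dumps(MaliciousModel())

# The victim's command runs the moment the model file is loaded,
# before any weights are ever used.
pickle.loads(payload)
```

Safer formats such as safetensors, which store only raw tensor data, exist precisely because of this class of attack, but they help only when teams verify which format a downloaded model actually uses.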
AI is the attacker

Well-intentioned organizations aren't the only ones with the mindset of "work smarter, not harder." Threat actors have already begun to embrace LLMs and GenAI to sharpen and automate their attacks. It's essential that organizations producing or consuming enterprise software become aware of these AI-powered threats and understand how to mitigate them.

Security firms have already proved that AI-generated malware is real. While GenAI in its current form is not yet proficient enough to create malware from scratch, it can modify existing malware samples to make them more difficult to detect and mitigate. For example, HP threat researchers identified a malware campaign spreading the AsyncRAT malware using VBScript and JavaScript, which the team concluded was written with the assistance of GenAI.

At the same time, LLMs have proven efficient at code scanning, making it easier than ever for software producers to analyze code for security vulnerabilities and other flaws. Unfortunately, threat actors are just as capable of using this code-scanning technology to fuel devastating software supply chain attacks. This has already been seen in how attackers leverage LLMs to scan open-source software repositories in search of flaws they can exploit.

Mind over machine (learning)

AI now plays the software developer, the product, and the attacker at once, and that convergence is poised to overwhelm organizations. Enterprises already dealing with security tool sprawl and overworked security teams need to consider the most effective mitigation strategy for each of these scenarios.

Because AI and the software supply chain are now interconnected, organizations that produce or consume enterprise software must shift their efforts from traditional testing toward comprehensive software supply chain security. This has become a requirement because AI lowers the bar for pulling off sophisticated supply chain attacks. And the next generation of AI tools, agentic AI, will arrive and present yet more risk, with ML agents producing their own code.

With software complexity about to explode, enterprise security teams need a powerful, modern, and all-encompassing approach to software supply chain security, one that can verify the outputs of AI-coding assistants and the makeup of ML models. It's the only way to stop the coming AI-generated attacks.
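As one small, hedged illustration of what verifying an AI assistant's output can look like in practice, the sketch below checks a suggested dependency against the public OSV.dev vulnerability database before it lands in a build. The package and version shown are just an example, and a real deployment would wire this into code review or CI rather than a one-off script.

```python
import json
import urllib.request

def known_vulnerabilities(package: str, version: str,
                          ecosystem: str = "PyPI") -> list[str]:
    """Query the public OSV.dev API for advisories affecting one
    package version; returns the advisory IDs found."""
    body = json.dumps({
        "package": {"name": package, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Example: vet a pinned dependency an AI-coding assistant suggested.
advisories = known_vulnerabilities("requests", "2.19.1")
if advisories:
    print("Known vulnerabilities:", ", ".join(advisories))
```

Checks like this cover only one slice of the problem, known-vulnerable components, which is why broader verification of AI-generated code and ML model contents still demands human review.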
Saša Zdjelar, chief trust officer, ReversingLabs

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.