Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security – Go Health Pro

Oct 25, 2024Ravie LakshmananCloud Security / Artificial Intelligence

Apple has publicly made available its Private Cloud Compute (PCC) Virtual Research Environment (VRE), allowing the research community to inspect and verify the privacy and security guarantees of its offering.

PCC, which Apple unveiled earlier this June, has been marketed as the “most advanced security architecture ever deployed for cloud AI compute at scale.” With the new technology, the idea is to offload computationally complex Apple Intelligence requests to the cloud in a manner that doesn’t sacrifice user privacy.

Apple said it’s inviting “all security and privacy researchers — or anyone with interest and a technical curiosity — to learn more about PCC and perform their own independent verification of our claims.”

To further incentivize research, the iPhone maker said it’s expanding the Apple Security Bounty program to include PCC by offering monetary payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified in it.

This includes flaws that could allow execution of malicious code on the server, and exploits capable of extracting users’ sensitive data, or information about the user’s requests.

The VRE aims to offer a suite of tools to help researchers carry out their analysis of PCC from the Mac. It comes with a virtual Secure Enclave Processor (SEP) and leverages built-in macOS support for paravirtualized graphics to enable inference.

Apple also said it’s making the source code associated with some components of PCC accessible via GitHub to facilitate a deeper analysis. This includes CloudAttestation, Thimble, splunkloggingd, and srd_tools.

“We designed Private Cloud Compute as part of Apple Intelligence to take an extraordinary step forward for privacy in AI,” the Cupertino-based company said. “This includes providing verifiable transparency – a unique property that sets it apart from other server-based AI approaches.”

The development comes as broader research into generative artificial intelligence (AI) continues to uncover novel ways to jailbreak large language models (LLMs) and produce unintended output.

Earlier this week, Palo Alto Networks detailed a technique called Deceptive Delight that involves mixing malicious and benign queries together to trick AI chatbots into bypassing their guardrails by taking advantage of their limited “attention span.”

The attack requires a minimum of two interactions, and works by first asking the chatbot to logically connect several events – including a restricted topic (e.g., how to make a bomb) – and then asking it to elaborate on the details of each event.

Researchers have also demonstrated what’s called a ConfusedPilot attack, which targets Retrieval-Augmented Generation (RAG) based AI systems like Microsoft 365 Copilot by poisoning the data environment with a seemingly innocuous document containing specifically crafted strings.

“This attack allows manipulation of AI responses simply by adding malicious content to any documents the AI system might reference, potentially leading to widespread misinformation and compromised decision-making processes within the organization,” Symmetry Systems said.

Separately, it has been found that it’s possible to tamper with a machine learning model’s computational graph to plant “codeless, surreptitious” backdoors in pre-trained models like ResNet, YOLO, and Phi-3, a technique codenamed ShadowLogic.

“Backdoors created using this technique will persist through fine-tuning, meaning foundation models can be hijacked to trigger attacker-defined behavior in any downstream application when a trigger input is received, making this attack technique a high-impact AI supply chain risk,” Hidden Layer researchers Eoin Wickens, Kasimir Schulz, and Tom Bonner said.

“Unlike standard software backdoors that rely on executing malicious code, these backdoors are embedded within the very structure of the model, making them more challenging to detect and mitigate.”

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.

Leave a Comment

x