Microsoft fixes CVSS 9.9 vulnerability in Azure AI Face service

Microsoft has fixed a critical vulnerability in its Azure AI Face service that was assigned a CVSS score of 9.9 and could potentially lead to elevation of privileges over a network.

Azure AI Face is a cloud-based facial recognition service that is capable of detecting, analyzing and recognizing human faces. The service can be used by developers to integrate facial recognition capabilities into applications for purposes such as biometric identity verification, liveness detection, touchless access control or automatic redaction of faces from videos, according to Microsoft.
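For context, integrating the service typically means calling its REST API or one of its client SDKs. The snippet below is a minimal sketch assuming the azure-cognitiveservices-vision-face Python package; the endpoint, key and image URLs are placeholders, and exact package and method names may differ across SDK versions.

    # Minimal sketch: detect faces in two images, then verify whether they match.
    # Endpoint, key and image URLs are placeholders.
    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials

    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
    KEY = "<your-face-api-key>"                                        # placeholder

    client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

    detected_a = client.face.detect_with_url("https://example.com/photo_a.jpg")
    detected_b = client.face.detect_with_url("https://example.com/photo_b.jpg")

    if detected_a and detected_b:
        result = client.face.verify_face_to_face(detected_a[0].face_id,
                                                 detected_b[0].face_id)
        print(f"Same person: {result.is_identical} "
              f"(confidence {result.confidence:.2f})")

Verification calls like these are what sit behind biometric identity checks, which is why an authentication bypass in the service carries such a high severity rating.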

Microsoft was light on details, but it is reasonable to assume the vulnerability stems from a discrepancy between how images are analyzed and how they are processed by Azure AI Face.

The vulnerability, tracked as CVE-2025-21415, is classified as an authentication bypass by spoofing flaw, which would have allowed privilege escalation by an “authorized attacker,” according to a security update by the Microsoft Security Response Center (MSRC) published last week.

The flaw could have been exploited remotely by an attacker with low privileges, and the attack complexity was classified as low, with no interaction needed from victim users. The vulnerability posed a high threat to confidentiality and system integrity and could have resulted in a total loss of availability for legitimate users, accounting for the critical CVSS score of 9.9.
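Those metrics map directly onto the CVSS 3.1 base score formula. The sketch below reproduces the 9.9 figure from the characteristics described above; the changed-scope value is an assumption, since the full vector string is not quoted in the advisory excerpt here.

    import math

    # CVSS 3.1 metric weights matching the description of CVE-2025-21415:
    # network attack vector, low complexity, low privileges, no user interaction,
    # high impact to confidentiality, integrity and availability.
    # Scope: Changed is an assumption; the full vector string is not quoted above.
    AV, AC, UI = 0.85, 0.77, 0.85   # Network / Low / None
    PR = 0.68                       # Low privileges under a changed scope
    C = I = A = 0.56                # High impact on all three

    exploitability = 8.22 * AV * AC * PR * UI
    iss = 1 - (1 - C) * (1 - I) * (1 - A)
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15  # changed-scope form

    base = min(1.08 * (impact + exploitability), 10)
    print(math.ceil(base * 10) / 10)  # 9.9 -- CVSS rounds up to one decimal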

MSRC reported that the vulnerability had not been exploited in the wild, although a proof-of-concept exploit was available. The flaw was reported to Microsoft anonymously, and the fix was deployed on Microsoft's side, requiring no customer action.

Further details about the nature of the vulnerability or exploit were not disclosed, and Microsoft did not immediately respond to questions from SC Media about CVE-2025-21415.

It is unclear whether the authentication bypass involved spoofing of facial data through the use of a deepfake or other exploit.

Deepfakes – AI-generated imitations of a person’s likeness – are considered a potential threat to facial recognition-based authentication systems, with a 2024 Gartner report predicting that 30% of companies would lose confidence in facial biometric verification systems by 2026.

Attackers may target facial biometric systems through presentation attacks – where a facial imitation is placed in front of a camera or scanner – or through digital injection attacks that bypass a physical camera and directly input imagery into a system’s data stream.

Injection attacks increased by 200% in 2023, according to Gartner, reflecting the growing sophistication of biometric authentication bypass methods facilitated by deepfake technology.

On the same day that CVE-2025-21415 was disclosed, Microsoft also disclosed another elevation of privilege vulnerability affecting Microsoft accounts, tracked as CVE-2025-21396. This vulnerability was given a high CVSS score of 7.5 and stemmed from missing authorization, which could have been leveraged by an unauthorized attacker for privilege escalation.

Like the Azure AI Face flaw, CVE-2025-21396 was fixed without the need for customer action. While this flaw did not pose a threat to confidentiality or integrity, according to CVSS base score metrics, it could have led to a total denial of access to legitimate users. The flaw also had no publicly available exploit code and was not exploited in the wild, according to MSRC. Discovery of the flaw was credited to a researcher known as Sugobet.

“The level of resilience demonstrated by the response to this missing authentication function by Microsoft is a positive thing for digital consumers. This is the way technology is supposed to work and the way enterprise software vendors establish trust in the marketplace,” Saviynt Chief Trust Officer Jim Routh said in a comment to SC Media.
