By Jim Alkove
The rise of AI co-pilots is exposing a critical security gap: sensitive data sprawl and excessive access permissions.
Until now, lackluster enterprise search capabilities kept many security risks in check—employees simply couldn’t find much of the data they were authorized to access.
But Microsoft Copilot changes the game, turbocharging enterprise search and surfacing sensitive information that organizations didn’t realize was exposed.
Many organizations take comfort in the fact that Copilot won't share data externally and respects existing user permissions. That comfort is misplaced. The real problem isn't whether Copilot stays within its lane; it's that the lane is far too wide. If employees already have excessive access, Copilot simply makes that exposure more visible.
Patchwork fixes fall short
This reality is hitting hard. A recent Gartner survey found that 40% of IT managers have delayed Copilot deployments due to security concerns. I’ve spoken with numerous CIOs and CISOs who say these issues are directly impacting rollout plans at major enterprises.
Microsoft’s response? Instead of pushing organizations toward a true “least privilege” model, it suggests running limited Copilot trials to see what data gets exposed. That’s a band-aid solution, not a fix.
Copilot isn’t the problem—it just amplifies an existing one. The real issue is the outdated, over-permissioned access models that have plagued enterprises for years.
Over-provisioned access
The risks of excessive access are nothing new. Identity-related issues have become the leading driver of security breaches in recent years. But many organizations still lack modern tools to manage access effectively.
Consider this: most organizations can’t answer basic questions about their own data security, including:
• Who has access to what?
• Where did they get it?
• How are they using it?
• Should they even have it?
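To make concrete what answering those questions takes, here's a minimal sketch in Python. It assumes hypothetical CSV exports of access grants and activity logs; the file and field names are invented for illustration, and real IAM and audit systems expose this data through their own APIs:

```python
import csv
from collections import defaultdict

# Hypothetical exports (names are illustrative, not a real product's schema):
# grants.csv   -> user, resource, granted_by
# activity.csv -> user, resource, last_accessed
def load_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

grants = load_rows("grants.csv")
activity = load_rows("activity.csv")

# "How are they using it?" -- index last-access times by (user, resource).
last_access = {(r["user"], r["resource"]): r["last_accessed"] for r in activity}

# "Who has access to what?" and "Where did they get it?" -- group grants by user,
# carrying the grant source and usage. Grants that were never exercised are the
# first candidates for "Should they even have it?"
by_user = defaultdict(list)
for g in grants:
    key = (g["user"], g["resource"])
    by_user[g["user"]].append(
        (g["resource"], g["granted_by"], last_access.get(key, "never used"))
    )

for user, entries in sorted(by_user.items()):
    print(user)
    for resource, source, used in entries:
        print(f"  {resource} (granted via {source}, last used: {used})")
```

Even this toy join answers most of the four questions. At enterprise scale, across hundreds of cloud and SaaS systems with no common export format, doing it by hand becomes intractable.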
The problem stems from legacy IAM systems and manual, piecemeal processes—entirely inadequate for today’s decentralized cloud, SaaS sprawl, and AI-driven environments.
AI’s promise vs. risk
AI thrives on data, but that same data introduces risk. One of the biggest threats isn’t AI itself—it’s the over-provisioned access policies that leave organizations vulnerable. Microsoft’s own data shows that 95% of granted permissions go unused. That’s the opposite of least privilege.
Efforts to classify and restrict sensitive data help, but they don’t address the underlying issue: employees having more access than they need in the first place.
Despite these risks, businesses are racing to adopt AI, even as leadership ranks privacy and security among their top concerns. Yet without a fundamental shift in access management, organizations will keep exposing themselves to unnecessary risk.
Securing AI going forward
It’s time for organizations to move beyond the “check-the-box” approach to access security. Implementing a true least privilege model—where employees only have access to the data they actually need—isn’t optional anymore. It’s a necessity.
Modern IAM solutions must provide visibility, intelligence, and automation to restructure permissions and monitor AI-driven activity. Without these foundational steps, security risks will only grow alongside AI’s expanding capabilities.
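As one illustration of what that automation could look like, here is a minimal sketch of the "intelligence" step: flagging grants that have sat idle past a policy threshold. The data shapes and the 90-day threshold are assumptions for illustration; a real deployment would pull grants from and revoke through the IAM platform's own API, ideally behind a review step rather than auto-revoking:

```python
from datetime import datetime, timedelta

# Hypothetical grant records; a real IAM system would supply these via its API.
grants = [
    {"user": "alice", "resource": "finance-share", "last_used": datetime(2024, 1, 5)},
    {"user": "bob", "resource": "hr-records", "last_used": None},  # never used
]

IDLE_LIMIT = timedelta(days=90)  # illustrative threshold; tune per policy

def stale(grant, now):
    """A grant is stale if it was never used or has sat idle past the limit."""
    return grant["last_used"] is None or now - grant["last_used"] > IDLE_LIMIT

now = datetime.now()
for g in grants:
    if stale(g, now):
        # In practice: open a review ticket or call the IAM platform's
        # revocation API, rather than printing.
        print(f"flag for revocation: {g['user']} -> {g['resource']}")
```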
The choice is clear: either organizations take control of access security now, or AI will expose their weaknesses for them.
About the essayist: Jim Alkove is co-founder and CEO of Oleria. He led security at Salesforce, Microsoft, and Google Nest, advises startups like Aembit and Snyk, and holds 50 U.S. patents. He earned an electrical engineering degree from Purdue University.