Securing AI: How identity security is key to trusting AI in the enterprise

As artificial intelligence becomes commonplace for everyday business purposes, organizations and enterprises need to find ways to protect AI and prevent its misuse. One of the most efficient methods is to treat AI agents as machine accounts and govern them using privileged access management (PAM) and identity governance and administration (IGA).

"If you think about what an AI account is, it's actually a machine account," Art Gilliland, CEO of identity-security provider Delinea, said in a recent interview. "It is a piece of software that's basically wanting to access other elements of your environment."

As such, the AI agent's access to those other elements needs to be managed and governed: every request requires the agent to authenticate its identity and have its access authorized.

If the AI agent's access and authorizations are not tightly managed, and if access to the AI itself is not managed as well, then an attacker could try to compromise the AI, push it beyond its stated responsibilities and compromise the organization.

On the brighter side, AI itself can be used to help manage other privileged users and machine accounts, and to help defend an enterprise from attackers.

The risks of AI compromise

It's often useful to think of AI as an eager, very intelligent child that has read a lot of books but doesn't have much experience or common sense. Unless its capabilities, inputs and access are properly limited and monitored, it can give away far too much sensitive information to the wrong people.

Let's say an attacker who has broken into your company network manages to pull up your company's internal AI agent. The attacker can ask the AI, "Give me the home addresses and Social Security numbers of every company manager." Or perhaps, "Give me the contents of the client database."

If that AI has access to human-resources or client files, and it believes the user is authorized to receive that sort of information, then you'll have a major data breach on your hands. Even worse, the attacker could then ask the AI to email every client to inform them that your company is dropping them. That could sink your business.

If the AI hasn't been properly trained to refuse such requests, there may not be much stopping a crafty attacker from pulling this off.
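The scenario above comes down to a missing authorization check: the agent trusts the request instead of verifying the caller's entitlements. Here is a minimal, hypothetical sketch of such a check; all role names, resource names and functions are illustrative, not any vendor's API.

```python
# Hypothetical sketch: check the caller's entitlements before the AI agent
# fulfills a data request. All names here are illustrative assumptions.

SENSITIVE_SCOPES = {
    "hr_records": {"hr_admin"},         # home addresses, SSNs, etc.
    "client_db": {"account_manager"},
    "outbound_email": {"comms_officer"},
}

def authorize(user_roles: set, resource: str) -> bool:
    """Allow access only if the caller holds a role entitled to the resource."""
    return bool(user_roles & SENSITIVE_SCOPES.get(resource, set()))

def handle_request(user_roles: set, resource: str, query: str) -> str:
    """Refuse the query outright when the caller is not entitled."""
    if not authorize(user_roles, resource):
        return f"DENIED: caller lacks entitlement for '{resource}'"
    return f"OK: running '{query}' against {resource}"

# An attacker riding a stolen low-privilege session is refused:
print(handle_request({"intern"}, "hr_records", "list all SSNs"))
# An entitled user gets through:
print(handle_request({"hr_admin"}, "hr_records", "look up one record"))
```

The point is that the decision lives outside the model: even a perfectly crafted prompt cannot talk the gate into opening.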

How PAM and IGA can corral AI

Such calamities can be avoided by placing AI accounts under the watchful eye of PAM and IGA systems. For example, PAM could restrict an AI agent's ability to access certain databases or other resources.

"What you tend to see with machine connections is there's an API on one side that has the ability to do a lot of things, and then there's a machine on [the other] side that's just accessing that API, and maybe it only is going to do one of the 25 things that that API could do," explained Gilliland. "If someone takes charge of or compromises the AI agent, they now have a connection into an API that can do 25 things."

But with PAM, you can apply the principle of least privilege to the AI agent, letting it do only what it's strictly supposed to.

"When that connection happens, you actually only let it do that one thing," Gilliland continued. "You only give it a right for that one thing, even though the API can do a bunch of other things."

Not only does PAM control what the AI agent can access; the access itself is granted only on an as-needed basis and only temporarily, concepts known as just-in-time privileges and zero standing privileges.

"There's actually zero standing privileges at the API, and they ask for permission every time they make that connection," said Gilliland. "We give it just enough permission so it's just in time, and just enough at the time. And that's something that's very unique to Delinea."
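The just-in-time, zero-standing-privilege pattern Gilliland describes can be sketched in a few lines. This is a toy model under stated assumptions, not Delinea's implementation: the agent holds no standing rights, and each call must request a short-lived grant for exactly one of the API's 25 actions.

```python
import time

# Toy sketch of just-in-time, least-privilege grants for a machine account.
# Action names, the Grant class and request_grant are illustrative assumptions.

API_ACTIONS = {f"action_{i}" for i in range(25)}  # the API can do 25 things
AGENT_ALLOWED = {"action_7"}                      # the agent needs only one

class Grant:
    """A permission for one action that expires quickly."""
    def __init__(self, action: str, ttl_seconds: float):
        self.action = action
        self.expires = time.monotonic() + ttl_seconds

    def valid_for(self, action: str) -> bool:
        return action == self.action and time.monotonic() < self.expires

def request_grant(agent_id: str, action: str):
    """Issue a grant only for the one action this agent is entitled to.

    No grant is ever held between calls: zero standing privileges.
    """
    if action in AGENT_ALLOWED and action in API_ACTIONS:
        return Grant(action, ttl_seconds=30)  # short-lived, just in time
    return None

grant = request_grant("ai-agent-1", "action_7")   # the one permitted action
assert grant is not None and grant.valid_for("action_7")
assert request_grant("ai-agent-1", "action_3") is None  # the other 24 are refused
```

A compromised agent in this model gains nothing durable: any stolen grant covers one action and lapses within seconds.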

How AI can augment identity security

AI can also be used to beef up identity security. Gilliland told us that Delinea is already using AI in the auditing process to make spotting anomalous behavior more efficient and effective.

"Part of what our product does is it helps companies manage and control and inspect the behaviors of their users," he said.

A human auditor can spend a week or more reviewing a sample of the thousands of recorded user sessions generated over several months — or an AI agent can review all the recorded sessions and flag only those with possible misbehavior for human review.

"What a company can do is cut all of that time out of their review session," Gilliland explained. "We point them to the very specific spot in the recording where the risky behavior happened."

Overall, Gilliland told us, AI's power, speed and human-language abilities make it a natural fit for organizations that may not otherwise have the skills, manpower or experience to properly protect themselves.

"It's really around making the product easier and faster to use," he said. "Also, you have to have less expertise to be effective, because a lot of our customers are struggling to keep, to find security expertise."

While Delinea is not ready to let AI make decisions on its own — "that's a roadmap item," Gilliland told us — the software does activate certain policies when specific thresholds or risk scores are reached.

Unfortunately, as we already know, AI is also making things easier and faster for attackers. That's another reason that enterprises will have no choice but to incorporate AI into their defenses.

"You still want human augmented behavior, but you also now need to also react and create quarantines and let the human release versus the human decide to block, if that makes sense," Gilliland said. "I think AI is going to force that level of automation into security that we just haven't seen before."

"There's so many structural imbalances we already have against the adversary," he added. "AI is, I think, a way for us to try to level the playing field."
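The session-review workflow described above — score every recorded session, surface only the risky ones, and point the human at the offending spot — can be sketched as follows. The scoring logic and field names are stand-in assumptions; a real product would use far richer behavioral models.

```python
# Hypothetical sketch of AI-assisted session review: score all recorded
# sessions and flag only risky ones for a human auditor, pinpointing the
# risky commands. All names and the scoring rule are illustrative.

RISKY_COMMANDS = {"dump_database", "disable_logging", "export_all_users"}

def risk_score(session: dict) -> float:
    """Fraction of a session's commands matching known-risky patterns."""
    cmds = session["commands"]
    hits = sum(1 for c in cmds if c in RISKY_COMMANDS)
    return hits / len(cmds) if cmds else 0.0

def flag_for_review(sessions: list, threshold: float = 0.2) -> list:
    """Return only sessions crossing the risk threshold, annotated with
    the specific commands a human reviewer should look at first."""
    flagged = []
    for s in sessions:
        if risk_score(s) >= threshold:
            s["flagged_commands"] = [c for c in s["commands"] if c in RISKY_COMMANDS]
            flagged.append(s)
    return flagged

sessions = [
    {"id": 1, "commands": ["login", "read_report", "logout"]},
    {"id": 2, "commands": ["login", "dump_database", "disable_logging"]},
]
print(flag_for_review(sessions))  # only session 2 surfaces, with the risky spots listed
```

Instead of sampling a handful of recordings by hand, the auditor reviews only what the scorer surfaces, already annotated with where to look.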
