By Byron V. Acohido
SAN FRANCISCO — The cybersecurity industry showed up here in force last week: 44,000 attendees, 730 speakers, 650 exhibitors and 400 members of the media flooding the Moscone Center in the City by the Bay.
Related: RSAC 2025 by the numbers
Beneath the cacophony of GenAI-powered product rollouts, the signal that stood out was subtler: a broadening consensus that artificial intelligence — especially the agentic kind — isn’t going away, and that intuitive, discerning human oversight will be essential at every step.
Let’s start with Dr. Alissa “Dr. Jay” Abdullah, Mastercard’s Deputy CSO, who gave a keynote address at the CSA Summit, hosted by the Cloud Security Alliance at RSAC 2025. She spoke passionately about being a daily power user of AI, recounting an experiment in which she tried to generate a collectible 3D action figure of herself using multiple GenAI platforms.
Her prompts were clear, detailed, and methodical — yet the results were laughably off-base. The takeaway? Even well-crafted prompts can be derailed by flawed models or skewed training data: despite consistent input, none of the models managed to reliably portray her likeness or professional context.
AI needs a human chaperone
This wasn’t just a quirky user experience — it underscored deeper concerns about bias, hallucination, and the immaturity of enterprise-grade AI. Abdullah’s takeaway: lean in, yes. But test relentlessly, and don’t take the output at face value.
That kind of real-world friction — where AI promise meets AI reality — showed up again and again in RSAC’s meatier panels and threat briefings. The SANS Institute’s “Five Most Dangerous New Attack Techniques” panel highlighted how authorization sprawl is giving attackers frictionless lateral movement in hybrid cloud environments. The fix? Better privilege mapping and tighter identity controls — areas ripe for GenAI-powered solutions, if used responsibly.
Similarly, identity emerged as RSAC’s dominant theme, fueled by Verizon’s latest Data Breach Investigations Report showing credential abuse remains a top attack vector. Identity, as Darren Guccione of Keeper Security framed it, is the modern perimeter. Yet AI complicates the landscape: it can accelerate password cracking even as it enables smarter detection. Once again, the takeaway was clear — context, not hype, must drive deployment.
Meanwhile, the emotional centerpiece of the conference came from Chris Krebs, the embattled former CISA director. Facing political heat at home, Krebs nonetheless took the stage alongside Jen Easterly and Rob Joyce to reflect on fictional and real-world cyber catastrophes. His call to arms was unflinching: “Cybersecurity is national security. Every one of you is on the front lines of modern warfare.”
And he’s right. Because behind the RSAC glitz lies a gnawing truth: complexity has outpaced human capacity. AI may be the only way defenders can keep up — if regulators allow it, and if we wield it wisely.
Customer-ready — on the fly
For all the stage talk about escalating threats, tightening regulations, and the urgent need to shore up identity defenses, it was the hallway conversations — the unscripted, sometimes offbeat stories from seasoned professionals — that offered the clearest glimpse of what comes next.
To wit: just a few moments after Mastercard’s Abdullah gave her keynote at the CSA Summit, I happened to run into a senior sales rep from a mobile app security firm, whom I’ve known for a few years. I asked him if he was using GenAI, and he described how he has trained a personal agentic assistant to help field technical questions from prospects.
This veteran sales rep described how he uses ChatGPT to synthesize technical answers and generate customer-ready language on the fly. He stressed that he vets every GenAI output rigorously — especially answers destined for customers with engineering backgrounds. Any hint of a hallucinated response could destroy credibility he’s spent months building. So he validates, revises and retrains constantly. It’s not about cutting corners; it’s about enhancing fluency without sacrificing integrity, he told me.
Natively supported GenAI
I also had an enlightening discussion with Tim Eades, CEO of year-old Anetac, a GenAI-native platform focused on real-time identity risk, who offered sharp insight into why newer vendors have an inherent edge. Older enterprise systems, he explained, are like heritage homes that need to be put on stilts before the foundation can be replaced.
Retrofitting LLMs onto legacy infrastructure is not just expensive; it can be futile without rethinking data pipelines and user interfaces from the ground up. Because Anetac was built in the GenAI era, Eades told me, they can natively support real-time data integration, dynamic prompt generation, and intuitive user-level customization. This agility doesn’t just reduce hallucinations — it accelerates meaningful innovation, Eades asserts.
Curated knowledge sets
Meanwhile, Jason Keirstead, Co-founder and VP of Security Strategy at Simbian, a GenAI-native platform automating alert triage and threat investigation, walked me through how his team integrates LLMs into security operations workflows. We met in the nearby financial district, inside the high-rise offices of Cota Capital, one of Simbian’s early investors.
Unlike platforms that simply bolt on a chatbot and hope users will “talk to the AI,” Simbian embeds agentic AI directly into workflows—handling alert triage, threat hunting, and vulnerability prioritization behind the scenes, Keirstead told me. The user never interacts with a prompt window. Instead, Simbian’s internal RAG (retrieval-augmented generation) system, combined with extensive prompt libraries tuned for cybersecurity use cases, processes each alert and surfaces recommended actions automatically.
Keirstead didn’t downplay the complexity of making this work. While LLMs can still hallucinate, he emphasized that Simbian avoids generic, open-ended use cases in favor of tightly scoped applications. By combining curated knowledge sets, domain-specific tuning, and hands-on collaboration with early adopters, the company has engineered a system designed to deliver consistent, trustworthy results.
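For the technically inclined, here’s a rough sketch of the general pattern Keirstead described: retrieve curated context, fill a tightly scoped prompt, and let the pipeline — not the user — drive the model. To be clear, this is my own illustration, not Simbian’s code; the toy knowledge base, the retrieve() matcher, and the call_llm() stub are hypothetical stand-ins.

```python
# Illustrative sketch of a RAG-style alert triage flow. Everything here is a
# hypothetical stand-in, not Simbian's actual implementation.

KNOWLEDGE_BASE = [
    "Impossible-travel logins often indicate credential abuse; disable the account and force a reset.",
    "Repeated failed sudo attempts can signal local privilege escalation; isolate the host.",
]

TRIAGE_PROMPT = (
    "You are a SOC triage assistant. Use ONLY the context below.\n"
    "Context:\n{context}\n\nAlert:\n{alert}\n\n"
    "Return severity, likely technique, and one recommended action."
)

def retrieve(alert_text: str, top_k: int = 2) -> list[str]:
    # Toy keyword-overlap retriever; a production RAG system would use
    # embeddings and a vector store instead.
    words = set(alert_text.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: -len(words & set(doc.lower().split())))
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"[model response to {len(prompt)}-char scoped prompt]"

def triage(alert_text: str) -> str:
    context = "\n".join(retrieve(alert_text))                  # RAG step
    prompt = TRIAGE_PROMPT.format(context=context, alert=alert_text)
    return call_llm(prompt)  # surfaced as a recommendation, never a chat window

print(triage("Login for jdoe from two countries within five minutes"))
```

The point of the pattern is the scoping: the model only ever sees curated context and a narrow question, which is how tightly bounded applications keep hallucination in check.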
The 100x effect
A similar dynamic was at play at Corelight, a network detection and response provider focused on high-fidelity telemetry. I spoke with CEO Brian Dye, who underscored how agentic AI is beginning to boost threat detection — but only when closely guided. His team uses LLMs to streamline analysis of noisy telemetry and surface relevant insights faster.
Yet Dye cautioned that simply injecting a chatbot doesn’t cut it; analysts still need domain expertise to steer the tool, validate results, and keep it from introducing blind spots. It’s the human-machine combo, he emphasized, that delivers real value.
Meanwhile, John DiLullo, CEO of Deepwatch, a managed detection and response firm focused on high-fidelity security operations, framed GenAI as a conversation accelerator — but only when harnessed with clarity and intent. He described how top-tier cybersecurity veterans are using it not to replace judgment but to distill technical nuance for broader audiences. This aligns with what some are calling the ‘100x effect’ — experienced practitioners using GenAI not to automate away their judgment, but to scale their expertise and speed of execution.
Must-have skill: prompt engineering
Jamison Utter, security evangelist at A10 Networks, a supplier of network performance and DDoS defense technologies, was especially candid. He explained how attackers are already using LLMs to write custom malware, simulate attacks, and bypass traditional defenses — at speed and scale. On defense, A10 has begun tapping GenAI to analyze DDoS telemetry in real time, dramatically reducing time-to-insight. The payoff? Analysts who know how to prompt effectively are seeing gains, but only after substantial trial-and-error. His bottom line: prompt engineering is now a frontline skill.
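What does that skill look like in practice? Here’s a simple illustration of the discipline Utter was pointing at: aggregate the telemetry first, then ask the model a narrow question. The flow records and the summarize_flows() helper are my own inventions for illustration, not A10’s tooling.

```python
# Illustrative only: structuring a prompt around pre-aggregated DDoS
# telemetry rather than raw logs. Not A10's actual pipeline.
from collections import Counter

# Toy flow records: (source IP, destination port, packets per second).
flows = [
    ("203.0.113.5", 53, 90_000),
    ("203.0.113.7", 53, 85_000),
    ("198.51.100.2", 443, 120),
]

def summarize_flows(flows: list[tuple[str, int, int]]) -> str:
    # Pre-aggregating before prompting keeps the input small and structured,
    # which cuts hallucination risk versus pasting raw telemetry at the model.
    top_ports = Counter(port for _, port, _ in flows).most_common(2)
    peak = max(pps for _, _, pps in flows)
    return f"top destination ports: {top_ports}; peak rate: {peak} pps"

prompt = (
    "You are a DDoS analyst. From this aggregated telemetry summary, state "
    "the most likely attack type, one mitigation step, and anything that "
    "does not fit the pattern:\n" + summarize_flows(flows)
)
print(prompt)  # this prompt would then go to whatever model the team uses
```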
Anand Akela, CMO of Acalvio, a deception-driven threat detection company, sketched out a different angle: using AI not to interpret threats, but to camouflage critical assets. Acalvio blends traditional deception tech with AI-powered customization — generating realistic honeypots, honeytokens, and decoy credentials at scale. The idea is to use AI’s generative muscle to outwit AI-generated threats. Akela admitted they don’t rely on full-blown LLMs yet, but said their roadmap includes using GenAI to tailor decoy strategies dynamically, based on evolving attack vectors.
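To give a flavor of what decoy generation at scale can look like, here’s a bare-bones sketch. The naming scheme is invented for illustration and says nothing about Acalvio’s actual method.

```python
# Minimal sketch of programmatic honeytoken generation; the naming
# convention here is hypothetical, not Acalvio's.
import secrets
import string

def decoy_credential(base_name: str) -> tuple[str, str]:
    # A decoy should blend in with real naming conventions, yet any attempt
    # to use it is by definition hostile, so each one doubles as a tripwire.
    suffix = "".join(secrets.choice(string.digits) for _ in range(2))
    username = f"svc-{base_name}{suffix}"
    password = secrets.token_urlsafe(12)
    return username, password

# Seed a batch of decoys; in a real deployment each would be planted in
# directories, config files, or credential stores and monitored for any use.
for base in ("backup", "jenkins", "oracle"):
    print(decoy_credential(base))
```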
Guided speed, common sense
At Cyware, a cyber fusion platform unifying threat intelligence and incident response, Patrick Vandenberg, Senior Director of Product Marketing, emphasized speed. Their threat intelligence chatbot reduces days of manual triage to seconds, surfacing relevant patterns and flagging threats for human review.
But it’s not autopilot. The system only works well when guided by seasoned analysts who understand what to ask for — and how to interpret the results. It’s the classic augmentation model: the AI expands reach and efficiency, but the analyst still holds the reins.
Willy Leichter, CMO of PointGuard AI, a startup focused on visibility and risk governance for GenAI use, captured the unease many feel. His firm helps companies discover and govern shadow AI projects — especially open-source tools and rogue models flowing into production. The market, he said, hasn’t had its “SolarWinds moment” for GenAI misuse yet, but everyone’s bracing for it. His message to worried CISOs: start with visibility, then layer on risk scoring and usage controls. And don’t let urgency erase common sense.
Driving resilience — not risk
Across each of these conversations, a common thread emerged: we’re beyond the point of deciding whether to use GenAI. The question now is how to use it well. The answer seems to hinge not on the models themselves, but on the context in which they’re deployed, the clarity of the prompts, and the vigilance of the humans overseeing them.
Agentic AI is here to stay. It’s versatile, powerful, and rapidly evolving; it doesn’t wait to be prompted — it’s goal-driven, context-aware, and built to act. But like any high-performance engine, it demands an attentive driver. Without careful prompting, constant tuning, and relentless validation, even the most promising assistants can steer us off course. That tension — powerful augmentation versus potential misfire — defined the conference.
RSAC 2025 didn’t just showcase agentic AI’s momentum; it clarified the mandate. This isn’t about chasing silver bullets. It’s about embracing a tool that demands human vigilance at every turn.
If we want AI to drive resilience — not risk — we’ll need to stay firmly in the driver’s seat. I’ll keep watch and keep reporting.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)