By Byron V. Acohido
Stephen Klein didn’t just stir the pot. He lit a fire.
Related: Klein’s LinkedIn debate
In a sharply worded post that quickly went viral on LinkedIn, the technologist and academic took direct aim at what he called the “hype-as-a-service” business model behind so-called agentic AI. His critique was blunt: what the industry is selling as autonomous, goal-directed intelligence is, in most cases, little more than brittle prompt chains and hard-coded workflows dressed up in fancy language.
In Klein’s view, most current agentic systems are glorified wrappers — task orchestrators stitched together from APIs and large language models. They’re not “agents,” he argues, unless they demonstrate the hallmarks of true autonomy: self-directed goal setting, adaptive reasoning, memory, and the ability to operate across changing environments with minimal human intervention. Anything less? Marketing noise.
To his credit, Klein struck a nerve. His post drew a wave of applause from engineers and skeptics frustrated by the overreach of AI branding. But the backlash was telling, too. A quieter chorus — industry practitioners, startup builders, a few thoughtful researchers — responded not with denial, but with a question: even if most of today’s systems aren’t fully agentic, aren’t they still meaningfully new?
Cybersecurity use cases
That’s where Klein’s clarity turns brittle. Because while his academic rigor is valuable, his framing misses what’s actually happening — not in the hype decks, but on the ground.
At RSAC 2025, I spoke with over a dozen cybersecurity vendors quietly integrating LLM-powered decision support into core operations. Simbian is using GenAI to power a co-pilot that helps SOC analysts prioritize alerts in real time. Corelight is using it to sift network telemetry for subtle threat patterns. Are these “agents” in the Kleinian sense? Not quite. Are they meaningfully changing how work gets done in high-stakes, regulated environments? Absolutely.
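To make the pattern concrete, here is a minimal sketch of how LLM-assisted alert triage is typically wired up. This is illustrative only, not Simbian's or Corelight's actual implementation; the model name, prompt, and alert fields are assumptions. The point is that the model ranks and annotates alerts while the analyst stays in the loop.

```python
# Illustrative only: a bare-bones LLM triage pass over SOC alerts.
# Model name, prompt, and alert fields are assumptions, not any vendor's API.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alerts = [
    {"id": "a1", "source": "EDR", "signal": "lsass.exe memory read by unsigned binary"},
    {"id": "a2", "source": "IDS", "signal": "outbound DNS burst to newly registered domain"},
]

prompt = (
    "You are a SOC triage assistant. For each alert, return JSON with "
    "'id', 'priority' (1=critical..5=low), and a one-line 'rationale'.\n"
    + json.dumps(alerts)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",               # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)

print(resp.choices[0].message.content)  # the model only recommends; the analyst decides
```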
And it’s not just the security sector.
At NTT Data, I encountered one of the most grounded — and arguably most agentic — use cases yet. Their system currently uses traditional computer vision models to tag visual elements in live-stream video — helmet vs. no helmet, license plate vs. background. These pixel-level attributes drive Attribute-Based Encryption (ABE), which redacts content dynamically, preserving privacy while enforcing policy.
But what makes this truly next-gen is what comes next: NTT’s engineers are layering in Mistral, a compact, open-source vision-language model (VLM), locally fine-tuned to operate as a domain-specific AI agent. This is not a general-purpose chatbot. It’s an embedded model designed to interpret live video semantically — identifying nuanced events like theft or assault, flagging involved actors, and triggering differential encryption in real time.
In short: Mistral isn’t just adding inference — it’s becoming an embedded decision-maker. Trained on both public and private datasets, it brings contextual judgment to surveillance tasks that were once binary. That’s not hype. That’s a purpose-built agent system, architected for real-world autonomy under strict policy constraints.
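For readers who want to picture the control flow, here is a rough sketch of the attribute-to-policy step. The function names and policy labels are hypothetical, not NTT's implementation; the actual ABE scheme and the Mistral-based event classifier sit behind the two stubbed calls.

```python
# Hypothetical sketch of attribute-driven redaction; function names and
# policy labels are invented for illustration, not NTT's implementation.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "face", "license_plate", "no_helmet"
    confidence: float
    bbox: tuple       # (x, y, w, h) region in the frame

# Map pixel-level attributes to ABE access policies: who may decrypt what.
POLICY_BY_ATTRIBUTE = {
    "face": "role:privacy_officer",
    "license_plate": "role:traffic_investigator",
    "no_helmet": "role:site_safety",
}

def classify_event(frame) -> str:
    """Stub for the VLM agent (e.g. a fine-tuned vision-language model) that
    labels the scene semantically: 'theft', 'assault', 'normal', ..."""
    return "normal"

def encrypt_region(frame, bbox, policy: str):
    """Placeholder for attribute-based encryption of one image region."""
    pass

def redact_frame(frame, detections: list[Detection]):
    event = classify_event(frame)
    for det in detections:
        policy = POLICY_BY_ATTRIBUTE.get(det.label)
        if policy is None:
            continue
        # An incident widens who is allowed to decrypt the affected region.
        if event != "normal":
            policy += " or role:incident_responder"
        encrypt_region(frame, det.bbox, policy)   # ABE library call, stubbed
```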
Agentic AI citizens
Klein is right to call for clearer definitions. But in cases like this, the semantics are chasing something that’s already real — systems quietly reshaping how autonomy is engineered and applied.
Dr. Hidenori Tanaka, head of NTT’s Physics of AI group, takes this idea a step further. He envisions a future where LLM-enabled agents are not merely optimized for engagement, but purposefully designed with domain-specific personalities aligned to their intended use. Chatbots, he argues, are no longer inert tools; they’re new actors in the societal fabric—”citizens,” in his words—shaping human cognition through everyday interaction.
Tanaka’s central insight is that AI personality is not accidental. It’s engineered—through system prompts, training data, and corporate incentives. And this, he warns, creates macro-level effects: if AI is universally optimized for comfort or virality, it risks reinforcing polarization and eroding public trust. Instead, he calls for a scientific discipline that can translate open-ended moral questions—What should an AI value? What does it mean to be kind?—into measurable benchmarks and controllable behaviors.
His goal is not to anthropomorphize machines but to embed deliberate design into how agents evolve. He wants to transform LLM development from an ad hoc enterprise into a grounded, interdisciplinary science—one rooted in physics, psychology, and ethics, and capable of cultivating agents that support, rather than distort, our shared cognitive space.
The coining of “agentic AI”
The truth is, the term agentic AI didn’t begin in academia. It crept into the lexicon in mid-2024, as the generative wave matured. With tools like LangChain, OpenAI’s Agents SDK, and AutoGen, developers began building systems that could remember context, select tools, pursue goals, and adapt their next steps based on real-world outcomes. The industry needed language to describe what felt like a new capability — and agentic sounded right.
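What those toolkits have in common is a simple loop: the model looks at the goal and the accumulated context, picks a tool, observes the result, and decides what to do next. A stripped-down, framework-free sketch (with the model call stubbed out) looks roughly like this:

```python
# A generic agentic loop, framework-free; call_model() is a stub standing in
# for whatever LLM endpoint a real system (LangChain, AutoGen, etc.) would use.
def call_model(goal: str, memory: list[str]) -> dict:
    """Return the model's next action, e.g. {"tool": "search", "input": "..."}
    or {"tool": "finish", "input": "<answer>"}. Stubbed for illustration."""
    return {"tool": "finish", "input": "done"}

TOOLS = {
    "search": lambda q: f"results for {q}",
    "summarize": lambda text: text[:140],   # toy tools for illustration
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    memory: list[str] = []                  # context carried across steps
    for _ in range(max_steps):
        action = call_model(goal, memory)   # model selects the next tool
        if action["tool"] == "finish":
            return action["input"]
        result = TOOLS[action["tool"]](action["input"])
        memory.append(f'{action["tool"]}({action["input"]}) -> {result}')
    return "stopped: step budget exhausted"

print(run_agent("Summarize the latest threat intel on domain X"))
```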
Thought leaders like Andrew Ng — a pioneering AI educator and founder of DeepLearning.AI — helped popularize the term agentic AI in 2023 and 2024. Through his newsletters, courses, and public commentary, Ng framed agentic systems as LLM-powered applications capable of goal-seeking behavior and multi-step coordination — a framing that gave the term significant traction among developers and enterprise adopters. By late 2024, it was everywhere: product sheets, panel discussions, investor pitches.
Critics like Klein saw this as definitional drift. But I’d argue it’s closer to natural language evolution — messy, organic, shaped by use, not decree.
Hard lines vs. gradient adoption
Which brings us to the present tension: academic purists want hard lines. Practitioners are working in gradients.
And while we absolutely need to push back on misleading claims, especially when real-world trust and safety are on the line, we should be careful not to flatten the conversation into a binary. Much of what's now labeled agentic AI may fall short of Klein's threshold, but that doesn't make it trivial.
The shift is real. We’re moving from tools that merely respond to input, to systems that help initiate, coordinate, and execute. It’s not artificial general intelligence. It’s not even full autonomy. But it is a different texture of software — and that matters.
In a recent essay I called Wither Genius?, I described how this shift is crowding the middle: the space once occupied by mid-tier professional fluency — the technical writer, the financial analyst, the policy drafter — is being compressed by LLMs that can now emulate structure and tone with alarming fluency. And yet, the upper and lower bounds of creativity — the instinct to ask a new question, the intuition to challenge the prompt — remain deeply human. The kind of genius expressed by Truman Capote’s narrative nuance, Rachel Springs’ inventive social worldbuilding, or Frank Herbert’s philosophical scaffolding in Dune is still far beyond what language models can conjure. That frontier remains ours — for now.
What we’re seeing is scaffolding being laid for something new. That scaffolding might not meet every checkbox on Klein’s autonomy rubric, but it’s already supporting workflows, insights, and decision models that didn’t exist two years ago.
A new kind of agency
More importantly, it’s enabling a new kind of agency — not just in machines, but in people.
You see it in the daikon farmer tuning a Hugging Face model to automate irrigation. In the local teacher tweaking GenAI lesson plans for her students — but refusing to let the model track them. In the musicians who launched a streaming radio station in my coastal hometown, co-composing their scripts with AI.
None of this fits neatly into Klein’s frame. But it’s happening. And it’s powerful.
So yes — let’s call out overhyped claims. Let’s raise the bar for what we mean by agentic. But let’s also recognize the deeper transformation underway. This is not just a semantic debate. It’s the early friction of a new human-machine relationship — one that’s still taking shape.
Klein wants to define the term. The rest of us are trying to define the future.
Let’s not confuse the two. I’ll keep watch – and keep reporting.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)