MY TAKE: Beyond agentic AI mediocrity — the real disruption is empowering the disenfranchised

By Byron V. Acohido

Is agentic AI accelerating mediocrity? Plenty of folks on LinkedIn seem to think so.

Related: The 400th journalist

A growing chorus of academics, tech workers, and digital culture watchers is pointing out the obvious: the more we prompt, the more we flatten. Across marketing, B2B, and even journalism, GenAI is churning out clean, inoffensive, structurally sound content that says almost nothing. It’s regression to the mean, algorithmically engineered.

But that’s only half the story. The other half is unfolding more quietly—and it’s far more disruptive.

While much of the world defaults to prompt-fed sameness, something more radical is happening beneath the surface. Agentic AI is lowering the barrier to entry for serious research. It’s helping small voices craft big ideas. It’s giving the under-resourced tools to compete with the over-credentialed.

This isn’t just about sparking creativity—it’s about enabling agency. Across classrooms, basements, retirement homes, and refugee camps, people who were never invited to the table are beginning to build their own.

Personalization seduction

Meanwhile, the professional class is drowning in a different kind of transformation—one that promises personalization, but delivers uniformity.

In the rush to embrace GenAI, companies across industries have flooded the zone with AI-shaped content—internal decks, outreach emails, whitepapers—hoping for differentiation through automation.

GenAI was supposed to be a leap forward—a way to personalize intelligence, accelerate insight, and elevate the average professional. But a strange thing is happening: the more people prompt these models, the more the results blur into sameness.

Stephen Klein, CEO of Curiouser.AI, recently captured this paradox in a viral LinkedIn post: “The more you prompt, the more you regress to the mean.” His point? That prompting feels like personalization, but actually leads to mass-produced output. Instead of sharpening our edge, we may be sanding it down—unaware we’re doing it.

Yet that’s just the surface layer. What’s more revealing is what happens when someone tries to break the pattern—when they use agentic AI to shape a well-structured, original insight. It often gets rejected. Not because it’s wrong, but because it sounds too clear. Too confident. Too different from the muddled content we’ve come to expect.

To be clear, the regression I’m referring to here is not just the legacy blandness of marketing collateral, but a new, more insidious sameness driven by over-reliance on GenAI prompting. We’ve moved from predictable corporate boilerplate to mass-produced GenAI content that flattens tone, language, and ideas into indistinguishable output. And ironically, when something manages to break that pattern—to stand out with real insight and narrative clarity—it’s often rejected precisely because it’s mistaken for the GenAI-generated fluff it was designed to transcend.

Regression to the mean

This regression shows up everywhere. I see it in B2B thought leadership drafts passed off by PR teams that are clearly engineered through SEO-fed prompt templates. Everyone’s content starts sounding the same: “next-gen, data-driven, transformation-ready.”

I see it on LinkedIn, where posts claiming to offer bold takes on digital transformation all carry the same flattened cadence, polished to the point of meaninglessness.

And I see it in emails—sometimes with the actual ChatGPT icon bullets still embedded in the pitch. A dead giveaway that someone copied and pasted straight from the prompt window without even bothering to disguise it. It’s not just lazy; it signals a lack of ownership over the message. The model might be powerful, but the thinking is paper-thin.

This contrast reveals something else entirely: a growing divide between those regressing to the GenAI mean—and those quietly escaping it. Let’s call it “Editorial Ascent.” It’s the inverse of regression. It’s what happens when agentic AI is combined with seasoned editorial intuition to produce clarity, originality, and decision-grade insight.

This growing divide matters. While much of B2B continues to flatten into GenAI monotony, a smaller set of organizations is quietly charting a different course—using AI not to replace judgment, but to enhance it.

Here are three real-world examples of Editorial Ascent in motion:

Cybersecurity adoption

At RSAC 2025, several companies modeled this hybrid approach to strong effect:

Corelight showcased its Investigator SaaS NDR platform, designed to automate alert triage and threat hunting across complex environments. What made its approach stand out wasn’t just the GenAI tooling—it was the guardrails. Corelight engineers built the platform with deep SOC analyst input, ensuring that automation amplified, rather than overrode, expert workflows.

Simbian demonstrated an agentic platform that embeds retrieval-augmented generation (RAG) into daily SOC workflows. Rather than giving analysts a chatbot, Simbian built AI agents that sit invisibly inside threat triage, vulnerability management, and hunting operations—doing the work of interpreting alerts and suggesting actions without demanding a new interface. The human remains in control, but the machine clears cognitive fog.

Anetac CEO Tim Eades echoed that adaptability mindset: native GenAI companies have a structural advantage because they’re architected for flexibility from the start. In contrast, legacy security vendors often struggle to retrofit brittle product stacks that weren’t designed for the fluid, iterative demands of agentic AI. As Eades put it, success now depends on the ability to adapt—not just automate.

This emphasis on adaptability extends beyond security. Salesforce, the CRM giant, has adopted GenAI internally not to replace its sales and marketing teams, but to support them. Its AI copilots help craft personalized email outreach, recommend next-best actions, and analyze customer signals—but every step is structured for human review. Rather than chasing full automation, Salesforce built its GenAI stack to enhance the decision cadence of experienced reps—a textbook case of agentic augmentation in action.

What we’re witnessing is a bifurcation. On one side, AI regression—the flattening of communication, the replacement of depth with prompt-fed output, the comfort of sounding like everyone else. On the other, Editorial Ascent—the effort to wield AI not as a ghostwriter, but as a collaborator, amplifying judgment without sacrificing nuance.

The examples from RSAC 2025—and from companies like Salesforce—aren’t just product stories. They’re signals that a new mode of communication is emerging. One where credibility isn’t manufactured by templates, but forged through clarity, context, and synthesis.

Ascent supersedes regression

This is the future of trust in B2B: not GenAI at the helm, but experienced minds guiding AI toward meaningful, differentiated output. Those who embrace this hybrid model—those who invest in editorial ascent—won’t just stand out. They’ll set the agenda.

And this divide won’t stop at B2B. We’re already beginning to see the same editorial tension spill into journalism, policymaking, and public discourse—where AI may shape the message, but trust still hinges on who frames it.

Because yes, Klein is right: prompting alone regresses to the mean. But the real danger isn’t just homogenization. It’s a failure to recognize the signal when it breaks through.

When something is dismissed as “too AI” simply because it’s well-crafted, it tells us everything we need to know: the models may run on probability—but authority still belongs to human judgment.

That’s why the real GenAI story isn’t just about regression to the mean—it’s about divergence at scale. On one end, you’ve got legacy institutions chasing efficiency, flattening their own voice in the process. On the other, a quiet uprising: individuals using these same tools to create meaning, forge identity, and punch above their weight.

I see it firsthand. I’m co-writing a novel with my 7-year-old grandson, Kekoa Kalani Acohido. He dreams the scenes; I guide the arc. The AI helps us shape the fire into form. That book, The Chronicles of Lumenox, isn’t just a story—it’s a small act of editorial ascent. A reminder that when access meets imagination, even the youngest among us can pierce the algorithmic haze with something unmistakably human.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)
