By Byron V. Acohido
SAN FRANCISCO — If large language AI models are shaping our digital reality, then who—exactly—is shaping those models? And how the heck are they doing it?
Related: What exactly is GenAI?
Those are the questions Dr. Hidenori Tanaka wants to answer in an effort to put GenAI on solid scientific footing. And it’s the guiding ethos behind NTT Research’s launch of its newly spun-out Physics of Artificial Intelligence Group, which Tanaka will lead as founding director.
The announcement went live this morning at NTT’s Upgrade 2025 innovation conference, which I’m attending here at City View at Metreon. It marks a major inflection point in the academic push to bring physics, neuroscience, and moral psychology to bear on generative AI.
Tanaka put it this way in a press release:
“The key for AI to exist harmoniously alongside humanity lies in its trustworthiness and how we approach the design and implementation of AI solutions. With the emergence of this group, we have a path forward to understanding the computational mechanisms of the brain and how it relates to deep learning models. Looking ahead, our research hopes to bring about more natural intelligent algorithms and hardware through our understanding of physics, neuroscience, and machine learning.”
At a press briefing, Tanaka gave an eye-opening presentation framing the disruption playing out around GenAI. Companies, Big Tech in particular, are in a mad scramble to leverage GenAI without fully understanding, or being able to explain, the mathematics behind what happens when a machine instantly spews out deeply correlated statements in fluent, nuanced language in response to human queries.
Use cases already in the wild range from trivial to productive to profound, and unanswered questions about the privacy and security implications abound. “We need to treat AI a bit like parenting,” Tanaka told attendees. “We need to guide it.”
Tanaka is no stranger to complex systems. His early work in theoretical physics led him to machine learning, where he has spent the past five years quietly helping NTT explore what it calls the “black box” nature of AI. Now, his new division is tasked with opening that box—and turning what’s inside into something we can not only understand, but trust.
Newton’s apple vs ChatGPT’s vibe
During the Q&A session at the close of Tanaka’s talk, I asked him to help me frame my thinking about how physics is being brought to bear on the chaotic disruption triggered by the arrival of GenAI.
Tanaka responded by tracing physics from particle collision research being done at the CERN complex in Geneva, Switzerland, to the emergence of complex systems theory, which helped birth the transistor, which, in turn, gave us semiconductors. He emphasized that physics provides intuitive frameworks—often pictorial or even somewhat philosophical—for grasping abstract phenomena. Today, with GenAI, we face a similarly abstract challenge: understanding machine ‘personality,’ as Tanaka put it.
NTT Research CEO Kazu Gomi added a clarifying analogy: with classical physics, we can precisely predict how an apple falls—thanks to Newton’s laws and supporting math. In contrast, with GenAI, we know the output drops (so to speak), but we don’t fully understand the underlying mechanics or how changes in parameters affect the outcome. Gomi said NTT’s goal is to build the foundational understanding—akin to Newtonian physics—that will eventually let us predict and control GenAI behavior with similar precision.
“AI is at the stage where we know the apple drops,” Gomi said. “But we don’t fully understand the forces at work—or how to steer them.”
Tanaka’s team is attempting to do just that—by treating AI systems not simply as code, but as emergent cognitive systems worthy of deep theoretical analysis. The group’s three core missions:
• Deepen scientific understanding of how AI models learn and predict;
• Create controllable AI environments using experimental physics models;
• Embed trust into the architecture itself—not as an afterthought.
If that sounds lofty, it is. But it also echoes themes I reported on at Upgrade 2024, when NTT first hinted that the real revolution wouldn’t be killer robots or algorithmic takeovers—but rather the quiet, profound transformation triggered by millions of people simply conversing with machines.
We’re now deep into that shift. My big takeaway from Tanaka’s presentation was this: while no established algorithmic framework yet exists for predicting how GenAI might bend its default architecture — its Robot Laws, if you will — physics offers tools to begin structuring such understanding, potentially guiding us toward a new cognitive model of intelligence.
Chatbots as citizens
A striking moment at the press conference reinforced this. Salomé Beyer Vélez, senior journalist with Colombia’s Espacio Media Incubator, asked how Tanaka plans to bridge fields as different as physics, psychology, and philosophy.
Tanaka’s answer didn’t hedge. He described language models like ChatGPT and Grok as “new citizens”—not sentient beings, but active participants in society, already influencing how we communicate, decide, and even think.
AI chatbots, Tanaka suggested, are new citizens of the world whether we like it or not, much as mice or children are. “If AI chatbots are new citizens in the world, what kind of person do we want?”
Tanaka sees his role as building a framework to answer that—not philosophically, but mathematically. He wants to take open-ended moral inquiries—What does it mean to be kind? What should an AI value?—and turn them into testable systems that guide development and deployment.
That framing is already influencing policy—Tanaka’s research on the limits of fine-tuning was cited in recent U.S. AI safety guidelines. Another paper on conceptual recombination in generative models has informed efforts to reduce toxic outputs in image generation platforms.
His new group will continue collaborations with Stanford, Harvard, and Princeton, and while academically driven, the implications stretch far beyond the lab.
“There’s so much demand coming from enterprise,” Gomi said. “We believe Tanaka’s work can help inform real-world deployment decisions.”
Why this matters
There’s a reason this announcement feels timely. Generative AI is evolving faster than any public framework can keep pace with—and Big Tech continues to shape it behind closed doors. If researchers like Tanaka can help build a science of AI behavior—one rooted in transparency, not just utility—it could help rebalance the equation.
The urgency of all this had actually hit me as I was making my way to PAE to catch my flight to SFO. YouTube’s algorithms kindly suggested I might want to listen to a dramatized version of Rep. Jasmine Crockett’s congressional grilling of Elon Musk, produced by a slick YouTube channel called Elite Stories. I clicked the stream (you can, too: here) mindlessly, and for a moment, I was riveted—until I realized what I was listening to wasn’t an actual transcript. It was AI-assisted narrative theater: dramatic monologues, polished pacing, even a fictional inner voice for Crockett (“her mouth twitched,” the narrator declared). Musk came off as calm and principled. Crockett was rendered reactive, verging on unhinged.
This wasn’t a documentary—it was a mirror, controlled by a YouTuber seeking to monetize clicks.
By contrast, what Tanaka laid out at Upgrade 2025 is something very different. NTT’s Physics of AI group isn’t spinning narratives for clicks. They’re building scientific scaffolding to understand how AI learns, what it amplifies, and—most critically—how it might be guided. Tanaka himself confessed to “talking a lot lately” to ChatGPT and Grok, and he offered an unsettling takeaway: we may not fully grasp the personalities we’re shaping as we optimize these systems for comfort, virality, and attention.
“The question isn’t just what AI says,” he told reporters. “It’s what kind of person we’re unconsciously turning it into.”
NTT’s bet is that a little physics—and a lot more interdisciplinary humility—might help us get there. Will it pay off? Let’s hope so. I’ll keep watch and keep reporting.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.