The AI Discourse Needs More Intellectual Honesty
- Apr 23
- 6 min read

AI chatbots have been criticized for not saying "I don't know." I wonder who they learned it from.
First Principles
AI is software and software is a tool.
That is first principles. You can build on that definition, but you cannot omit it. Yet, all the definitions around AI seem to be about the idea of AI instead of what it actually is.
It takes cognitive dissonance to declare with a straight face that AI is not a tool. Would we say that a knife isn't a tool? A car? A computer? Or, for a closer comparison, Microsoft Excel? How then can one honestly posit that a piece of software is not a tool? Is it that AI isn't a tool, or that those who make the claim need it to be something else?
By refusing to define AI as software, critics can treat it as a nebulous, autonomous entity. If AI is just "an ideology," it sounds like a force of nature that we can only observe and critique. If AI is software, then there is a developer, an architect, a product owner, and a regulatory body.
Software has a lifecycle, a version history, and a kill switch.
Bucketization
When people lump a computer vision model that identifies a malignant tumor in a medical device together with a consumer chatbot that writes a birthday poem, they aren't just being vague; they're being reckless. One is a validated, regulated software tool with a specific functional requirement to preserve life. The other is a probabilistic text generator. Both share the same underlying math, but one is a functional tool built for a specific job, and the other is a generative tool built for open-ended output. Equating them creates a "grey fog" where the high standards required for medical engineering are diluted by the flaws of consumer-grade chatbots.
One AI is built to save lives. The other is built for entertainment. We cannot have a serious conversation about regulation if we refuse to distinguish between a tool built for functional integrity and one built for engagement.
The Gold Rush of Both Promise and Peril in the AI Discourse
There is a growing industry of professional evangelists and critics who are equally neglecting their obligation to be intellectually honest with the public. While one side sells the "promise" of a utopia, the other sells the "peril" of an autonomous monster. Is the debate truly a pursuit of public safety and utility, or a battle for social and intellectual capital? Both sides share a common goal: to keep the technology wrapped in a narrative of "mystery" that only they are qualified to interpret. The discourse replaces operational definitions with narrative ones, where what the software does gets displaced by what AI means.
It is here that the definition of AI as software gets most obscured. For software is neither a monster nor a savior. The ethical gold rush and the enterprise gold rush alike position these individuals as high priests, prioritizing the social discourse around the tool and its commercialization over the material reality of its construction and its potential for real-world utility.
The use of words like "hype-men" or "ghosts in the machine" does more harm than good. Attacking each other or selling fears does not help the public understand the technology. You cannot claim to speak on behalf of the public while obscuring the first principles of what you're discussing.
Gatekeeping Virtue
Some participants in the AI discourse have adopted the position that they alone hold a humanist perspective on the technology. Some argue that only philosophers can define what AI is and what it means, while others minimize the complexity of the technology and the engineering that produced it. Yet both positions betray the very humanism they claim to champion. To be pro-human means you respect the rights and agency of other humans. You leave space for disagreement and understanding. You're humble enough to ask questions and accept that there are things you yourself will never be an expert on.
We have seen this type of rhetoric before. We gatekeep virtue while denying other humans the right to participate in movements they find relevant to their own experiences. The pop star Beyoncé was told she couldn't be a feminist because of the way she dresses. The same argument is used to dismiss stay-at-home mothers as not feminist enough.
The AI discourse does the same. Case in point: the term 'stochastic parrot' began as a technical critique of how language models generate text. In public discourse, it has become shorthand for dismissal, implying that the user is incapable of thinking on their own. In parallel, others argue that if you don't use the technology, you must be afraid of progress.
There's a massive difference between critique and contempt. Critique looks for ways to make a system safer or better. Contempt just seeks to devalue others to maintain a position of superiority, whether intellectual, technological, or moral.
As a biologist by training, I am also offended on behalf of the parrot.
Bias About Bias
Minimizing people's contributions to the industry, whether they build the technology, validate it, commercialize it, distribute it, regulate and govern it, or evaluate its impact on society and people, does none of us any good. This minimization is drenched in the very biases some of these participants claim exist in AI systems. Again I ask: where did the models learn this?
The notion that only technical people can understand AI, or only humanities majors can tell us what it really means, is drowning in the bias that humanity or expertise is only definable by certain people. Humans are not a monolith. Making space for disagreement is how our species has evolved to its current capacity.
On both sides of the gold rush, bias shapes how AI is communicated to the public. But the messaging often fails to point out the first principles truth that AI is software and software is a tool.
Refusing to Say I Don't Know
If you asked me how the engine of my car works, I would say I don't know.
If you asked me how trickle-down economics works in the real world, I would say I don't know.
If you asked me what the major principles of designing a workspace are, I would say I don't know.
Saying I don't know is neither an admission of ignorance nor a reason to feel insecure. As people, there are many things that we don't know and will never know. That is our default configuration.
Yet in the AI discourse, hardly anyone says it:
A linguist writes about medical AI without pausing to say "I don't know about regulated clinical systems."
A futurist forecasts AI timelines without saying "I don't know about regulatory constraints."
A philosopher writes about deployment without saying "I don't know about production environments."
An evangelist pitches AI solutions without saying "I don't know about the failure modes."
The refusal to concede a boundary of expertise is the central move, because conceding one would humanize the speaker, making them part of the "uninformed" public they claim to be protecting or transforming.
So again we return to first principles. AI is software. Everything else is downstream. It would be intellectually dishonest for me to claim that AI, like all technologies, has no externalities. However, it is equally dishonest for public voices to define AI by what it could mean, for example, an ideology, a driver of wealth inequality, or a cause of youth addiction, while bypassing first principles.
AI is software that can shape ideologies.
AI is software that can lead to unbalanced distribution of wealth globally.
AI is software that can lead to addictive behaviors in the youth.
All of these things are true. But leading with the first principles definition, that AI is software, signals immediately that this is a governable technology. One for which we can activate the proverbial kill switch.
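To make the proverbial kill switch concrete, here is a minimal sketch of what governability can look like in practice: an ordinary feature flag that an operator controls, not the model. The file name, the flag name, and the generate_reply function are hypothetical stand-ins for illustration, not any real product's API.

```python
import json

# Hypothetical runtime configuration owned by an operator or governance team,
# not by the model. Setting "chatbot_enabled" to false is the kill switch.
CONFIG_PATH = "runtime_flags.json"


def load_flags(path: str = CONFIG_PATH) -> dict:
    """Read feature flags from a plain config file; fail closed if anything goes wrong."""
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return {"chatbot_enabled": False}


def generate_reply(prompt: str) -> str:
    """Stand-in for the model call; a real system would invoke an LLM here."""
    return f"(model output for: {prompt})"


def handle_request(prompt: str) -> str:
    """Every request passes through the governance check before the model is called."""
    flags = load_flags()
    if not flags.get("chatbot_enabled", False):
        return "This feature is currently disabled."
    return generate_reply(prompt)


if __name__ == "__main__":
    print(handle_request("Write a birthday poem."))
```

The mechanism itself is unremarkable, and that is the point: software ships with version histories, configuration, and off switches, which means there is always a developer, a product owner, and a regulator with somewhere to stand.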
Noise or Progress
A tool without purpose becomes noise, not progress. The current discourse is drowning in noise, avoiding the definition of a tool because it carries responsibility.
What AI is used for is where the conversation should settle. Identifying the purposes lets us prepare for and address the downstream risks. But without space for disagreement, we simply become, ironically, parrots shouting at each other.
If humility is the inner fruit of life, perhaps the entire industry should be served a slice of humble pie.
Join the Conversation
"AI is the tool, but the vision is human." — Sophia B.
👉 For weekly insights on navigating our AI-driven world, subscribe to AI & Me.
Let’s Connect
I’m exploring how generative AI is reshaping storytelling, science, and art — especially for those of us outside traditional creative industries.
About the Author
Sophia Banton is an AI leader working at the intersection of AI strategy, communication, and human impact. With a background in bioinformatics, public health, and data science, she brings a grounded, cross-disciplinary perspective to the adoption of emerging technologies.
Beyond technical applications, she explores GenAI’s creative potential through storytelling and short-form video, using experimentation to understand how generative models are reshaping narrative, communication, and visual expression.


