
Shawn Knight

Originally published at Medium


2025 ChatGPT Case Study: AI Does Not Think, Write, or Hallucinate — People Just Don’t Understand How It Works

The Problem With AI Research Today

Every day, new AI studies and articles pop up, claiming to analyze how AI writes, how it compares to human writing, and how it supposedly hallucinates incorrect information.

These discussions aren’t just misleading — they fundamentally misunderstand how AI works in the first place.

A recent article from TechXplore summarizes a study from Carnegie Mellon University, published in Proceedings of the National Academy of Sciences.

The study aimed to compare AI-generated text with human writing, analyzing differences in sentence structure, word choice, and grammatical patterns.

But here’s the real problem:

The entire premise of this study is flawed from the start.

AI Does Not Write — It Responds

One of the biggest misconceptions about AI is that it has an independent writing style.

It doesn’t.

AI doesn’t “write” in the same way humans do — it interprets input and generates output based on statistical patterns in its training data.
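To make “statistical patterns” concrete, here is a minimal sketch of what next-token prediction looks like. The vocabulary, scores, and decoding choices below are invented for illustration; a real model does this over tens of thousands of tokens at every step.

```python
# Toy illustration of next-token prediction: the model scores every word in
# its vocabulary and turns those scores into probabilities. Nothing here
# "decides" anything; the numbers fall out of patterns learned from data.
# (The vocabulary and logit values are made up for this example.)
import numpy as np

vocab = ["leaning", "decision", "dancing", "run", "the"]
logits = np.array([2.1, 1.7, 0.3, -0.5, -1.2])  # raw scores from a hypothetical model

probs = np.exp(logits) / np.exp(logits).sum()   # softmax: scores -> probabilities

# Greedy decoding just takes the highest-probability token...
print(vocab[int(np.argmax(probs))])             # -> "leaning"

# ...while sampling draws from the distribution, so output shifts with the odds.
rng = np.random.default_rng(0)
print(rng.choice(vocab, p=probs))
```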

This is where the Carnegie Mellon study misses the mark.

  • The researchers claim AI uses more present participles (e.g., “leaning,” “dancing”) than humans.
  • AI supposedly favors nominalizations (e.g., using “decision” instead of “decide”).
  • AI avoids agentless passive voice more than human writers.

All of this assumes AI is making independent stylistic choices.

It’s not.

AI adapts to the input it’s given, the dataset it was trained on, and the way users interact with it.

If you prompt AI to write in slang, it writes in slang.

If you prompt it to mimic Shakespeare, it does exactly that.
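As a quick illustration, here is a hedged sketch using the OpenAI Python client: the same question, two different instructions. The model name, prompts, and helper function are placeholders for illustration, not anything taken from the study.

```python
# Same question, two different system instructions: the "style" comes from
# the prompt, not from the model having a voice of its own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(style_instruction: str, question: str) -> str:
    # Placeholder model name; any chat-capable model would behave similarly.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": style_instruction},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Explain why the sky is blue."
print(ask("Answer in casual internet slang.", question))
print(ask("Answer in the style of a Shakespearean sonnet.", question))
```

Run it and the “writing style” flips entirely between the two calls, even though the underlying model never changed.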

The real question isn’t “how AI writes.”

The real question is how people are prompting it.

There Is No “Hallucination Problem” — It’s Just Predictive Math

The same flawed thinking applies to the concept of AI hallucinations.

People act like AI is creating false information on purpose, but that’s not what’s happening.

AI models are trained on the internet, which is full of misinformation, bias, and contradictions.

When AI doesn’t “know” an answer, it predicts what’s most statistically probable — and sometimes, that guess is wrong.
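Here is a toy picture of what that “unverified prediction” looks like, with invented numbers: picking the most probable candidate always produces a fluent answer, even when the distribution itself says no candidate is well supported.

```python
# Prediction always returns *something*. The candidate answers and
# probabilities below are invented purely for illustration.
candidates = {
    "1889": 0.34,   # most probable guess
    "1890": 0.29,
    "1891": 0.22,
    "unknown": 0.15,
}

best_guess = max(candidates, key=candidates.get)
print(best_guess)                      # -> "1889", stated fluently...
print(max(candidates.values()) > 0.5)  # -> False: ...yet no option clears a coin flip
```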

But let’s be real:

  • Google has been returning false results for decades.
  • Humans “hallucinate” wrong information all the time.
  • Academics and journalists publish incorrect statements regularly.

Why is AI held to a different standard?

If someone gets bad information from AI, that’s not a hallucination — that’s an unverified prediction.

And if people are blindly trusting AI responses without verifying them, that’s a user error, not an AI failure.

The Real Issue: Bad Research and Poor Understanding

The biggest problem in AI discourse isn’t AI itself — it’s the way people are studying and talking about it.

  • Researchers don’t seem to understand how AI is trained.
  • They treat AI as if it thinks independently, rather than recognizing it as a predictive tool.
  • They fail to separate AI’s generated output from the influence of human prompting.
  • They continue to obsess over AGI (artificial general intelligence) when we are nowhere near achieving it.

If AI models are generating a particular writing style, using certain words, or structuring sentences in a specific way, it’s because of the data they were trained on and the way users interact with them.

AI does not “prefer” anything — it follows patterns.

Stop Blaming AI, Start Understanding It

The Carnegie Mellon study, and others like it, do not prove that AI has a unique writing style or independent thought.

They prove that researchers don’t fully understand what they’re testing.

Instead of wasting time on flawed comparisons, we should be focusing on real, meaningful AI research:

  • How AI interacts with human cognition.
  • How we can refine input to get better output.
  • How AI can be used to improve learning, creativity, and execution.

Until people actually start using AI correctly, we’re going to keep seeing the same tired, uninformed studies being passed off as groundbreaking research.

AI does not write.

AI does not think.

AI does not hallucinate.

It’s time for people to stop projecting intelligence onto a machine that’s just doing predictive math.

If you enjoyed this, do three things:

Clap so I know to post more.

Leave a comment with your thoughts — I read & respond.

Follow if you don’t want to miss daily posts on AI search & visibility.

READ MORE OF THE 2025 CHATGPT CASE STUDY SERIES BY SHAWN KNIGHT

2025 ChatGPT Case Study: Monetization & Efficiency

2025 ChatGPT Case Study: Business Growth

2025 ChatGPT Case Study: Virality Formula

Sources

  1. TechXplore — “LLMs and Human Writing: How Different Are They?”
  2. Proceedings of the National Academy of Sciences — “Do LLMs Write Like Humans?”

