Same Prompt, Different Minds: What 3 LLMs Taught Me About AI in the Classroom
- S B
- Jul 9
Updated: Jul 10
Published in Towards AI

The growing fear today is that AI models will make us less intelligent.
A widely circulated MIT study reported that brain activity decreased with ChatGPT use. The public response was mixed: AI enthusiasts quickly pointed to the study's small sample size and specific task design as potential methodological flaws, while skeptics seized on it as confirmation of their fears, declaring, "See, I told you so." This polarized reaction brings us to the heart of two big issues in AI: development (how AI is built) and implementation (how it is used).
As someone who uses the Big 3 AI models (Claude, ChatGPT, and Gemini) with paid subscriptions, I have yet to choose one model as my go-to, simply because they excel in different things and approach their responses very differently.
To see how the different models "polish" our words, I simply gave each the same block of text to "proofread". But the larger question I was curious about, and which this experiment illuminates, is: how do these AI models truly impact student learning in the classroom?
The Test Case
I gave the AI models the draft of one of my paragraphs from the introduction section above:
proofread: As someone who uses the Big 3 AI models (Claude, ChstGPT, and Gemini) with paid subscriptions, I have to yet to choose one model as my go to . They excel in different things and approach their responses very differently
Below are the full responses provided by each AI model to the same request.
Claude AI's Response
As someone who uses the Big 3 AI models (Claude, ChatGPT, and Gemini) with paid subscriptions, I have yet to choose one model as my go-to. They excel at different things and approach their responses very differently.
Changes made:
Fixed "ChstGPT" → "ChatGPT"
Removed "to" from "have to yet" → "have yet"
Changed "go to" → "go-to" (hyphenated when used as a noun)
Changed "excel in" → "excel at" (more natural preposition)
Added missing period at the end
ChatGPT's Response
Sure! Here's a polished version of your text:
"As someone who uses the Big 3 AI models—Claude, ChatGPT, and Gemini—with paid subscriptions, I have yet to choose one as my go-to. They each excel in different areas and approach their responses in very different ways."
Let me know if you'd like it to sound more formal, casual, or tailored to a specific context (like a blog post, review, etc.).
Gemini's Response
Option 1 (More concise)
As a paid subscriber to the "Big 3" AI models (Claude, ChatGPT, and Gemini), I've yet to choose a go-to. They each excel in different areas and approach responses very differently.
Option 2 (Slightly more formal)
As a paid subscriber to the three major AI models (Claude, ChatGPT, and Gemini), I have yet to select a preferred one. Each model excels in different aspects and approaches its responses distinctly.
Option 3 (Emphasizing the "go-to" aspect)
Though I'm a paid subscriber to the "Big 3" AI models (Claude, ChatGPT, and Gemini), I haven't settled on a single go-to. They all have unique strengths and respond in very different ways.
Here's a breakdown of the changes and why they improve the text:
"have to yet to choose" to "have yet to choose" or "I've yet to choose": This corrects the grammatical error and makes it more natural. 'I've yet to choose' is a bit more conversational.
"ChstGPT" to "ChatGPT": Corrects the typo.
"the Big 3 AI models (Claude, ChstGPT, and Gemini)" to "the 'Big 3' AI models (Claude, ChatGPT, and Gemini)": Adding quotation marks around "Big 3" clarifies it's a common term or nickname.
"go to" to "go-to": When used as a noun or adjective, "go-to" is hyphenated.
"They excel in different things" to "They each excel in different areas" or "Each model excels in different aspects": "Things" is a bit informal; "areas" or "aspects" are more professional. Adding "each" or starting with "Each model" emphasizes the individual differences.
"approach their responses very differently" to "approach responses very differently" or "approach its responses distinctly/very differently": Making it "approach responses" is slightly more concise. If you want to emphasize each model's distinct approach, "its responses distinctly" is a good option.
What This Reveals About AI Personalities
The responses reveal three completely distinct philosophies about what "proofreading" actually means to these AI models:
Claude acted like a grammar teacher. It fixed the errors, then explained each correction like it was grading a paper.
ChatGPT treated it like customer service. Quick fix, polished result, then "anything else I can help you with?" The response prioritizes speed and user satisfaction over explanation. It's efficient, but you don't learn much about what was actually wrong.
Gemini went full academic. Multiple versions, detailed explanations, style analysis. It's like getting feedback from an overly enthusiastic writing tutor.
None of these approaches is inherently better or worse. But when you look at them side by side, something important becomes clear: these models aren't just processing text differently. They are operating from fundamentally different assumptions about what users want and need from AI assistance.
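One way to make the side-by-side comparison concrete is to measure how far each response drifts from the original draft. Below is a minimal sketch using Python's standard-library difflib and the Claude and ChatGPT responses quoted above; the word-level similarity function is my own illustration, not part of the original experiment:

```python
import difflib

# The original flawed draft from the experiment above.
draft = ("As someone who uses the Big 3 AI models (Claude, ChstGPT, and "
         "Gemini) with paid subscriptions, I have to yet to choose one "
         "model as my go to . They excel in different things and approach "
         "their responses very differently")

# Each model's corrected sentence, as quoted earlier in this article.
responses = {
    "Claude": ("As someone who uses the Big 3 AI models (Claude, ChatGPT, "
               "and Gemini) with paid subscriptions, I have yet to choose "
               "one model as my go-to. They excel at different things and "
               "approach their responses very differently."),
    "ChatGPT": ("As someone who uses the Big 3 AI models—Claude, ChatGPT, "
                "and Gemini—with paid subscriptions, I have yet to choose "
                "one as my go-to. They each excel in different areas and "
                "approach their responses in very different ways."),
}

def edit_similarity(original: str, revised: str) -> float:
    """Word-level similarity between two texts (1.0 means identical)."""
    matcher = difflib.SequenceMatcher(None, original.split(), revised.split())
    return matcher.ratio()

ratios = {name: edit_similarity(draft, text) for name, text in responses.items()}
for name, ratio in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ratio:.2f} similar to the draft")
```

On this draft, Claude's output stays measurably closer to the original wording than ChatGPT's, which matches the observation that ChatGPT rewrote more than it corrected.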
Deepening the Discussion on AI Personalities
The distinct "personalities" these AI models exhibit likely stem from differences in their underlying algorithms and training data.
ChatGPT's "customer service" approach may reflect training on vast amounts of conversational data with an emphasis on providing helpful, quick responses that satisfy user requests. Claude's "grammar teacher" persona appears to reflect training that prioritizes linguistic rules and educational explanations. These foundational differences in development help explain why each model approaches the same task with such different methodologies.
These personalities are not necessarily fixed. Educators might be able to guide these tools toward different behaviors. For instance, asking ChatGPT to "proofread this text and explain all grammatical changes" might produce output more similar to Claude's natural approach.
Understanding how to craft prompts for specific learning outcomes could empower educators to adapt each model's strengths to particular classroom needs, rather than being limited by the models' default behaviors.
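As a sketch of that idea, an educator could keep a small library of prompt templates keyed to learning goals. Everything below (the template names and their wording) is a hypothetical illustration, not a documented feature of any of these models:

```python
# Hypothetical prompt templates for steering a model's default behavior
# toward a specific learning outcome.
PROMPT_TEMPLATES = {
    # Nudge a "customer service" model toward teacher-style explanations.
    "explain_changes": (
        "Proofread the following text and explain every grammatical "
        "change you make, as a teacher would:\n\n{text}"
    ),
    # Protect student voice: flag issues without rewriting them.
    "preserve_voice": (
        "Point out grammar and spelling issues in the following text, "
        "but do not rewrite it. Preserve the author's voice:\n\n{text}"
    ),
    # Workshop-style feedback for advanced students.
    "style_workshop": (
        "Offer three alternative revisions of the following text and "
        "discuss the stylistic trade-offs of each:\n\n{text}"
    ),
}

def build_prompt(goal: str, text: str) -> str:
    """Fill the template for a given learning goal with the student's text."""
    return PROMPT_TEMPLATES[goal].format(text=text)

print(build_prompt("preserve_voice",
                   "I have to yet to choose one model as my go to."))
```

The same student text can then be routed through whichever template fits the lesson, instead of relying on each model's default "personality."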
What Does This Mean for AI in the Classroom?
The selection of an AI model for classroom use is crucial, and its effective integration demands informed guidance from educators.
While older students in high school and college might be able to work alongside ChatGPT, this might not be the best model for younger learners. The output didn't just polish the draft; it changed wording and style choices, altering the voice.
A young learner who is shaping their creative identity needs guidance, not copy editing. Based on these results, ChatGPT appears better suited for quick brainstorming or summarizing tasks aimed at older students capable of critically evaluating its output.
Claude's version seems appropriate for young students learning how to write, ESL students, and most writers, as all benefit from grammatical clarity. These results suggest Claude is particularly useful for basic grammar exercises and sentence-level editing tasks.
Gemini's response was the most thorough, though it might be too verbose for some learners. It's an excellent teaching tool for educators wanting to demonstrate creative expression and grammatical fluency simultaneously. These findings indicate Gemini is best suited for advanced essay revision workshops, where detailed feedback and stylistic analysis are desired.
Questions for the Education Community
As AI models enter classrooms, we must collectively address several critical questions:
For AI Developers:
How are these models trained to support learning environments?
What type of responses are prioritized in educational use cases?
How should learners engage with these systems to achieve meaningful learning outcomes without compromising student voice?
For Policymakers and Educators:
How do we assess student learning when AI tools are used?
If students are using AI for "proofreading" or "organization," what skills are being developed and evaluated?
How can educators be trained to select the right AI model, prompt it effectively, and interpret its output in a way that supports student learning and preserves student voice?
How do we ensure students develop strong digital literacy and critical thinking skills when using AI?
For All Stakeholders:
What standards should guide the development and implementation of AI tools in educational settings?
How do we balance the efficiency AI provides with the deeper learning that comes from struggle and iteration?
Final Thoughts: AI, Collaboration, and the Student Voice
I wrote this article myself. I distilled my thoughts onto the canvas and specified changes to sections as needed. So while I created the content, the AI assisted with organization and formatting and, most importantly, provided feedback on ideas. This is what AI collaboration looks like.
If an AI rewrites student text the way ChatGPT did, we are erasing students' voices before they're even formed. If we are to integrate AI into education, we must ensure that it functions as a thoughtful collaborator without eclipsing the unique voice and critical thinking of the student.
Or perhaps as a society, we need to slow down and remind students of the joy of learning one step at a time.
Join the Conversation
"AI is the tool, but the vision is human." — Sophia B.
👉 For weekly insights on navigating our AI-driven world, subscribe to AI & Me:
Let’s Connect
I’m exploring how generative AI is reshaping storytelling, science, and art — especially for those of us outside traditional creative industries.
About the Author
Sophia Banton works at the intersection of AI strategy, communication, and human impact. With a background in bioinformatics, public health, and data science, she brings a grounded, cross-disciplinary perspective to the adoption of emerging technologies.
Beyond technical applications, she explores GenAI’s creative potential through storytelling and short-form video, using experimentation to understand how generative models are reshaping narrative, communication, and visual expression.


