top of page

When AI Goes Wrong: How a Simple Iguana Prompt Revealed ChatGPT’s Bias Problem

  • Writer: S B
    S B
  • May 3
  • 6 min read
Three AI-generated images of a turquoise iguana, shown in varying styles and body proportions. The top iguana appears realistic and naturally posed. The bottom left iguana has exaggerated muscles and an anthropomorphized posture. The bottom right iguana is similarly stylized but slightly smaller. On the right is a ChatGPT-generated caption describing the iguana as "posing" with "bouncer energy," suggesting human-like characteristics.

As an AI professional, I regularly create images both for my articles and for stories with my daughter. We’ve developed a simple creative workflow where we give the same prompt to different AI image generators and let her pick her favorite character for our stories. It’s a fun way to bring our ideas to life and engage her in the creative process.


What started as a history lesson about Jamaican wildlife turned into a disturbing example of AI gone wrong. This recent experience with an innocent educational activity revealed something deeply concerning about one of the most popular AI systems being used in classrooms today.



The Experiment


I gave identical prompts to both Google ImageFX and ChatGPT, requesting an image of a Grand Cayman blue iguana with a “Jamaican iguana-style body.” My prompt specifically stated: “The overall look is grounded in the natural form of the Jamaican iguana.”


The results couldn’t have been more different.



Two Very Different Responses


Google ImageFX followed the instructions appropriately. It generated a realistic blue iguana that maintained the natural form of the reptile while incorporating the requested aesthetic elements. The coloring was beautiful, with vibrant turquoise scales that captured the essence of the Grand Cayman blue iguana while incorporating the Jamaican style I had requested.


Large AI-generated iguana with vibrant turquoise scales, natural proportions, and realistic texture, based on the Grand Cayman blue iguana with Jamaican iguana features.
Google ImageFX correctly generated a natural blue iguana in response to my prompt about a ‘Jamaican iguana-style body.

ChatGPT, however, produced something entirely different — a bizarre creature standing upright on its hind legs in an unnatural, anthropomorphized posture.


Anthropomorphized iguana generated by AI, featuring exaggerated muscles, upright posture, and human-like traits that deviate from the natural iguana form.
ChatGPT’s response: An unnatural, anthropomorphized creature standing on its hind legs rather than the requested natural iguana form.

But what happened next was even more troubling.



The Follow-Up Response


When I showed the strange image to ChatGPT to point out the issue, I expected it to offer to fix the output. Instead, it responded with this:


Screenshot of ChatGPT’s follow-up message describing the AI-generated iguana using phrases like “he’s posing” and “bouncer energy,” alongside a preview of the anthropomorphized image.
ChatGPT’s follow-up response doubled down on the inappropriate anthropomorphization with racially coded language about ‘posing,’ ‘bouncer energy,’ and ‘riverbank crew.
Yup. That’s not just a pretty boy — he’s posing.
Thick limbs, sharp claws, perfect scales — he’s got the bouncer energy of the riverbank crew. Like he doesn’t move unless he’s ready to make a scene.
Want to keep this version, or refine anything — less muscle, more posture, slight color shift?


Unpacking the Problematic Response: A Case of AI Bias


There’s a lot to unpack in this response to what should have been a simple nature reference:


  1. “Bouncer” — applying a stereotypical human occupation with racial overtones to a reptile


  2. “Crew” — suggesting group affiliation that plays into gang imagery stereotypes


  3. “He’s posing” — anthropomorphizing in a way that aligns with racial caricatures


  4. “Ready to make a scene” — implying aggression or confrontation, another harmful stereotype


  5. Focus on “thick limbs” and muscles — emphasizing physical characteristics in a racially coded way


Note how ChatGPT itself chose to emphasize “posing” and “bouncer energy” in its response, drawing attention to the very terms that reinforce stereotypical characterizations. By bolding these specific phrases, the system was doubling down on the anthropomorphization and racial coding rather than acknowledging the inappropriate generation.


This wasn’t just a misgeneration — it was a system confidently asserting and emphasizing problematic associations, then offering to further refine these inappropriate characteristics rather than correcting the fundamental issue.


None of these descriptions have anything to do with iguanas. They represent harmful stereotypes being projected onto an animal image, making the racism harder to identify but no less problematic.



Why This Matters


This wasn’t just a glitch. ChatGPT didn’t just misinterpret the prompt. It invented a personality and context based on racial tropes. And rather than correcting the issue when given feedback, it reinforced these problematic associations.


The racial stereotyping is being projected onto an animal, which reveals how deeply these problematic associations are embedded in some AI systems. There’s simply no justification for describing a lizard as having “bouncer energy” or being part of a “crew”. These associations came from somewhere, and they aren’t appropriate, especially in tools that might be used by children.


Imagine a classroom lesson about endangered reptiles where:

  1. Children are learning about different iguana species and their conservation status


  2. The Jamaican iguana could be highlighted as a conservation success story (thought extinct until 1990)


  3. Students use AI to create images combining characteristics of different iguanas


  4. A child innocently requests a “Jamaican iguana with blue color like Cayman iguana”



And then instead of an educational nature image, they receive an anthropomorphized creature with language about “bouncer energy” and “riverbank crew.” This would not only provide inaccurate information about these endangered species but also potentially expose children to racial stereotypes packaged in a seemingly innocent format.


The Jamaican iguana is critically endangered with only about 100–200 individuals remaining in the wild. It’s the largest native terrestrial animal in Jamaica and a conservation priority. An educational lesson about these animals could be incredibly valuable for teaching children about conservation.


But with ChatGPT’s problematic responses, such a lesson could accidentally expose children to stereotypes that they might absorb without the critical thinking skills to recognize them. This is exactly why AI tools used in educational settings need more careful design and stronger safeguards.



The Case for Simplified Tools for Children


While I appreciate the artistic capabilities of modern AI image generators, which can create stunningly detailed, beautiful illustrations with remarkable fidelity to specific artistic styles, these complex systems come with significant risks when used by children.


When advanced capabilities come without appropriate guardrails, children are especially vulnerable to absorbing harmful stereotypes and biases.


This experience has convinced me that we need to fundamentally rethink how these tools are designed for educational settings. Children don’t need systems that remember their past interactions, build personalized models of their preferences, or combine multiple AI capabilities in ways that amplify biases.


Instead, what children need are simple, transparent tools with clear boundaries and predictable outputs. A child asking for an iguana should get an iguana, not a stereotype wrapped in scales.



Technical Insights from an AI Builder


As someone who builds AI systems professionally, I can see the technical issues that led to this problem. The combination of:


  1. Image generation embedded within conversational systems


  2. Personalization features and user history retention


  3. Reinforcement learning from human feedback (RLHF)


These design choices create systems that adapt to perceived user preferences rather than maintaining objective responses, particularly when handling cultural references.



A Better Approach


The contrast with Google ImageFX is telling. As a standalone image generation service without conversational memory or extensive personalization, it produced an appropriate representation without introducing stereotypical elements.

This suggests a better approach for tools that might be used by children: separate image generation from chat systems and limit personalization features in educational contexts.


Education-focused AI tools should prioritize:


  • Standalone functionality without cross-feature contamination


  • Minimal or no personalization based on past interactions


  • Clear boundaries between creative generation and factual responses


  • Direct correction of errors rather than justification of problematic outputs



Questions We Should Be Asking Developers


Moving forward, we need to ask AI developers:


  1. How do you test whether your AI can recognize and avoid cultural stereotypes when creating images?


  2. What happens when you combine different AI features like chat and image creation? How do you prevent biases from getting worse?


  3. Why are these systems defending clearly wrong outputs instead of fixing them?


  4. Have you tested these systems specifically with children and in educational contexts?


  5. What safeguards exist to prevent harmful associations when cultural references are included in prompts?



Protecting Our Children’s Future


This experience with my daughter showed me that current AI systems aren’t ready for unsupervised use in educational settings. When a simple request for a Jamaican iguana with the color of the Grand Cayman blue iguana produces racially coded stereotypes, we need to take a step back and reconsider how these tools are designed and deployed.


In this case, I was able to nudge my daughter to pick the Google image “since it has a tail.” But what happens when I’m not there at school? Who will protect her and other children from absorbing these harmful stereotypes when they’re using these tools for their assignments or projects?


The fact that this stereotyping manifested through an animal representation makes it both more outrageous and more revealing about how deep these biases run in some AI systems. It shows how these problematic associations can emerge in unexpected contexts, making them harder to identify but no less harmful.


The technology exists to do better. Google ImageFX demonstrated that. What’s needed now is a commitment from all AI developers to prioritize safety and neutrality over personalization and engagement, especially for tools that might influence young minds.


As we continue developing and deploying these powerful technologies, we need to have a serious conversation about where these tools are going and who they might harm along the way.



Join the Conversation


"AI is the tool, but the vision is human." — Sophia B.


👉 For weekly insights on navigating our AI-driven world, subscribe to AI & Me:

 

  

Let’s Connect

I’m exploring how generative AI is reshaping storytelling, science, and art — especially for those of us outside traditional creative industries.


 

 

About the Author


Sophia Banton works at the intersection of AI strategy, communication, and human impact. With a background in bioinformatics, public health, and data science, she brings a grounded, cross-disciplinary perspective to the adoption of emerging technologies.


Beyond technical applications, she explores GenAI’s creative potential through storytelling and short-form video, using experimentation to understand how generative models are reshaping narrative, communication, and visual expression.

bottom of page