
The Hallucinated Truth

  • Merle Grimm
  • 7 days ago
  • 2 min read

I recently worked on a cover for a nonfiction book on flight. For the illustration, I researched the anatomy of birds.



I was pressed for time and struggling to find good reference images. A friend asked why I didn’t use AI for my research. Isn’t that faster and more efficient?

I responded that even if I set aside all the ethical issues, including stolen training data, copyright infringement, and AI's environmental impact, there is still the problem of AI inventing things when it does not know the answer. How can anyone be certain that the information it provides is accurate? There is even a term for when AI makes up its own "facts": AI hallucination.

My friend assured me AI had gotten better. To prove his point, he pulled out his phone and asked ChatGPT for "a picture of the correct anatomical bone structure of a bird viewed from above". Neither of us expected the generated answer to end our argument in comedic tragedy. This is what it spat out.


Image generated using ChatGPT-5 on 17.08.2025

There are many issues with this AI depiction. Just look at those bone feathers!

But it has been a while since that image was generated, and to be honest, I cannot remember my friend's prompt word for word.

So, I decided to ask a second AI tool to generate an image. The prompt was "generate an anatomically correct photo-realistic image of a bird's skeleton viewed from above".


Image generated using Wix ADI (Artificial Design Intelligence) on 06.11.2025

For comparison, this is my illustration.

Illustration by Merle Grimm

The "mistakes" made by AI are evident in this example. The real danger is when the mistakes are not obvious.

The European Broadcasting Union (EBU) recently released a study that found AI hallucination in roughly every third response. "Even if sources are provided and audiences want to dig deeper or check information for themselves, they face a range of obstacles, from sources which do not back up the claims assistants make to the sheer time it takes to disentangle and check the claims in a response" (Fletcher & Verckist, 2025).

Of course, AI has its uses; it is a good tool for data analysis. But it is not the right tool for artistic or research queries. It is not a search engine, and it is not a designer.

If a human does not know the answer to a problem, they can research, consult books and studies, or simply admit the gap in their knowledge.

AI will not do this. It lies.

Is my work a perfect depiction of a peregrine falcon’s anatomy? No, it is not. But I do not claim it is.

So what is the solution? I don't know. But I know what can help. Hire artists. Hire scientists.



Fletcher, J., & Verckist, D., News Integrity in AI Assistants (2025). European Broadcasting Union. Retrieved November 6, 2025, from https://www.ebu.ch/Report/MIS-BBC/NI_AI_2025.pdf.
