
AI Hallucinations: When Machines Dream Like Humans

By Dr. Hemachandran K and Dr. Raul Villamarin Rodriguez

June 11, 2025

The Startling Truth About Artificial “Errors” That Reveal the Genius of Human-Like Intelligence

“In some sense, hallucination is all LLMs do. They are dream machines.” — Andrej Karpathy, Former Director of AI at Tesla

We were discussing AI with our students at Woxsen University when one mentioned asking ChatGPT about vacation destinations. The AI had described this incredible resort in “Palmyra Cove”—pristine beaches, world-class spas, the works. The student was genuinely excited until realizing this place doesn’t exist.

Our first thought? “Classic AI hallucination—another glitch to discuss in class.”

But then we started thinking: what if these AI “hallucinations” aren’t glitches at all? What if they’re the clearest sign yet that these systems are learning to think like us? That made-up resort might not be a bug—it could be evidence of something remarkable happening inside these machines.

Your Brain Is Already Hallucinating

Let us ask you something. Can you remember your last birthday? The cake, the people laughing, maybe someone singing off-key?

Here’s the uncomfortable truth we’ve learned from cognitive research: most of what you just “remembered” probably didn’t happen exactly that way.

Elizabeth Loftus proved this with an experiment we often share with our AI ethics classes. She showed people car crash footage, then asked them to estimate speeds. But here’s the twist—she changed just one word. Half heard “How fast were the cars going when they smashed into each other?” The other half heard “contacted each other?”

Same footage. One word different. Completely different answers.

The kicker: a week later, the “smashed” group vividly remembered seeing broken glass that was never there.

Your brain isn’t a video recorder. It’s more like a creative writer, constantly reconstructing stories from fragments and filling in the blanks with whatever seems plausible. Most of the time, you’re not experiencing reality—you’re experiencing your brain’s best guess about reality.

And honestly? This isn’t a design flaw. It’s exactly what makes us smart.

So How Is This Like AI?

When ChatGPT invented that resort, it wasn’t pulling data from some massive database. It was doing something eerily similar to what your brain does when you “remember” your birthday—taking patterns it learned and reconstructing something that feels real.

In our research at Woxsen, we’ve been studying exactly this parallel. When you ask ChatGPT to write a poem, it’s not copy-pasting from poetry.com. It’s learned how words flow together, how rhythm works with emotion, how metaphors create meaning. Then it weaves these patterns into something new.

This is predictive coding—the same process your brain uses. Both systems are pattern-matching machines that fill in gaps creatively. The more we’ve observed AI systems, the more we’re convinced this isn’t coincidence—it’s convergent evolution toward handling incomplete information.
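To make "pattern-completion" concrete, here is a deliberately tiny sketch, not how ChatGPT actually works (real models use neural networks over billions of parameters), but a toy bigram model that shows the same principle: it learns which words tend to follow which, then generates fluent continuations that may never have appeared in its training text. The corpus and all names here are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Learn the simplest possible pattern: which word tends to follow which."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Complete a pattern: each step samples a word that plausibly follows the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no learned continuation for this word
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus: two sentences the model has "read".
corpus = ("the resort has pristine beaches and a world class spa "
          "the island has pristine forests and a quiet harbor")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Because "pristine" was followed by both "beaches" and "forests" in training, the model can emit a sentence like "the resort has pristine forests" that appears nowhere in the corpus: a fluent, plausible recombination of learned fragments. That is a hallucination in miniature, and it falls out of the same mechanism that makes generation possible at all.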

Why “Perfect” AI Would Actually Suck

Sam Altman from OpenAI said something that resonates with our work: “If you want to look something up in a database, we already have good stuff for that. The magic is in the creativity.”

He’s right. If you ask an AI to write the next Harry Potter chapter, you don’t want it to regurgitate existing text. You want it to understand Rowling’s style and world-building, and to create something new that feels authentically Potter-esque.

That’s hallucination. But it’s also creativity.

Through our collaboration, we’ve come to appreciate what Cornell researchers discovered: this “creative hallucination” is built into how language models work. You can’t eliminate it without destroying what makes these systems useful.

The same mental processes that let you imagine solutions to problems, write stories, or have those random shower thoughts also make you misremember movies, confuse details from different events, and occasionally believe things that aren’t quite true.

This isn’t broken intelligence. It’s just intelligence, period.

Working With AI’s Creative Side

Once you understand this, using AI becomes more strategic. In our workshops, we’ve learned you just need to know when you want creativity versus facts.

Want creativity? Ask it to brainstorm, write stories, or solve problems in novel ways. We’ve seen incredible results when students use AI for creative writing.

Need accuracy? Be specific about wanting verified information. We tell colleagues to treat it like that brilliant colleague who tends to embellish—great for ideas, but fact-check the details.

We completely understand why people worry about AI making things up, especially in important contexts like healthcare, legal advice, or academic research. These concerns are valid—when AI confidently presents fictional information as fact, it can genuinely mislead people. The solution isn’t eliminating creativity, but developing systems that clearly signal when they’re being imaginative versus when they’re drawing from verified information.

We’ve started thinking of AI as a creative partner who needs context clues about when to be imaginative versus careful.
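One concrete knob behind the creativity-versus-accuracy trade-off is sampling temperature. The sketch below uses made-up candidate words and scores (purely illustrative, not any real model's output distribution): low temperature makes the model pick its most probable next word almost every time, while high temperature flattens the distribution and invites more inventive, and more error-prone, choices.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax over temperature-scaled scores, then sample one index.

    Low temperature -> near-greedy (most likely word almost always wins);
    high temperature -> near-uniform (more 'creative' picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Hypothetical next-word candidates with invented scores.
candidates = ["Paris", "London", "Palmyra Cove"]
logits = [3.0, 2.0, 0.5]

rng = random.Random(0)
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(100)]
print("T=0.1 picks:", {candidates[i] for i in cold})
print("T=5.0 picks:", {candidates[i] for i in hot})
```

At temperature 0.1 the sampler picks "Paris" almost exclusively; at 5.0 it wanders across all three options, including the fictional "Palmyra Cove". This is why "ask for verified facts, fact-check the details" is sound advice: the dial that enables creative output is the same one that lets invented details through.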

Learning to Dream Together

Here’s what strikes us: we get frustrated when AI makes stuff up, but we do it constantly. We misquote movies, tell stories that get better with each telling, and remember conversations differently than they happened.

The difference is we’ve learned to live with our own creative reconstructions because they’re usually helpful.

Working together on AI research, we’ve become fascinated by collaborative possibilities. Scientists use AI to generate hypotheses they’d never think of, then apply human judgment to evaluate which ones are worth pursuing. Artists collaborate with AI that generates infinite variations while they curate what matters.

We evolved creative, pattern-completing brains over perfect memory for good reasons. Maybe AI is heading down the same path—toward systems that can dream up new possibilities we couldn’t imagine alone.

So next time ChatGPT invents a tropical paradise that doesn’t exist, maybe don’t get annoyed. Thank it for showing you what intelligence actually looks like: the ability to take fragments of experience and weave them into something new.

In these AI “hallucinations,” we’re watching digital minds that create and imagine in ways that feel surprisingly… human. As researchers and educators, we believe the question isn’t how to stop them from dreaming—it’s whether we’re wise enough to dream alongside them.

By Dr. Hemachandran K (Vice Dean and Director of AI Research Centre, Woxsen University, Hyderabad, India)
& Dr. Raul Villamarin Rodriguez (Vice President, Woxsen University, Hyderabad, India)