Do you have one of those friends that you really like, but you don’t necessarily believe everything they tell you? Their heart is in the right place, but they just try too hard to impress you. They want to entertain, and they want you to like them, so they embellish the truth every now and again, and in the end, you don’t know what to believe… In the UK, we’d say they’re a bit of a gobshite. Well, artificial intelligence hallucinations are sort of the tech equivalent.
It almost sounds like something from a sci-fi movie, like when HAL lies about the ship’s antenna unit in 2001, but AI hallucinations are very real. If you’re using AI, or planning to, then you need to understand a bit about hallucinations and their implications.
What Exactly Are AI Hallucinations?
Picture the scene… You’re chatting with an AI, asking it to provide information or generate content. However, instead of sticking strictly to facts, the AI starts creating details and events that don’t actually exist—it’s just plain making stuff up.
It’s not that the AI is lying, exactly; it’s more that it dreams up a convincing-sounding reality (like when someone hallucinates) and presents it as fact, even though it’s completely fabricated.
This isn’t a sign that AIs are becoming sentient; it’s the opposite. Hallucinations are a quirk of how language models learn and generate responses, and they occur because AI models are not at all sentient and have zero understanding of underlying concepts.
Chatbots are trained on massive datasets, learning patterns and associations between words and concepts, but they have no context or genuine intelligence. They don’t try to make sense of anything they learn.
AIs are great at predicting the next word in a sequence, but they don’t know truth from fiction.
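To see what that looks like under the hood, here’s a minimal sketch using the open-source GPT-2 model via the Hugging Face transformers library (my choice of toy example; it isn’t the model behind any particular chatbot). It prints the five words the model considers the most likely continuations of a prompt. Notice that nothing in there checks whether the top answer is actually true; the model just ranks what sounds plausible.

```python
# Sketch: next-word prediction with a small open-source model (GPT-2).
# This is an illustrative toy, not how any specific chatbot works internally.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the *next* token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # The model ranks continuations by how likely they look, not by truth.
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```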
Why Do AIs Hallucinate?
There are many and varied reasons why AIs hallucinate, but there are a few causes that are more common than others. Understanding these is the first step in remedying, or at least mitigating, their impact.
Here are a few of the main causes of AI hallucinations:
Overfitting: Overfitting occurs when an AI model is too closely tailored to the training data. This makes the AI great at recalling information from those datasets but rubbish at providing general answers. This can lead to the AI “hallucinating” details that were present in its training data but aren’t universally true or applicable in other contexts.
Underfitting: Conversely, underfitting happens when the AI is too simplistic and hasn’t learned enough from its training data. This lack of understanding can lead to the AI making incorrect assumptions or generating overly simplistic and often incorrect outputs (there’s a toy sketch of both over- and underfitting just after this list).
Complexity of Language and Context: Language is amazing. It’s inherently complex and nuanced, filled with all sorts of idioms, metaphors, and cultural references. Unfortunately, AI doesn’t have this context (yet?) and can’t get a handle on linguistic subtleties. This leads AIs to misinterpret info and hallucinate as they try to fill in gaps in their understanding with incorrect or nonsensical information.
Imperfect Training Data: AI learns from vast datasets, which are essentially reflections of the real world, warts and all. If the data is biased, incomplete, or contains errors, the AI can learn these flaws. Just like a student might pick up misinformation from a faulty textbook, AI can output duff information based on the inaccuracies it’s been fed. While this might not be technically hallucinating as such, it still produces wonky results.
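If you fancy seeing over- and underfitting in miniature, here’s a rough sketch using scikit-learn on toy data. It’s nothing like the scale of chatbot training, but the principle carries over: the degree-1 model is too simple to capture the pattern (underfitting), while the degree-15 model memorises the noise in its 30 training points and does worse on fresh data (overfitting).

```python
# Sketch: over- and underfitting on toy data with scikit-learn.
# A degree-1 fit is too simple; a degree-15 fit memorises the training noise.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)
X_train = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.2, 30)

X_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # Overfitting shows up as a tiny training error but a large test error.
    print(f"degree {degree:>2}: train error {train_err:.3f}, test error {test_err:.3f}")
```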
Embracing Hallucinations
Here’s where it gets really interesting: can we turn these AI hallucinations from a bug into a feature? Absolutely! With a bit of creativity, these quirks can be transformed into powerful tools for innovation and problem-solving.
- Creative Catalyst: Use AI’s off-the-wall outputs as a springboard for lateral, blue-sky thinking. Sometimes, you need crazy ideas to spark novel solutions.
- Stress-testing Scenarios: Hallucinations can help identify the limits of AI’s understanding, acting as a perfect test case for improving its learning algorithms.
- Entertainment and Engagement: AI hallucinations can add unique and unpredictable elements to narratives, stories, games, etc. They’re a great way to get outside inspiration.
How to Mitigate AI Hallucinations
Understanding AI hallucinations is crucial for anyone using or planning to use AIs. AI-generated content is becoming harder and harder to distinguish from human-generated content, and hallucinated details can be just as convincingly believable.
Even just knowing that AI can hallucinate gives you a fighting chance of nipping it in the bud before it goes any further.
When you engage with AI platforms, do so with a critical eye. Experiment with generating content or asking for information on topics you’re familiar with to see how the AI responds. Pay attention to where it excels and where it falters. This hands-on experience will help you develop an intuition for when an AI might be tripping.
A few tips for dealing with AI hallucinations:
- Always verify AI-generated information against trusted sources, especially if you use it for decision-making.
- Experiment with different prompts to understand how slight changes affect the AI’s output; small rewordings can reveal how sensitive it is and where it starts making things up (see the sketch after this list).
- Once you understand hallucinations, you can embrace their creative potential, but you need to know when they occur.
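As a concrete example of that prompt experiment, here’s a hedged sketch using the OpenAI Python SDK. The question, wording, and model name are just placeholders; any chatbot API, or simply the web interface, will do. The point is to ask the same thing several ways, compare the answers, and then check them against a trusted source. If the answers shift wildly between prompts, treat them with extra suspicion.

```python
# Sketch: ask the same question with different prompts and compare the replies.
# The model name and prompts are placeholders; swap in whichever service you use.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

question = "Which year did the Eiffel Tower open, and who designed it?"
prompts = [
    question,
    question + " If you are not certain, say so rather than guessing.",
    question + " Answer only from well-established facts and name a source.",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}")
    print(f"REPLY:  {reply.choices[0].message.content}\n")
```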
AI is amazing, and it’s an exciting time to be involved, but hallucinations are a reminder of its limitations and the (thankfully) ongoing need for human judgement.