What Is an AI Hallucination? Causes and Prevention Tips (2024)

Imagine you hired a new administrative assistant who’s fluent in more than 20 languages, holds more advanced degrees than you can count, and seems to have read everything. They’re also willing to work for a salary you can easily afford. The only problem? As much as a quarter of the time, they get things flat-out wrong.

Welcome to the current state of artificial intelligence (AI). Despite being sophisticated technologies and powerful drivers of business growth, today’s AI tools get things wrong regularly. One data analytics startup found that AI chatbots fabricate information between 3% and 27% of the time, misfires known as “hallucinations.”

Here’s more about what AI hallucinations are and why they happen, plus five tips to help you prevent them. 

What are AI hallucinations?

An AI hallucination occurs when a generative AI tool provides fabricated, irrelevant, false, or misleading information in response to a user’s prompt.

AI hallucinations can take various forms:

  • Factual errors. This type of hallucination occurs when AI-generated content contains incorrect information, including false statements and misleading outputs.
  • Fabrications. Fabrication errors happen when an AI model generates content based on patterns that don’t exist in the model’s source materials. These errors effectively introduce new data and present it as fact.
  • Irrelevance. AI tools can provide users with factually correct information that’s present in the tool’s source data but doesn’t correspond with a user’s prompt. This type of mistake is a relevance error.

Some AI experts argue factual errors and relevance issues aren’t true AI hallucinations, citing psychology’s definition of a hallucination as the experience of something that doesn’t exist. These experts classify fabrications as hallucinations and other errors as mistakes, pointing out that only fabrications introduce new data based on nonexistent patterns. However, other experts classify all three error types as hallucinations, since they all result from an AI model’s logical mistakes and present significant challenges to AI adoption. 

3 components of AI systems

  1. Machine learning models
  2. Large language models
  3. Generative models

To understand what causes AI hallucinations, first understand how three different AI systems work: 

1. Machine learning models

AI tools run on machine learning models: programs that learn patterns from data and then use those patterns to make decisions or predictions about datasets they’ve never processed before. For example, you might train a machine learning model to distinguish between photographs that do and don’t include people. 

First, you undertake model training: feeding the program images you’ve already tagged as containing people or not containing people. Eventually, you provide it with an untagged set of images, and the program sorts the images into the two categories on its own using the rules and data structures it developed during model training.
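
To make that workflow concrete, here’s a minimal sketch using scikit-learn. It assumes each photo has already been summarized as a couple of numeric features; the feature choices and values are invented for illustration, and a production image classifier would learn from raw pixels or embeddings instead.

```python
# Toy supervised-learning sketch: train on tagged examples, then sort untagged ones.
# The two "features" per photo (say, share of skin-tone pixels and edge density) and
# the numbers below are illustrative assumptions, not real image data.
from sklearn.tree import DecisionTreeClassifier

# Model training: images you've already tagged (1 = contains people, 0 = no people)
tagged_features = [
    [0.72, 0.31],  # portrait
    [0.65, 0.40],  # group photo
    [0.05, 0.80],  # landscape
    [0.10, 0.75],  # building exterior
]
tags = [1, 1, 0, 0]

model = DecisionTreeClassifier().fit(tagged_features, tags)

# Later: an untagged set of images, sorted using the rules learned during training
untagged_features = [[0.70, 0.35], [0.08, 0.78]]
print(model.predict(untagged_features))  # -> [1 0], i.e., "people", "no people"
```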

2. Large language models

AI tools that generate text run on a large language model (LLM). LLMs express the patterns in a text-based training data set. 

To build an LLM, developers create a deep learning algorithm and feed it a large volume of source text. The algorithm identifies patterns in the source data and writes rules that let it reproduce those patterns. This collection of saved rules becomes the LLM. 
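
As a rough illustration of “identify patterns, save them as rules, reuse them,” here’s a toy word-pair model. Real LLMs learn far richer patterns with deep neural networks; this counting approach is only meant to show the principle.

```python
# Toy "LLM": count which word follows which in the source text (the saved patterns),
# then reuse those counts to predict a likely next word. Purely illustrative.
from collections import Counter, defaultdict

source_text = "the cat sat on the mat and the cat sat on the rug"

# "Training": record the observed word-to-word patterns
patterns = defaultdict(Counter)
words = source_text.split()
for current_word, next_word in zip(words, words[1:]):
    patterns[current_word][next_word] += 1

# "Generation": apply the saved rules to predict the most likely next word
def predict_next(word):
    followers = patterns.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat" and "rug" once each)
print(predict_next("cat"))  # -> "sat"
```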

Applications of LLMs include programs for generating synopses of very long texts or recording key topics during meetings, as well as customer service chatbots.

3. Generative models

Generative AI refers to AI tools that can generate content. These tools run on a generative model, a specific type of machine learning model.

Generative models also go by the terms “deep generative models” (because they use a more complex architecture than non-generative models) or “deep learning models” (because deep learning mimics the structures of the human brain).

Generative AI tools might create images or music or use an LLM to generate text, applying a statistical and rule-based decision-making process that predicts the next word in a sentence. 
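
To see that next-word prediction in action, here’s a short sketch using a small pretrained model. It assumes the Hugging Face transformers library and the GPT-2 model, neither of which the article prescribes; they’re simply a convenient way to watch a generative model extend a prompt.

```python
# Generate text by repeatedly predicting the next token. Assumes `pip install transformers`
# (plus a backend such as PyTorch) and downloads the small GPT-2 model on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "An AI hallucination occurs when a model"
result = generator(prompt, max_new_tokens=20, do_sample=True, temperature=0.8)

# The continuation is statistically plausible; it is not guaranteed to be true.
print(result[0]["generated_text"])
```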

What causes AI hallucinations?

Here are some common causes of AI hallucinations:

Low-quality training data

AI text generators don’t think; they reproduce the patterns in their training data. Biased, incorrect, or insufficient training data can cause an AI text generator to fabricate responses or generate errors. For example, if an AI model is trained on a dataset that includes information about only male dentists, the AI may falsely claim all dentists are male. 
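
Here’s a toy sketch of that dentist scenario. The “training data” and the counting “model” are invented for illustration; the point is simply that a pattern-matcher can only echo the skew it was given.

```python
# A biased corpus in which every dentist is referred to as "he"
from collections import Counter

training_sentences = [
    "Dr. Smith is a dentist. He runs a clinic downtown.",
    "Dr. Jones is a dentist. He specializes in orthodontics.",
    "Dr. Brown is a dentist. He sees patients on weekends.",
]

# The only pattern the data contains: "dentist" co-occurs with "he"
pronouns = Counter()
for sentence in training_sentences:
    if "dentist" in sentence.lower():
        for word in sentence.lower().replace(".", "").split():
            if word in ("he", "she", "they"):
                pronouns[word] += 1

# Asked about dentists in general, the model reproduces the skew as if it were fact
print(pronouns.most_common(1))  # [('he', 3)] -> "dentists are male"
```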

External data issues

Some newer content generators can access the internet, which allows them to supplement their training data sets with more current information. However, this approach can introduce inaccurate information into the model’s source data, since AI tools can’t verify the validity of the information and developers can’t vet newly accessed sources. 

For example, Google’s AI responded to a user query for fruits ending in “um” with “applum, bananum, strawberrum, tomatum and coconut,” a nonsensical output based on a user post to the forum Quora. 

Faulty models

Developers build generative AI models with coded “assumptions” that tell the model how to process data and respond to user prompts. When a user asks a question, the model references these developer-embedded assumptions along with the rules its machine learning algorithm wrote during the model training phase. 

Generative models can hallucinate if they encounter a contradiction between these two sets of rules. Poorly designed models or large numbers of assumptions can increase the likelihood of this outcome. 
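
One simple way to picture that tension: developer “assumptions” are often layered on as system-level instructions, while the learned rules live in the model itself. The sketch below assumes the OpenAI Python SDK and an illustrative model name; it isn’t how any particular vendor builds its guardrails, just a way to see instructions pull against what a model actually knows.

```python
# Illustrative only: a system-level "assumption" that can collide with what the model
# learned in training. The openai library and model name are assumptions for this sketch.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        # Developer-embedded assumption: always answer, never hedge
        {"role": "system", "content": "Always give a definitive answer. Never say you don't know."},
        # A question the model's training data cannot cover
        {"role": "user", "content": "What were our company's Q3 regional sales figures?"},
    ],
)

# When "always answer" collides with "no data on this," a confident fabrication
# (a hallucination) becomes the path of least resistance.
print(response.choices[0].message.content)
```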

Bad prompts

Idiomatic prompts or prompts that ask an AI to perform a task outside its purview can cause hallucinations, as can prompts that are too vague, too broad, unrealistic, unethical, or lacking in context. If you ask an AI tool whether it’s sweater weather, for example, the model may misinterpret the idiom and generate a nonsense response. 

AI tools can also produce hallucinations in response to adversarial attacks, which are inputs intentionally designed to deceive the model.

5 tips for preventing AI hallucinations

  1. Use the tool for its intended purpose
  2. Be clear
  3. Break prompts into steps
  4. Provide parameters
  5. Verify information

You can’t eliminate the possibility an AI model will hallucinate, but you can reduce the likelihood of errors by following best practices for AI use. Here are five tips to help you prevent AI hallucinations:

1. Use the tool for its intended purpose

Many generative AI tools have a narrow purpose, meaning the model’s architecture, assumptions, and training data prepare it to perform a specific type of task. You’re more likely to receive relevant and accurate information by selecting the right tool for your project. 

For example, if you ask a scientific research tool to write code, a code generation tool to cite case law, or a legal citation tool to reference scientific writing, the tool will likely refuse your query or generate false information.

2. Be clear

Clear, direct prompts help AI tools produce relevant, factually correct answers. For example, instead of prompting a tool to generate “a list of San Francisco New York attorneys,” you might ask for “a list of law firms with offices in both San Francisco and New York.” The first query could potentially result in a list of attorneys in either city, a refusal to answer, or a fabrication. 

Avoid using idiomatic expressions or slang, since models can incorrectly identify the meaning of less common words and phrases.
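
If you prompt through an API rather than a chat window, the same advice applies. Here’s a quick sketch, assuming the OpenAI Python SDK and an illustrative model name (neither is required by anything above):

```python
# Compare a vague prompt with a clear, literal one. Library and model are assumptions.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Give me a list of San Francisco New York attorneys."
clear_prompt = "Give me a list of law firms that have offices in both San Francisco and New York."

for prompt in (vague_prompt, clear_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\nANSWER: {response.choices[0].message.content}\n")
```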

3. Break prompts into steps

AI tools can get things wrong, and the more complex a prompt, the greater the opportunity for a tool to hallucinate. You can increase the accuracy of the content generated and improve your ability to vet responses by breaking your prompts down into steps. 

Imagine you own an ecommerce pet food company and ask an AI tool how to increase your earnings. It tells you to become a radiologist—a potentially viable but likely irrelevant suggestion. Because of the nature of your prompt, you can’t tell how the tool arrived at its conclusion. A better approach might involve breaking the prompt down as follows:

1. Ask the tool to generate a list of categories it could use to group the observed trends in your customer feedback.

2. Ask the tool to identify factors that drive purchase decisions in your industry and rank them from most to least influential.

3. Ask the tool to recommend categories to organize your review data based on the previous two responses.

4. Ask the tool to identify your strengths and weaknesses for each recommended category according to your customer reviews.

5. Ask the tool to recommend changes that can help you increase your sales based on its previously generated information.

This approach decreases the likelihood your AI tool will hallucinate. It also gives you insight into the tool’s problem-solving methodology, allowing you to spot hallucinations and course-correct before the tool arrives at recommendations. 
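
Here’s roughly what that chained flow could look like in code, assuming the OpenAI Python SDK; the step wording and model name are placeholders. The important parts are that each answer gets printed for human review and then carried into the next prompt.

```python
# Break one big question into reviewable steps, feeding each answer into the next prompt.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

steps = [
    "List categories that could group the trends in this customer feedback: {context}",
    "Rank the factors that drive purchase decisions in ecommerce pet food, most to least influential. Context so far: {context}",
    "Based on the context so far, recommend categories for organizing our review data. Context: {context}",
    "For each recommended category, summarize our strengths and weaknesses from the reviews. Context: {context}",
    "Recommend changes likely to increase sales, based only on the context so far: {context}",
]

context = "Customer reviews: [paste your review excerpts here]"
for step in steps:
    answer = ask(step.format(context=context))
    print(answer, "\n---")       # human checkpoint: vet each answer before moving on
    context += "\n" + answer     # carry the vetted answer into the next step
```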

4. Provide parameters 

You can use examples, source restrictions, or other parameters to help AI tools return accurate, valuable information. For example, you might accompany a research query with the instruction to only reference peer-reviewed sources or disregard data from before 2008. You can also use data templates to control how AI tools process and organize information. 
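
For instance, a parameterized research prompt with a simple data template might be assembled like this (the parameter values and template columns are only examples):

```python
# Build a prompt that spells out source restrictions, a date cutoff, and an output template.
parameters = {
    "sources": "peer-reviewed journals only",
    "earliest_year": 2008,
    "output_format": "markdown table",
}

template = (
    "| Finding | Source (journal, year) | Sample size |\n"
    "|---------|------------------------|-------------|"
)

prompt = (
    "Summarize recent research on probiotic supplements for dogs.\n"
    f"Constraints: use {parameters['sources']}; disregard data from before {parameters['earliest_year']}.\n"
    f"Return the answer as a {parameters['output_format']} using this template:\n{template}"
)

print(prompt)  # send this to whichever AI tool you use
```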

5. Verify information 

Although fact checking doesn’t technically prevent AI hallucinations, human oversight is a critical element of responsible AI use, and it can help you avoid the real-world consequences of acting on bad information. Use third-party sources to double-check AI information. Ask AI tools to cite sources so you can vet them for authority and confirm that the AI outputs accurately represent the source material. 

AI hallucination FAQ

How often does AI hallucinate?

One study measured AI hallucination rates at 3% to 27%.

Can AI hallucinations be fixed?

Tech companies are experimenting with solutions to AI’s hallucination problem. OpenAI, for example, has worked to reduce hallucination rates with training techniques such as reinforcement learning from human feedback (RLHF).

What is an example of an AI hallucination?

In 2023, Google’s Bard stated that the James Webb Space Telescope (JWST) had captured the first picture of a planet outside of Earth’s solar system, despite the fact that a different telescope photographed a planet outside the solar system more than a decade before JWST’s launch.
