
AI “Hallucination” Is a Misleading Term

The term “hallucination” in reference to generative AI has never struck me as being right, and I just came across an article on Bloomberg that does a good job of explaining why.

Calling the false statements generated by AI “hallucinations” wrongly personifies LLMs, which have no mind and no ability to “hallucinate” in any human sense.

Further, LLMs are simply doing what they were built to do. Fundamentally, an LLM responds to a prompt with the most likely words or images, based on a probability mapping learned during training.

LLMs have no concept of the real world and are ALWAYS making things up, and sometimes those things happen to be true.
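To make that concrete, here is a minimal sketch in Python (using a made-up context and toy probabilities, not any real model or API) of what “responding with the most likely words” amounts to: the model scores candidate tokens and samples one, with no check anywhere on whether the result is true.

```python
import random

# Toy illustration: a hypothetical probability mapping from a context
# to candidate next tokens. A real LLM computes these scores with a
# neural network over a vocabulary of tens of thousands of tokens.
next_token_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # happens to be true
        "Sydney": 0.40,     # plausible-sounding but false
        "Melbourne": 0.05,
    }
}

def generate_next_token(context: str) -> str:
    """Sample the next token from the toy probability distribution.

    Nothing here checks truth; the "model" only knows which continuations
    were statistically likely in its training data.
    """
    probs = next_token_probs[context]
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(generate_next_token("The capital of Australia is"))
# Roughly 40% of the time this prints "Sydney" -- a confident falsehood.
```

Whether the output is “Canberra” or “Sydney”, the procedure is identical; truth is never part of the computation.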

A choice quote from the article:

The term “hallucinate” obscures what’s really going on. It also serves to absolve the systems’ creators from taking responsibility for their products. (Oh, it’s not our fault, it’s just hallucinating!)

And it’s not clear that there is a ready solution to LLMs confidently responding with bullshit.

With the recent additions of web browsing and plug-ins, ChatGPT can at least verify facts for things like math and basic science against reliable sources, but not everything in the world has a clear-cut answer that everyone agrees on.

In the field of epistemology, philosophers have long asked what about the world is knowable, and what knowledge can be proven unquestionably true. The answer, when you look hard enough, is: not much.

The best we can hope for, then, is for these systems to present the different views on a subject, their reasoning, and some citations. It will still take critical thinking on the part of the reader to decide what is true and what is a “hallucination”.
