

Artificial Intelligence and the research process

Hallucinations

Hallucinations are:

  • a “phenomenon wherein a large language model (LLM)…perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.” [1]
  • “incorrect or misleading results that AI models generate…caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.” [2]

Some research has found that LLM chatbots hallucinate in as many as 27% of their responses, and that 46% of generated texts contain factual errors. [3]

Even AI tools and LLMs trained on "real" data (e.g., academic publications) can hallucinate, so output should always be verified before it is used.

[1] https://www.ibm.com/topics/ai-hallucinations
[2] https://cloud.google.com/discover/what-are-ai-hallucinations
[3] https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html and https://doi.org/10.1016/j.nlp.2023.100024