Hallucinations are outputs from an AI tool or LLM that are inaccurate, fabricated, or not grounded in real data, yet are presented as if they were factual. [1][2]
Some research has found that LLMs hallucinate as much as 27% of the time, and that factual errors appear in 46% of their output. [3]
Even AI tools and LLMs developed from “real” data (e.g., academic publications) can hallucinate, so their output should always be verified before it is used.
[1] https://www.ibm.com/topics/ai-hallucinations
[2] https://cloud.google.com/discover/what-are-ai-hallucinations
[3] https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html; https://doi.org/10.1016/j.nlp.2023.100024