Researchers at the University of Oxford have developed a way to detect instances in which an AI language model is "unsure" of what it is generating, or is "hallucinating". The term refers to cases where a language model produces responses that, while fluent and plausible, are inconsistent and not grounded in truth.
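The article does not spell out the mechanism at this point, but the general idea behind uncertainty tests of this kind can be illustrated with a minimal sketch: sample several answers to the same question, group answers that agree, and treat high disagreement (high entropy) as a sign the model is unsure. This is an illustrative assumption, not the Oxford researchers' published method, and `sample_answers` is a hypothetical stand-in for repeated calls to a real model.

```python
# Illustrative sketch only: NOT the Oxford study's actual method.
# Assumes an agreement-based uncertainty check over repeated samples.

import math
from collections import Counter

def sample_answers(question: str, n: int = 5) -> list[str]:
    """Hypothetical placeholder: in practice this would query a language
    model n times with sampling enabled and return its n answers."""
    # Hard-coded example outputs for demonstration purposes.
    return ["Paris", "Paris", "Lyon", "Paris", "Marseille"]

def answer_entropy(answers: list[str]) -> float:
    """Shannon entropy (in bits) over groups of identical answers."""
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    answers = sample_answers("What is the capital of France?")
    h = answer_entropy(answers)
    print(f"answers: {answers}")
    print(f"entropy: {h:.2f} bits")  # higher entropy suggests the model is unsure
```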
The concept…
