

How much do large language models actually hallucinate when answering questions grounded in provided documents?
Okay, this is looking promising, at least in terms of the most important qualifications being plainly stated in the opening line.
Because the rate of hallucinations/inaccuracies “in the wild” - depending on the model being tested - runs about 60-80%. But then again, that reflects average use on generalized data sets, not questions focused on specific documentation. So of course the “in the wild” questions will see a higher rate.
This also helps users, as it shows that hallucinations/inaccuracies can be reduced by as much as ⅔ simply by limiting LLMs to specific documentation that the user is certain contains the desired information, rather than letting them trawl world+dog.
Very interesting!

As I pointed out in another root comment, the average - depending on the model being tested - tends to sit between 60% and 80%. But this is with no restriction on source materials… the LLMs are essentially pulling from world+dog in that case.
So this opens up an interesting option for users, in that hallucinations/inaccuracies can be controlled for and potentially reduced by as much as ⅔ simply by restricting the model to those documents/resources that the user is absolutely certain contain the correct answer.
I mean, 25% is still stupidly high. In any prior era, even 2.5% would have been an unacceptably high error rate for a business to stomach. But source restriction seems to be a somewhat promising guardrail for the average user doing personal work.