measurement
The authors of the study 'A framework to assess clinical safety and hallucination rates of LLMs' reported a hallucination rate of 1.47% and an omission rate of 3.45% in their evaluation of large language models.
Referenced by nodes (1)
- Large Language Models concept