Relations (1)
cross_type 1.00, strongly supporting 1 fact
Hallucination detection is the primary research focus of the study 'LLM-Check', published in Advances in Neural Information Processing Systems, as described in [1].
Facts (1)
Sources
Re-evaluating Hallucination Detection in LLMs - arXiv (arxiv.org), 1 fact
Reference: Gaurang Sriramanan et al. (2024) developed 'LLM-Check', a method for investigating the detection of hallucinations in large language models, published in Advances in Neural Information Processing Systems, volume 37.