claim
HalluMeasure classifies LLM hallucinations into a novel set of fine-grained error types derived from linguistic patterns, going beyond binary classification or the standard natural-language-inference (NLI) categories of support, refute, and not-enough-information.
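A minimal sketch of the distinction the claim draws: fine-grained error types carry strictly more information than the three coarse NLI labels, since each can be collapsed back onto an NLI label but not recovered from one. The error-type names below are purely illustrative assumptions, not HalluMeasure's actual taxonomy.

```python
from enum import Enum

# The three standard NLI labels named in the claim.
class NLILabel(Enum):
    SUPPORT = "support"
    REFUTE = "refute"
    NOT_ENOUGH_INFO = "not_enough_info"

# Hypothetical fine-grained error types (illustrative only;
# the real HalluMeasure categories are defined by its authors).
class ErrorType(Enum):
    ENTITY_ERROR = "entity_error"              # wrong named entity
    NUMERIC_ERROR = "numeric_error"            # wrong quantity or date
    UNVERIFIABLE = "unverifiable"              # claim absent from source
    OVERGENERALIZATION = "overgeneralization"  # broader than source supports

# Each fine-grained label collapses to a coarse NLI label, showing
# the fine-grained scheme is a refinement of the NLI one.
TO_NLI = {
    ErrorType.ENTITY_ERROR: NLILabel.REFUTE,
    ErrorType.NUMERIC_ERROR: NLILabel.REFUTE,
    ErrorType.UNVERIFIABLE: NLILabel.NOT_ENOUGH_INFO,
    ErrorType.OVERGENERALIZATION: NLILabel.NOT_ENOUGH_INFO,
}

def coarsen(error: ErrorType) -> NLILabel:
    """Map a fine-grained error type onto its coarse NLI label."""
    return TO_NLI[error]
```

The mapping is many-to-one, which is the point of the claim: a fine-grained label pinpoints *how* a claim fails, while the NLI label only says *that* it fails.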
Authors
Sources
- Automating hallucination detection with chain-of-thought reasoning www.amazon.science via serper
Referenced by nodes (1)
- natural language inference (NLI) concept