Relations (1)

related 1.00 — strongly supporting 1 fact

Large Language Models are related to the concept of mind through the BAFH framework, which evaluates the belief states of models like Gemma-2 and Llama-3.1 against the MIND baseline as described in [1].

Facts (1)

Sources
EdinburghNLP/awesome-hallucination-detection (github.com) — 1 fact
procedure — The BAFH framework is a lightweight method that trains a feedforward classifier on the hidden states of Large Language Models to determine belief states and classify hallucination types; it is evaluated against the MIND and SAR baselines using the Gemma-2, Llama-3.1, and Mistral models.
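The fact above describes BAFH as a lightweight probe: a small feedforward classifier trained on LLM hidden states to separate faithful from hallucinated outputs. The following is a minimal sketch of that general technique under stated assumptions — the hidden states are synthetic stand-ins, and the dimensions, architecture, and training loop are illustrative choices, not the framework's actual configuration.

```python
# Hypothetical BAFH-style probe: train a small feedforward classifier on
# (synthetic stand-ins for) LLM hidden states to predict whether an output
# is faithful or hallucinated. All sizes and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

D = 16   # stand-in for the LLM hidden-state dimension
N = 200  # examples per class

# Synthetic "hidden states": two clusters, faithful vs hallucinated.
faithful = rng.normal(loc=+1.0, scale=0.5, size=(N, D))
halluc = rng.normal(loc=-1.0, scale=0.5, size=(N, D))
X = np.vstack([faithful, halluc])
y = np.concatenate([np.zeros(N), np.ones(N)])  # 1 = hallucinated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: the "lightweight feedforward classifier".
H = 8
W1 = rng.normal(scale=0.1, size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=H); b2 = 0.0
lr = 0.1

for _ in range(300):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    p = sigmoid(h @ W2 + b2)       # P(hallucinated)
    grad_logit = (p - y) / len(y)  # gradient of mean BCE loss w.r.t. logit
    # Backpropagate through the two layers.
    gW2 = h.T @ grad_logit; gb2 = grad_logit.sum()
    gh = np.outer(grad_logit, W2) * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"train accuracy: {accuracy:.2f}")
```

In a real setting the feature vectors would come from the model's hidden states at generation time rather than synthetic clusters, and the classifier would predict hallucination types rather than a single binary label.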