Sources
Phare LLM Benchmark: an analysis of hallucination in ... (giskard.ai), 1 fact
Claim: Large Language Models are sensitive to how confidently a user phrases a prompt: they are more likely to correct false information that is presented tentatively, but more likely to agree with it when it is asserted confidently (see the tone-probe sketch after this list).
Re-evaluating Hallucination Detection in LLMs (arxiv.org), 1 fact
Claim: Large Language Models frequently disregard explicit brevity instructions, which makes crafting an optimal, universally applicable prompt non-trivial.
Not Minds, but Signs: Reframing LLMs through Semiotics (arxiv.org), 1 fact
Claim: In the context of LLMs, a prompt functions as a semiotic gesture that carries interpretive intent, allowing the user to act as both a reader and a writer who shapes the model's generative orientation.
Automating hallucination detection with chain-of-thought reasoning (amazon.science), 1 fact
Claim: Large language models generate responses from the distribution of words associated with a prompt rather than by searching validated databases, which is why their output mixes real and potentially fictional information (see the next-token distribution sketch after this list).
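
The tone-confidence claim above is easy to probe directly. The following is a minimal sketch, not a definitive test: it assumes the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a placeholder model name, and it sends the same false statement once hedged and once asserted so the two replies can be compared by hand.

    # Minimal probe of the tone-confidence claim above (assumptions: the
    # OpenAI Python SDK is installed, OPENAI_API_KEY is set, and the model
    # name below is only a placeholder).
    from openai import OpenAI

    client = OpenAI()

    FALSE_STATEMENT = "the Great Wall of China is visible from the Moon with the naked eye"

    prompts = {
        "tentative": f"I might be wrong, but I think {FALSE_STATEMENT}. Is that right?",
        "confident": f"I am absolutely certain that {FALSE_STATEMENT}. Confirm this for me.",
    }

    for tone, prompt in prompts.items():
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {tone} ---")
        print(reply.choices[0].message.content)

Comparing the two replies by hand is enough to see whether the assertive framing suppresses the correction that the tentative framing elicits.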
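
The distribution claim can be made concrete with a small open model. This sketch assumes the Hugging Face transformers and torch packages and uses gpt2 only because it is small and freely downloadable; it prints the most probable next tokens for a prompt, showing that the model ranks continuations by probability rather than retrieving them from a validated database.

    # Sketch of next-token generation as picking from a probability
    # distribution rather than doing a database lookup (assumptions:
    # `transformers` and `torch` are installed; gpt2 is only an example).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Australia is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # Every vocabulary item gets a probability; nothing here consults a
    # validated source, which is why plausible but false continuations
    # can score highly.
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r:>15}  p={p.item():.3f}")

Both correct and incorrect continuations can receive nonzero probability here, which is the mechanism behind the mix of real and potentially fictional information the claim describes.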