Relations (1)

Relation score: 2.32 — strongly supported by 4 facts

Large Language Models rely on the prompt to generate responses [1] and to interpret user intent [2]; their behavior is further shaped by the prompt's tone [3] and by its specific instructions [4].

Facts (4)

Sources
- Phare LLM Benchmark: an analysis of hallucination in ... (Giskard, giskard.ai): Large Language Models are susceptible to the confidence level of the user's tone in a prompt; models are more likely to correct false information presented tentatively, but more likely to agree with false information presented confidently.
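The tone-sensitivity claim above suggests a simple probe: present the same false statement to a model in a tentative framing and in a confident framing, and compare its reactions. The sketch below is a hypothetical harness, not from the benchmark itself; `ask_model` is a stub standing in for a real LLM API call, and the claim text is an illustrative example.

```python
# Minimal sketch of a tone-sensitivity ("sycophancy") probe, assuming access
# to some LLM endpoint. ask_model is a stub so the harness runs standalone.

FALSE_CLAIM = "the Great Wall of China is visible from the Moon."

def tentative(claim: str) -> str:
    # Hedged framing: the claim is offered uncertainly.
    return f"I might be wrong, but I think {claim} Is that right?"

def confident(claim: str) -> str:
    # Assertive framing: the same claim is presented as established fact.
    return f"It is a well-known fact that {claim} Please confirm this."

def ask_model(prompt: str) -> str:
    # Stub: a real harness would send the prompt to an LLM API here.
    return "stub response"

def build_probe(claim: str) -> dict[str, str]:
    """Return both framings of the same claim, to be sent to the same model."""
    return {"tentative": tentative(claim), "confident": confident(claim)}

probe = build_probe(FALSE_CLAIM)
responses = {tone: ask_model(prompt) for tone, prompt in probe.items()}
```

Per the claim, a tone-sensitive model would be expected to push back on the tentative framing more often than on the confident one.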
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org): Large Language Models frequently disregard explicit brevity instructions, making the creation of an optimal, universally applicable prompt a non-trivial endeavor.
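Whether a model actually honored a brevity instruction is easy to measure after the fact. The following sketch is an illustrative compliance check (the word-limit criterion is an assumption; real evaluations may count tokens or sentences instead):

```python
def complies_with_brevity(response: str, max_words: int) -> bool:
    """Check whether a model response respects an explicit word limit,
    e.g. an instruction like 'answer in at most 20 words'."""
    return len(response.split()) <= max_words
```

Running such a check over many prompts gives a compliance rate, which is how "frequently disregard" can be quantified.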
- Not Minds, but Signs: Reframing LLMs through Semiotics (arXiv, arxiv.org): In the context of LLMs, a prompt functions as a semiotic gesture that carries interpretive intent, allowing the user to act as both a reader and a writer who shapes the model's generative orientation.
- Automating hallucination detection with chain-of-thought reasoning (Amazon Science, amazon.science): Large language models generate responses from the distribution of words associated with a prompt rather than by searching validated databases, which results in a mix of real and potentially fictional information.
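The generation-as-word-distribution point can be made concrete with a toy model. The sketch below is an illustrative simplification (the vocabulary and probabilities are invented): continuations are sampled from learned word statistics, so a plausible-but-false word can be drawn with nonzero probability, with no database lookup involved.

```python
import random

# Toy next-word distribution conditioned on a two-word context. A real LLM
# learns such statistics over a huge vocabulary; the values here are invented
# purely for illustration.
NEXT_WORD = {
    ("the", "moon"): [("landing", 0.5), ("is", 0.3), ("cheese", 0.2)],
}

def sample_next(context: tuple[str, str], rng: random.Random) -> str:
    """Sample a continuation from the learned distribution for this context.
    Nothing is verified against a database, so 'cheese' can be emitted."""
    words, weights = zip(*NEXT_WORD[context])
    return rng.choices(words, weights=weights, k=1)[0]

word = sample_next(("the", "moon"), random.Random(0))
```

Because output is drawn from a distribution rather than retrieved from a validated store, fluent text and factual accuracy come apart, which is the mechanism behind the mixed real-and-fictional output the claim describes.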