prompt
Facts (17)
Sources
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv | arxiv.org | Jul 1, 2025 | 5 facts
claim: Prompts function as semiotic acts: structuring interventions that frame the interpretive conditions of an LLM's output.
claim: A prompt is a performative act that configures the communicative situation, determining the model's stance, voice, register, and imagined audience.
claim: In human-LLM interaction, the user assumes the role of the 'model reader': constructing the prompt, interpreting the result, and positioning the output within a communicative or discursive frame.
claim: In the context of LLMs, a prompt functions as a semiotic gesture carrying interpretive intent; the user acts as both reader and writer, shaping the model's generative orientation.
claim: When a user crafts a prompt for an LLM, they initiate a semiotic contract that embeds expectations about tone, register, genre, and ideological positioning.
Survey and analysis of hallucinations in large language models | frontiersin.org | Sep 29, 2025 | 4 facts
claim: Consistent hallucinations across different models suggest prompt-induced errors, while divergent hallucination patterns imply architecture-specific behaviors or training artifacts.
claim: A positive Joint Attribution Score (JAS) indicates that specific prompt-model combinations amplify hallucinations beyond what would be expected from individual prompt or model effects alone, suggesting the prompt and model jointly contribute to the error.
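The "beyond individual effects" reading of JAS can be sketched as an interaction term. The exact formula below (an ANOVA-style interaction over hallucination rates) is an assumption for illustration, not necessarily the survey's definition:

```python
def joint_attribution_score(rate_pm, rate_p, rate_m, rate_overall):
    """Interaction-style attribution score (illustrative formula).

    rate_pm:      hallucination rate for this specific prompt-model pair
    rate_p:       average rate for this prompt across all models
    rate_m:       average rate for this model across all prompts
    rate_overall: overall baseline rate across all pairs

    A positive value means the pair hallucinates more than the marginal
    prompt and model effects alone would predict.
    """
    return rate_pm - rate_p - rate_m + rate_overall


# A pair at 40% where the prompt averages 15%, the model 20%, and the
# baseline is 10% yields a positive (joint) score of 0.15.
score = joint_attribution_score(0.40, 0.15, 0.20, 0.10)
```

When the pair's rate is exactly what the marginals predict, the score is zero, matching the fact's interpretation that only the excess is attributed jointly.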
reference: Bang and Madotto (2023) developed neural attribution predictors to identify whether a hallucination originates from the prompt or the model.
claim: If a hallucinated answer disappears when a question is asked more explicitly or via Chain-of-Thought, the cause is likely prompt-related; if the hallucination persists across all prompt variants, the cause likely lies in the model's internal behavior.
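This diagnostic lends itself to a small harness. A minimal sketch, where `answers_fn` (a model API call) and `is_hallucinated` (a ground-truth check) are hypothetical callables supplied by the caller:

```python
def localize_hallucination(prompt_variants, answers_fn, is_hallucinated):
    """Ask the same question via several phrasings (plain, explicit,
    Chain-of-Thought, ...) and classify the likely cause of an error.

    prompt_variants: list of prompt strings for the same question
    answers_fn:      callable mapping a prompt to a model answer
    is_hallucinated: callable checking an answer against ground truth
    """
    flags = [is_hallucinated(answers_fn(p)) for p in prompt_variants]
    if all(flags):
        return "model-internal"   # persists across every prompt variant
    if any(flags):
        return "prompt-related"   # disappears under some rephrasings
    return "no hallucination"
```

The classification mirrors the fact above: persistence across all variants points inside the model, while sensitivity to rephrasing points at the prompt.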
A Survey on the Theory and Mechanism of Large Language Models | arxiv.org | Mar 12, 2026 | 2 facts
reference: The paper 'On zero-initialized attention: optimal prompt and gating factor estimation' was published in the Proceedings of the 42nd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 267), pp. 13713-13745.
reference: The research paper 'Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing' was published as an arXiv preprint (arXiv:2107.13586; the survey's section 7.3.2 miscites it as arXiv:2307.03172, which is 'Lost in the Middle').
Phare LLM Benchmark: an analysis of hallucination in ... | giskard.ai | Apr 30, 2025 | 1 fact
claim: Large Language Models are susceptible to the confidence level of the user's tone in a prompt; models are more likely to correct false information presented tentatively but more likely to agree with false information presented confidently.
Applying Large Language Models in Knowledge Graph-based ... | arxiv.org | Jan 7, 2025 | 1 fact
procedure: The biomedical concept-linking approach developed by Wang et al. follows a two-stage procedure: (1) embed biomedical concepts into the overall context via a prompt, then (2) perform similarity mapping to identify the top candidates matching an input concept.
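Stage 2 of this procedure can be sketched with cosine similarity over embeddings. The vectors and concept names below are placeholders; stage 1 (the prompt-based contextual embedding) is assumed to have produced them:

```python
from math import sqrt


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))


def top_candidates(input_vec, candidates, k=3):
    """Stage 2: similarity-map the input concept's embedding against
    candidate embeddings and keep the top-k matches.

    input_vec:  embedding of the concept to link (from stage 1)
    candidates: dict mapping candidate concept name -> embedding
    """
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(input_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

With real embeddings from stage 1, the returned names would be the linking candidates passed on for final selection.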
Re-evaluating Hallucination Detection in LLMs - arXiv | arxiv.org | Aug 13, 2025 | 1 fact
claim: Large Language Models frequently disregard explicit brevity instructions, making the creation of an optimal, universally applicable prompt a non-trivial endeavor.
Detect hallucinations for RAG-based systems - AWS | aws.amazon.com | May 16, 2025 | 1 fact
code:
prompt = """
Human: You are an expert assistant helping a human check if statements are based on the context. Your task is to read the context and statement and indicate which sentences in the statement are based directly on the context. Provide the response as a number, where the number represents a hallucination score, which is a float between 0 and 1. Set the float to 0 if you are confident that the sentence is directly based on the context. Set the float to 1 if you are confident that the sentence is not based on the context. If you are not confident, set the score to a float between 0 and 1, where higher numbers represent higher confidence that the sentence is not based on the context. Do not include any information other than the score in the response. There is no need to explain your thinking."""
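A minimal sketch of how such a grading prompt might be assembled and its reply parsed. The field labels (`Context:`, `Statement:`), the trailing `Assistant:` turn, and the defensive clamping are assumptions for illustration, not details from the AWS post:

```python
def build_scoring_prompt(instructions, context, statement):
    """Combine the grading instructions (e.g. the prompt string above)
    with the context and the statement to be checked."""
    return (f"{instructions}\n\n"
            f"Context: {context}\n"
            f"Statement: {statement}\n\n"
            f"Assistant:")


def parse_score(model_reply):
    """The instructions request a bare float in [0, 1]; strip whitespace
    and clamp in case the model drifts slightly out of range."""
    return min(1.0, max(0.0, float(model_reply.strip())))
```

Parsing raises `ValueError` if the model ignores the "no other information" instruction, which a caller could catch and treat as a retry signal.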
Automating hallucination detection with chain-of-thought reasoning | amazon.science | 1 fact
claim: Large language models generate responses based on the distribution of words associated with a prompt rather than searching validated databases, which results in a mix of real and potentially fictional information.
A framework to assess clinical safety and hallucination rates of LLMs ... | nature.com | May 13, 2025 | 1 fact
claim: Modifying the prompt from the baseline used in Experiment 1 to include the style update used in Experiment 8 reduced both major and minor omissions, though it caused a slight increase in minor hallucinations.