prompting strategies
Also known as: prompting approach, prompting strategy, prompting strategies, prompting
Facts (18)
Sources
Combining large language models with enterprise knowledge graphs frontiersin.org Aug 26, 2024 5 facts
claim: Inaccurate Named Entity Recognition and Relation Extraction prompting results can be corrected through active learning techniques (Wu et al., 2022) or by distilling large pre-trained language models into smaller models for specific tasks (Agrawal et al., 2022).
claim: Wang et al. (2023) address prompting hallucination issues by enriching prompts and reducing hallucinations via self-verification strategies.
claim: Prompting in information extraction tasks faces hallucination issues, where models overconfidently label negative inputs as entities or relations.
claim: Generative models like ChatGPT can quickly become outdated or change unexpectedly, which compromises the reproducibility and efficiency of prompting techniques, according to Törnberg (2024).
procedure: Prompting for Named Entity Recognition involves using entity definitions, questions, sentences, and output examples to guide Large Language Models in understanding entity types and extracting answers (Ashok and Lipton, 2023; Kholodna et al., 2024).
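The NER procedure above amounts to assembling a prompt from an entity definition, the target sentence, a question, and an output example. A minimal sketch follows; the entity definitions, wording, and example outputs are illustrative assumptions, not the exact templates of Ashok and Lipton (2023) or Kholodna et al. (2024).

```python
# Sketch of a definition-guided NER prompt. All definitions and example
# outputs below are illustrative placeholders, not the cited templates.

ENTITY_DEFS = {
    "DISEASE": "a pathological condition affecting an organism",
    "DRUG": "a chemical substance used to treat or prevent disease",
}

EXAMPLE_OUTPUTS = {
    "DISEASE": '["influenza"]',
    "DRUG": '["aspirin"]',
}

def build_ner_prompt(sentence: str, entity_type: str) -> str:
    parts = [
        f"Definition: {entity_type} means {ENTITY_DEFS[entity_type]}.",
        f"Sentence: {sentence}",
        f"Question: List every {entity_type} entity in the sentence.",
        f"Example output: {EXAMPLE_OUTPUTS[entity_type]}",
        "Answer:",
    ]
    return "\n".join(parts)

prompt = build_ner_prompt("Aspirin is often prescribed for fever.", "DRUG")
print(prompt)
```

The definition steers the model's notion of the entity type, while the example output pins the answer format so the extracted entities can be parsed mechanically.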
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv arxiv.org Jul 1, 2025 3 facts
claim: Prompting is a micro-act of navigation within the semiosphere, representing a culturally embedded intervention shaped by the broader ecology of signs in which both the user and the LLM are immersed.
perspective: Prompting can be viewed as a form of semiospheric perturbation that has the potential to generate novel intersections across domains of meaning, such as when a prompt asks to explain thermodynamics using metaphors from fairy tales.
claim: Prompting is not a neutral interface command but a site of semiotic contract where language, intention, and cultural codes converge to co-produce meaning.
Detecting hallucinations with LLM-as-a-judge: Prompt ... - Datadog datadoghq.com Aug 25, 2025 2 facts
Medical Hallucination in Foundation Models and Their ... medrxiv.org Mar 3, 2025 2 facts
claim: The detectability of medical hallucinations depends on the domain expertise of the audience and the quality of the prompting provided to the model; domain experts are more likely to identify subtle inaccuracies than non-experts, according to Asgari et al. (2024) and Liu et al. (2024).
claim: Prompting strategies for hallucination mitigation in medical large language models employ distinct cognitive frameworks to enhance diagnostic reliability.
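One such framework is self-verification, mentioned earlier in the Wang et al. (2023) entry: a draft answer is fed back to the model together with a prompt asking it to check its own claims. The two-pass sketch below is a generic illustration with assumed wording, not the cited authors' exact method.

```python
# Two-pass self-verification sketch: pass 1 drafts an answer, pass 2 asks
# the model to audit that draft. Prompt wording is an assumption.

def draft_prompt(question: str) -> str:
    return f"Question: {question}\nAnswer step by step."

def verify_prompt(question: str, draft: str) -> str:
    return (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Check each claim in the draft against the question. "
        "Reply with a corrected answer, or repeat the draft if no claim is wrong."
    )

question = "Which drug class does aspirin belong to?"
first_pass = draft_prompt(question)
second_pass = verify_prompt(question, "NSAID")
print(second_pass)
```

In practice the draft answer would come from the model's response to `first_pass`; here a placeholder answer stands in so the prompt assembly itself is testable.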
Survey and analysis of hallucinations in large language models frontiersin.org Sep 29, 2025 2 facts
formula: Hallucination events in Large Language Models can be modelled probabilistically, where H denotes a hallucination occurrence, P a prompting strategy, and M model characteristics; Bayes' rule then gives the posterior attribution P(P, M | H) = (P(H | P, M) * P(P, M)) / P(H).
claim: The paper 'Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior' was published in Frontiers in Artificial Intelligence on September 30, 2025, by authors Anh-Hoang D, Tran V, and Nguyen L-M.
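The Bayes-rule attribution in the formula above can be checked with a small numeric example. The probabilities below are illustrative assumptions, not figures from the survey.

```python
# Worked Bayes' rule: posterior probability of a (prompting strategy,
# model) pair given that a hallucination occurred. Numbers are assumed.

p_h_given_pm = 0.30   # P(H | P, M): hallucination rate under this pair
p_pm = 0.25           # P(P, M): prior probability of this pair
p_h = 0.15            # P(H): marginal hallucination rate over all pairs

# P(P, M | H) = P(H | P, M) * P(P, M) / P(H)
p_pm_given_h = p_h_given_pm * p_pm / p_h
print(p_pm_given_h)
```

A pair whose conditional hallucination rate exceeds the marginal rate thus receives a posterior weight larger than its prior, which is what lets the survey attribute hallucinations to prompting strategy versus model behaviour.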
A Survey on the Theory and Mechanism of Large Language Models arxiv.org Mar 12, 2026 2 facts
reference: The paper 'When do prompting and prefix-tuning work? A theory of capabilities and limitations' provides a theoretical analysis of the capabilities and limitations of prompting and prefix-tuning.
claim: The formulation of prompting by Kim et al. (2025b) clarifies the expressivity of prompting and makes explicit the limits imposed by prompt length and precision.
A framework to assess clinical safety and hallucination rates of LLMs ... nature.com May 13, 2025 1 fact
procedure: The researchers conducted 18 iterative experiments testing combinations of prompting and workflow strategies, including structured prompting, atomisation, function calls, JSON-based outputs, an additional LLM revision step, and templating (SOAP: Subjective, Objective, Assessment, Plan).
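Two of the strategies listed, SOAP templating and JSON-based outputs, can be combined in a single prompt contract. The sketch below is a minimal illustration under assumed wording and field names, not the study's actual experimental setup.

```python
import json

# Sketch of templating + JSON-based output: the prompt asks for a SOAP
# note as JSON, and the parser enforces the template on the reply.
# Schema wording and field names are illustrative assumptions.

SOAP_SECTIONS = ["Subjective", "Objective", "Assessment", "Plan"]

def build_soap_prompt(transcript: str) -> str:
    schema = {section: "string" for section in SOAP_SECTIONS}
    return (
        "Summarise the consultation below as a SOAP note.\n"
        f"Return only JSON matching this schema: {json.dumps(schema)}\n"
        f"Consultation: {transcript}"
    )

def parse_soap_output(raw: str) -> dict:
    # Validate the model's JSON reply against the SOAP template.
    note = json.loads(raw)
    missing = [s for s in SOAP_SECTIONS if s not in note]
    if missing:
        raise ValueError(f"missing SOAP sections: {missing}")
    return note

prompt = build_soap_prompt("Patient reports mild headache for two days.")
print(prompt)
```

Constraining the output to a fixed JSON schema is what makes the revision and validation steps in such workflows mechanical: a reply that fails `parse_soap_output` can be rejected or regenerated without human review.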
The Role of Hallucinations in Large Language Models - CloudThat cloudthat.com Sep 1, 2025 1 fact
claim: Techniques such as Retrieval-Augmented Generation (RAG), fact-checking pipelines, and improved prompting can significantly reduce, though not completely prevent, hallucinations in large language models.
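The core of RAG as described above is that retrieved passages are placed into the prompt so the model answers from evidence rather than parametric memory. A toy sketch follows; the keyword-overlap retriever, corpus, and prompt wording are all illustrative assumptions, far simpler than production retrievers.

```python
# Minimal RAG-style prompt assembly: retrieve the most relevant passages
# by naive keyword overlap, then ground the prompt in them.
# Corpus, scoring, and wording are toy assumptions for illustration.

CORPUS = [
    "RAG retrieves documents and conditions generation on them.",
    "Fact-checking pipelines verify model claims against sources.",
    "Hallucinations are fluent but unsupported model outputs.",
]

def retrieve(query: str, k: int = 2) -> list:
    # Rank passages by how many query words they share (descending).
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def build_rag_prompt(question: str) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(question))
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

rag_prompt = build_rag_prompt("how does rag ground generation")
print(rag_prompt)
```

The "using only this context" instruction is the prompting half of the mitigation; the retrieval half supplies the evidence. As the claim notes, neither half eliminates hallucination, since the model can still misread or ignore the supplied passages.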