prompts
Facts (16)
Sources
LLM Observability: How to Monitor AI When It Thinks in Tokens | TTMS (ttms.com, Feb 10, 2026, 3 facts)
claim: Elastic has developed an LLM observability module that collects prompts, responses, latency metrics, and safety signals into Elasticsearch indices for organizations using the Elastic Stack.
procedure: Most teams implement LLM observability by logging prompts and responses and capturing metadata such as the model version, parameters like temperature, and safety filter flags.
claim: Datadog enables end-to-end tracing of AI requests, capturing prompts and responses as spans, logging token usage and latency, and evaluating outputs for quality or safety issues.
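The logging procedure described above can be sketched as a minimal record builder. This is an illustrative assumption, not any vendor's schema: the field names, the whitespace token count, and the list used as a sink (standing in for an Elasticsearch index or trace pipeline) are all hypothetical.

```python
import json
import time
import uuid

def log_llm_call(prompt, response, model_version, temperature, safety_flags, sink):
    """Append one structured observability record for a single LLM request.

    `sink` is a plain list here; a real deployment would ship the record
    to a log pipeline or search index instead.
    """
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "model_version": model_version,
        "temperature": temperature,
        "safety_flags": safety_flags,
        # Crude whitespace proxy for token counts; a real system would
        # use the model's own tokenizer.
        "prompt_tokens": len(prompt.split()),
        "response_tokens": len(response.split()),
    }
    sink.append(json.dumps(record))
    return record

sink = []
rec = log_llm_call(
    prompt="Summarize this report.",
    response="The report covers Q3 revenue.",
    model_version="demo-model-1",   # hypothetical model name
    temperature=0.2,
    safety_flags={"pii_detected": False},
    sink=sink,
)
```

Keeping the record flat and JSON-serializable is what makes the same payload usable for both search indexing and span attributes.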
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, Mar 12, 2026, 3 facts)
reference: The paper 'Unveiling and manipulating prompt influence in large language models' examines how prompts influence model outputs and how this influence can be manipulated.
reference: The paper 'Do prompt-based models really understand the meaning of their prompts?' analyzes the semantic understanding of prompts in prompt-based models.
claim: Oymak et al. (2023) characterize how gradient descent naturally guides prompts to focus on sparse, task-relevant tokens.
On Hallucinations in Artificial Intelligence–Generated Content ... (jnm.snmjournals.org, 2 facts)
claim: Even in well-trained, high-performing AI models, hallucinations may arise from input perturbations or suboptimal prompts.
claim: Carefully formulated prompts that clearly define response boundaries and expectations reduce ambiguity and guide AI models toward more precise and reliable outputs.
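A prompt that "defines response boundaries and expectations" in the sense of the fact above might look like the following sketch. The template wording, word limit, and fallback string are illustrative assumptions, not taken from the source:

```python
def bounded_prompt(question, source_text):
    """Build a prompt that constrains the model to the supplied source text
    and defines an explicit fallback, reducing ambiguity-driven hallucination."""
    return (
        "Answer using ONLY the source text below.\n"
        "If the answer is not in the source, reply exactly: INSUFFICIENT EVIDENCE.\n"
        "Keep the answer under 50 words.\n\n"
        f"Source:\n{source_text}\n\n"
        f"Question: {question}"
    )

p = bounded_prompt("What was measured?", "The scan measured glucose uptake.")
```

The explicit fallback matters most: without a sanctioned way to decline, a model under a suboptimal prompt tends to fill the gap with a fabricated answer.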
Detecting hallucinations with LLM-as-a-judge: Prompt ... - Datadog (datadoghq.com, Aug 25, 2025, 2 facts)
claim: Prompts are used to augment labeled data with reasoning chains for supervised fine-tuning (SFT), or in SFT initialization steps before reinforcement learning (RL).
procedure: Datadog's approach to hallucination detection involves enforcing structured output and guiding reasoning through explicit prompts.
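The procedure above, enforcing structured output from an LLM judge, can be approximated as a prompt template plus a strict parser. This is a sketch of the general technique, not Datadog's actual implementation; the JSON schema, template text, and the simulated judge output are all assumptions:

```python
import json

# Hypothetical judge prompt: asks for reasoning, then a single JSON verdict.
JUDGE_TEMPLATE = (
    "You are a hallucination judge. Compare the answer to the context.\n"
    "Reason step by step, then output ONLY a JSON object of the form:\n"
    '{"verdict": "supported" | "hallucinated", "reason": "<one sentence>"}\n\n'
    "Context: <context here>\nAnswer: <answer here>"
)

def parse_verdict(raw_output):
    """Extract and validate the judge's structured verdict, rejecting any
    output that does not match the enforced schema."""
    start = raw_output.find("{")
    end = raw_output.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in judge output")
    verdict = json.loads(raw_output[start:end + 1])
    if verdict.get("verdict") not in {"supported", "hallucinated"}:
        raise ValueError("verdict outside enforced schema")
    return verdict

# Simulated judge output; a real system would call an LLM with JUDGE_TEMPLATE.
raw = (
    "The answer adds a figure absent from the context. "
    '{"verdict": "hallucinated", "reason": "Contains an unsupported number."}'
)
v = parse_verdict(raw)
```

Validating the parsed object against a closed set of verdict values is what makes the output machine-actionable: free-text judgments cannot be aggregated or alerted on.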
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org, Sep 22, 2025, 1 fact)
reference: StraGo, proposed by Wu et al. (2024c), enhances the quality and stability of prompts by using in-context learning to apply insights and strategic guidance learned from historical prompts.
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com, Nov 4, 2024, 1 fact)
perspective: Reliance on well-designed prompts can limit an LLM's flexibility in handling varied inputs, because designing effective prompts requires a deep understanding of both the specific task and the model's behavior.
Unknown source (1 fact)
claim: Large language models (LLMs) can exhibit model-intrinsic hallucinations due to limitations in training data and architectural biases, even when well-organized prompts are used.
LLM-Powered Knowledge Graphs for Enterprise Intelligence and ... (arxiv.org, Mar 11, 2025, 1 fact)
claim: Experiments comparing entity extraction methods showed that using enriched contextual information significantly outperforms methods relying on basic prompts or few-shot examples.
The construction and refined extraction techniques of knowledge ... (nature.com, Feb 10, 2026, 1 fact)
claim: In the study 'The construction and refined extraction techniques of knowledge', external partners access de-identified corpora under data-use agreements; model artifacts, prompts, and code are shareable, while raw data remain on-premise.
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv (arxiv.org, Jul 1, 2025, 1 fact)
claim: The operational semiotic framework suggests that designers should treat prompts not as commands but as curatorial acts that intervene in an interpretive ecology comprising user intention, model architecture, training data, and cultural background.