Concept: context

Facts (21)

Sources
A Knowledge-Graph Based LLM Hallucination Evaluation Framework · themoonlight.io · The Moonlight · 4 facts
procedure: The GraphEval framework detects hallucinations by using a pretrained Natural Language Inference (NLI) model to compare each triple in the constructed Knowledge Graph against the original context, flagging a triple as a hallucination if the NLI model predicts inconsistency with a probability score greater than 0.5.
procedure: The GraphCorrect strategy rectifies hallucinations by identifying inconsistent triples, sending the problematic triple and context back to an LLM to generate a corrected version, and substituting the new triple into the original output to ensure localized correction without altering unaffected sections.
claim: The authors of the GraphEval framework focus on detecting hallucinations within a defined context rather than identifying discrepancies between LLM responses and broader training data.
claim: The GraphEval framework categorizes an entire LLM output as containing a hallucination if at least one triple within the constructed Knowledge Graph is flagged as inconsistent with the provided context.
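The GraphEval-style check described above can be sketched as follows. This is a minimal illustration, not GraphEval's actual implementation: `nli_contradiction_prob` is a hypothetical stand-in for a pretrained NLI model, and the flat "subject relation object" hypothesis string is an assumption.

```python
# Sketch of a GraphEval-style check: flag a KG triple as a hallucination
# when an NLI model scores it as inconsistent with the context (> 0.5).
# `nli_contradiction_prob` is a hypothetical stand-in for a real NLI model.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def detect_hallucinations(
    triples: List[Triple],
    context: str,
    nli_contradiction_prob: Callable[[str, str], float],
    threshold: float = 0.5,
) -> List[Triple]:
    """Return the triples the NLI model flags as inconsistent with the context."""
    flagged = []
    for subj, rel, obj in triples:
        hypothesis = f"{subj} {rel} {obj}"  # verbalize the triple for NLI
        if nli_contradiction_prob(context, hypothesis) > threshold:
            flagged.append((subj, rel, obj))
    return flagged

def output_is_hallucinated(flagged: List[Triple]) -> bool:
    # Per the fact above: the whole output counts as containing a
    # hallucination if at least one triple is flagged.
    return len(flagged) > 0
```

A real system would plug in an actual NLI model (e.g., an entailment classifier) for `nli_contradiction_prob`; the threshold of 0.5 comes from the source.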
Detecting hallucinations with LLM-as-a-judge: Prompt ... - Datadog · datadoghq.com · Aritra Biswas, Noé Vernier (Datadog) · Aug 25, 2025 · 4 facts
claim: In Datadog's chain-of-thought prompts and rubrics, referring to the context as 'expert advice' and the answer as a 'candidate answer' creates an asymmetry that frames the context as the definitive source of truth.
claim: Faithfulness evaluation assumes the provided context is correct and acts as ground truth; verifying the accuracy of the context itself is considered an independent problem.
claim: SLM-as-a-judge approaches for hallucination detection often fail in complex use cases, particularly when the context and answer are large and involve layers of reasoning.
claim: Datadog classifies disagreements between an LLM-generated answer and the provided context into two types: contradictions, which are claims that go directly against the context, and unsupported claims, which are parts of the answer not grounded in the context.
How Enterprise AI, powered by Knowledge Graphs, is ... · blog.metaphacts.com · metaphacts · Oct 7, 2025 · 3 facts
procedure: The 'decision transformation' process for business intelligence follows a predictable journey consisting of three steps: (1) Data + context = information, (2) Information + meaning = knowledge, (3) Knowledge + action = decision.
claim: In the 'decision transformation' framework, data is defined as raw facts such as customer transactions, sensor readings, and financial records, which require context to become information.
claim: In the 'decision transformation' framework, context connects raw data to its business environment, allowing data to be interpreted (e.g., determining if a sales increase is seasonal, regional, or competitive).
Detect hallucinations for RAG-based systems - AWS · aws.amazon.com · Amazon Web Services · May 16, 2025 · 2 facts
procedure: A RAG-based hallucination detection system requires the storage of three specific data components: the context (text relevant to the user's query), the question (the user's query), and the answer (the response provided by the LLM).
claim: In the AWS hallucination detection method, the hallucination score is a float between 0 and 1, where 0 indicates the sentence is directly based on the context and 1 indicates the sentence has no basis in the context.
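The two AWS facts above can be combined into a small data-model sketch: store the (context, question, answer) triple and attach per-sentence hallucination scores in [0, 1]. The field names and the `RagRecord` class are illustrative assumptions, not AWS's actual schema.

```python
# Sketch of the record described above: store context, question, and answer,
# plus per-sentence hallucination scores in [0, 1] (0 = directly based on
# the context, 1 = no basis in the context). Names are illustrative, not
# AWS's actual schema.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RagRecord:
    context: str   # text relevant to the user's query
    question: str  # the user's query
    answer: str    # the response provided by the LLM
    sentence_scores: Dict[str, float] = field(default_factory=dict)

    def add_score(self, sentence: str, score: float) -> None:
        # Enforce the documented score convention.
        if not 0.0 <= score <= 1.0:
            raise ValueError("hallucination score must be in [0, 1]")
        self.sentence_scores[sentence] = score
```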
Open source software best practices and supply chain risk ... - GOV.UK · gov.uk · Department for Science, Innovation and Technology · Mar 3, 2025 · 1 fact
measurement: The scoring methodology for Open Source Software (OSS) assessment assigned scores of 0 (non-existent), 0.33 (basic), 0.66 (intermediate), or 1 (comprehensive) to four aspects: adoption, management, community engagement, and context.
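The rubric above maps each of the four aspects to one of four levels. As a worked example, the levels can be tallied into an overall score; note that averaging the four aspect scores is an illustrative assumption, since the source defines only the per-aspect scale.

```python
# GOV.UK OSS rubric: each aspect gets one of four levels. Averaging the
# four aspect scores into an overall score is an assumption for
# illustration; the source defines only the per-aspect scale.
LEVELS = {"non-existent": 0.0, "basic": 0.33, "intermediate": 0.66, "comprehensive": 1.0}
ASPECTS = ("adoption", "management", "community engagement", "context")

def overall_score(ratings: dict) -> float:
    """Average the level scores across the four rated aspects."""
    if set(ratings) != set(ASPECTS):
        raise ValueError("all four aspects must be rated")
    return sum(LEVELS[level] for level in ratings.values()) / len(ASPECTS)
```

For instance, a project rated comprehensive on every aspect scores 1.0, while one rated basic across the board scores 0.33.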
Self-awareness, self-regulation, and self-transcendence (S-ART) · frontiersin.org · Frontiers in Human Neuroscience · 1 fact
reference: Richard J. Davidson, D. C. Jackson, and Ned H. Kalin authored the 2000 paper 'Emotion, plasticity, context, and regulation: perspectives from affective neuroscience', published in Psychological Bulletin.
Day-5 | Anu Anuja - LinkedIn · linkedin.com · Anu Anuja · Feb 20, 2026 · 1 fact
claim: The 'context' roadblock in HealthTech AI occurs when models perform well on curated data but struggle with real-world variability, incomplete records, inconsistent inputs, or workflows that deviate from expected paths.
Top 10 Use Cases: Knowledge Graphs - Neo4j · neo4j.com · Feb 1, 2021 · 1 fact
claim: Search systems fail to provide precise results when they lack the context of relationships and metadata.
LLM Observability: How to Monitor AI When It Thinks in Tokens | TTMS · ttms.com · Feb 10, 2026 · 1 fact
claim: In multi-turn interactions, LLMs may experience inconsistencies and drift, where the model contradicts itself or loses track of context, potentially frustrating users and degrading trust.
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends · arxiv.org · arXiv · Nov 7, 2024 · 1 fact
claim: When designing AI explanations, developers should pay attention to five components: perception, semantics, intention, user, and context.
Automating hallucination detection with chain-of-thought reasoning · amazon.science · Amazon Science · 1 fact
procedure: The HalluMeasure procedure for measuring hallucinations consists of the following steps: (1) decompose the LLM response into a set of claims using a claim extraction model; (2) classify the claims into five classes (supported, absent, contradicted, partially supported, and unevaluatable) by comparing them to retrieved context; (3) classify the claims into 10 distinct linguistic-error types (e.g., entity, temporal, and overgeneralization); (4) calculate an aggregated hallucination score based on the rate of unsupported claims and the distribution of error types.
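The scoring step of the procedure above can be sketched as follows. This is a simplified stand-in, not HalluMeasure's actual formula: the source says the aggregate score is based on the rate of unsupported claims (plus the error-type distribution, omitted here), and treating every non-"supported" class as unsupported is an assumption.

```python
# Sketch of HalluMeasure-style aggregation over classified claims.
# Assumption: every class other than "supported" counts as unsupported;
# the error-type distribution component from the source is omitted.
from collections import Counter
from typing import List

CLASSES = {"supported", "absent", "contradicted", "partially supported", "unevaluatable"}

def hallucination_score(claim_labels: List[str]) -> float:
    """Fraction of claims not classified as 'supported' (0.0 if no claims)."""
    if not claim_labels:
        return 0.0
    counts = Counter(claim_labels)
    unknown = set(counts) - CLASSES
    if unknown:
        raise ValueError(f"unknown claim classes: {unknown}")
    unsupported = len(claim_labels) - counts["supported"]
    return unsupported / len(claim_labels)
```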
LLM Hallucinations: Causes, Consequences, Prevention - LLMs · llmmodels.org · May 10, 2024 · 1 fact
claim: Providing context to a prompt reduces the likelihood of inaccurate or irrelevant responses from an LLM.