temporal reasoning
Facts (18)
Sources
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... arxiv.org Mar 18, 2025 7 facts
claim: In the KG-IRAG study, F1 Score and Hit Rate metrics are excluded for the Q1 dataset because it involves less temporal reasoning than the Q2 and Q3 datasets.
reference: Temporal reasoning in natural language processing (NLP) is categorized into three areas: temporal expression detection and normalization, temporal relation extraction, and event forecasting.
claim: KG-IRAG addresses two limitations of current GraphRAG methods: (1) few methods handle queries that depend heavily on temporal reasoning, and (2) no existing temporal QA dataset requires consecutive retrieval of an uncertain amount of data from a temporal knowledge base.
reference: Qingyu Tan, Hwee Tou Ng, and Lidong Bing authored the paper 'Towards benchmarking and improving the temporal reasoning capability of large language models', published as arXiv preprint arXiv:2306.08952 in 2023.
claim: Large Language Models struggle to determine correct answers for temporal reasoning tasks (such as finding the earliest or latest time to adjust a plan) when all data is fed into the model in a single prompt, even when background knowledge is provided.
claim: The KG-IRAG framework was evaluated on three new datasets (weatherQA-Irish, weatherQA-Sydney, and trafficQA-TFNSW) designed to test Large Language Models on time-sensitive, event-based queries requiring temporal reasoning and logical inference.
reference: Siheng Xiong, Ali Payani, Ramana Kompella, and Faramarz Fekri authored the paper 'Large language models can learn temporal reasoning', published as arXiv preprint arXiv:2401.06853 in 2024.
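The iterative retrieval idea behind the KG-IRAG claims above (retrieve a time window, judge whether the evidence suffices, widen the window otherwise) can be sketched in miniature. Everything here is a hypothetical illustration: the toy weather data, the `retrieve` and `evidence_sufficient` functions, and the window-widening policy are assumptions for the sketch, not the paper's actual knowledge graph or its LLM-driven sufficiency check.

```python
from datetime import datetime, timedelta

# Toy temporal store: timestamp -> observation (hypothetical data;
# KG-IRAG's actual source is a temporal knowledge graph, not a dict).
KB = {
    datetime(2024, 1, 1, h): ("rain" if h in (9, 10) else "clear")
    for h in range(6, 18)
}

def retrieve(start, end):
    """Fetch all facts inside a time window from the toy store."""
    return {t: v for t, v in KB.items() if start <= t <= end}

def evidence_sufficient(facts, query_event="rain"):
    """Stand-in for the LLM judging whether retrieved facts answer the query."""
    return any(v == query_event for v in facts.values())

def iterative_retrieve(anchor, query_event="rain", max_steps=6):
    """Widen the retrieval window around the anchor until evidence suffices."""
    window = timedelta(hours=1)
    for _ in range(max_steps):
        facts = retrieve(anchor - window, anchor + window)
        if evidence_sufficient(facts, query_event):
            return facts
        window += timedelta(hours=1)  # not enough evidence: retrieve again, wider
    return facts

facts = iterative_retrieve(datetime(2024, 1, 1, 12))
print(sorted(t.hour for t, v in facts.items() if v == "rain"))  # → [10]
```

The loop retrieves only as much of the timeline as the query needs, which is the behavior the claims above contrast with feeding all data into a single prompt.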
Large Language Models Meet Knowledge Graphs for Question ... arxiv.org Sep 22, 2025 3 facts
reference: TimeR4 (Qian et al., 2024) improves the accuracy of large language models in answering temporal questions by introducing a Retrieve-Rewrite-Retrieve-Rerank pipeline that augments temporal reasoning through temporal knowledge-based fine-tuning.
reference: GenTKGQA (Gao et al., 2024) utilizes a temporal graph neural network (GNN) and virtual knowledge indicators to capture temporal knowledge embeddings, dynamically integrating retrieved subgraphs into large language models for temporal reasoning.
reference: Ruiyi Yang et al. (2025) proposed KG-IRAG, a knowledge graph-based iterative retrieval-augmented generation framework designed for temporal reasoning.
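The retrieve-then-rerank step noted for TimeR4 above can be illustrated generically: first score candidates for semantic relevance, then rerank by temporal proximity to the question's time constraint. This is a hypothetical sketch under assumed data; the `relevance` scores and the `rerank` tie-breaking rule are illustrative stand-ins, not TimeR4's trained components.

```python
# Hypothetical dated facts with pre-assigned relevance scores
# (in a real pipeline these would come from a retriever model).
candidates = [
    {"text": "Storm hit Sydney", "year": 2021, "relevance": 0.9},
    {"text": "Storm hit Sydney", "year": 2015, "relevance": 0.9},
    {"text": "Marathon held in Sydney", "year": 2021, "relevance": 0.4},
]

def rerank(cands, query_year, top_k=2):
    """Order by relevance, breaking ties by temporal proximity to the query."""
    return sorted(
        cands,
        key=lambda c: (-c["relevance"], abs(c["year"] - query_year)),
    )[:top_k]

top = rerank(candidates, query_year=2020)
print([(c["text"], c["year"]) for c in top])
# → [('Storm hit Sydney', 2021), ('Storm hit Sydney', 2015)]
```

The point of the second ranking pass is that two equally relevant facts can differ only in their timestamps, and the time constraint in the question is what disambiguates them.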
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org 2 facts
claim: Dynamic knowledge maintenance is a universal challenge in AI systems, involving the timeliness of Knowledge Graph (KG) updates, limitations of temporal reasoning in Large Language Models (LLMs), and real-time processing constraints.
reference: Xiong et al. (2024) demonstrated that large language models can learn temporal reasoning.
Unknown source 1 fact
claim: KG-IRAG is a Knowledge Graph-Based Iterative Retrieval-Augmented Generation framework designed for temporal reasoning.
Automating hallucination detection with chain-of-thought reasoning amazon.science 1 fact
claim: Temporal reasoning is a proposed error label type in HalluMeasure that identifies hallucinations where an LLM incorrectly states that an innovation is currently in use when the reference context specifies it will be used in the future.
KR 2026 : 23rd International Conference on Principles of ... - WikiCFP wikicfp.com 1 fact
claim: The 23rd International Conference on Principles of Knowledge Representation and Reasoning (KR 2026) covers research topics including argumentation, belief change, common-sense reasoning, computational aspects of knowledge representation, description logics, ethical considerations in knowledge representation, explanation, abduction and diagnosis, geometric, spatial, and temporal reasoning, inconsistency- and exception-tolerant reasoning, knowledge acquisition, knowledge compilation, automated reasoning, satisfiability and model counting, knowledge representation languages, logic programming, answer set programming, model learning for diagnosis and planning, modeling and reasoning about preferences, modeling constraints and constraint solving, multi- and order-sorted representations and reasoning, non-monotonic logics, ontologies and knowledge-enriched data management, philosophical foundations of knowledge representation, qualitative reasoning, reasoning about actions and change, action languages, reasoning about knowledge, beliefs, and other mental attitudes, reasoning in knowledge graphs, reasoning in multi-agent systems, semantic web, similarity-based and contextual reasoning, and uncertainty and vagueness.
Call for Papers: Main Track - KR 2026 kr.org 1 fact
claim: The KR 2026 conference accepts submissions on topics including argumentation, belief change, common-sense reasoning, computational aspects of knowledge representation, description logics, ethical considerations in KR, explanation/abduction/diagnosis, geometric/spatial/temporal reasoning, inconsistency- and exception-tolerant reasoning, knowledge acquisition, knowledge compilation/automated reasoning/satisfiability/model counting, knowledge representation languages, logic programming/answer set programming, model learning for diagnosis and planning, modeling and reasoning about preferences, modeling constraints and constraint solving, multi- and order-sorted representations and reasoning, non-monotonic logics, ontologies and knowledge-enriched data management, philosophical foundations of KR, qualitative reasoning, reasoning about actions and change/action languages, reasoning about knowledge/beliefs/mental attitudes, reasoning in knowledge graphs, reasoning in multi-agent systems, semantic web, similarity-based and contextual reasoning, and uncertainty and vagueness.
Medical Hallucination in Foundation Models and Their Impact on ... medrxiv.org Nov 2, 2025 1 fact
measurement: Physician audits confirmed that 64–72% of residual hallucinations in foundation models stemmed from causal or temporal reasoning failures rather than knowledge gaps.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org Feb 16, 2025 1 fact
reference: Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth authored 'Temporal reasoning on implicit events from distant supervision', published as an arXiv preprint (arXiv:2010.12753) in 2020.