Relations (1)
related (score: 2.58), strongly supported by 5 facts
Large Language Models are directly linked to temporal reasoning through research papers that benchmark and improve this capability, as evidenced by [1], [2], and [3]. Furthermore, specialized frameworks like TimeR4 [4] and GenTKGQA [5] have been developed to specifically augment and integrate temporal reasoning into Large Language Models.
Facts (5)
Sources
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... (arxiv.org, 2 facts)
Qingyu Tan, Hwee Tou Ng, and Lidong Bing authored the paper 'Towards benchmarking and improving the temporal reasoning capability of large language models', published as arXiv preprint arXiv:2306.08952 in 2023.
Siheng Xiong, Ali Payani, Ramana Kompella, and Faramarz Fekri authored the paper 'Large language models can learn temporal reasoning', published as arXiv preprint arXiv:2401.06853 in 2024.
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org, 2 facts)
TimeR4 (Qian et al., 2024) improves the accuracy of large language models in answering temporal questions by introducing a Retrieve-Rerank pipeline that augments temporal reasoning through temporal knowledge-based fine-tuning.
GenTKGQA (Gao et al., 2024) utilizes a temporal graph neural network (GNN) and virtual knowledge indicators to capture temporal knowledge embeddings, dynamically integrating retrieved subgraphs into large language models for temporal reasoning.
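The retrieve-then-rerank pattern attributed to TimeR4 above can be illustrated with a toy two-stage flow: a first stage fetches candidate facts by relevance, and a second stage reorders them by temporal fit to the question. This is a minimal sketch under assumed data; the function names, scoring rules, and sample facts are hypothetical and not taken from the TimeR4 paper.

```python
# Illustrative retrieve-then-rerank sketch for temporal QA.
# All names and scoring heuristics here are hypothetical, NOT the
# TimeR4 authors' implementation.

def retrieve(question_terms, facts, k=3):
    """First stage: rank facts by term overlap with the question."""
    scored = [(len(question_terms & f["terms"]), f) for f in facts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [f for score, f in scored[:k] if score > 0]

def rerank(candidates, question_year):
    """Second stage: prefer facts temporally closest to the question's year."""
    return sorted(candidates, key=lambda f: abs(f["year"] - question_year))

facts = [
    {"id": "f1", "terms": {"einstein", "nobel", "prize"}, "year": 1921},
    {"id": "f2", "terms": {"einstein", "relativity"}, "year": 1915},
    {"id": "f3", "terms": {"einstein", "patent", "office"}, "year": 1902},
]

question = {"terms": {"einstein", "prize"}, "year": 1920}
candidates = retrieve(question["terms"], facts)
ranked = rerank(candidates, question["year"])
print([f["id"] for f in ranked])  # → ['f1', 'f2', 'f3']
```

The key design point is the split: cheap lexical (or embedding) retrieval narrows the fact pool, and a temporal criterion does the final ordering, so the language model sees the most time-relevant evidence first.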
Practices, opportunities and challenges in the fusion of knowledge ... (frontiersin.org, 1 fact)
Xiong et al. (2024) demonstrated that large language models can learn temporal reasoning.