Relations (1)

cross_type 5.00 — strongly supported by 31 facts

Large Language Models are the primary subject of numerous research papers published as preprints on arXiv, spanning studies of their capabilities, reasoning, and integration with knowledge graphs [1]–[21].

Facts (31)

Sources
LLM-empowered knowledge graph construction: A survey (arxiv.org) · 8 facts
reference: Tianshu Wang, Xiaoyang Chen, Hongyu Lin, Xuanang Chen, Xianpei Han, Hao Wang, Zhenyu Zeng, and Le Sun investigated the use of large language models for entity matching in their 2024 arXiv preprint.
reference: Rui Yang, Boming Yang, Sixun Ouyang, Tianwei She, Aosong Feng, Yuang Jiang, Freddy Lecue, Jinghui Lu, and Irene Li developed Graphusion, a method leveraging large language models for scientific knowledge graph fusion and construction in NLP education, as described in their 2024 arXiv preprint.
reference: Yejin Kim, Eojin Kang, Juae Kim, and H. Howie Huang authored 'Causal Reasoning in Large Language Models: A Knowledge Graph Approach', published as an arXiv preprint in October 2024.
reference: Anna Sofia Lippolis, Mohammad Javad Saeedizade, Robin Keskisärkkä, Sara Zuppiroli, Miguel Ceriani, Aldo Gangemi, Eva Blomqvist, and Andrea Giovanni Nuzzolese authored 'Ontology Generation using Large Language Models', published as an arXiv preprint in March 2025.
reference: Patricia Mateiu and Adrian Groza authored 'Ontology engineering with Large Language Models', published as an arXiv preprint in July 2023.
reference: Gerard Pons, Besim Bilalli, and Anna Queralt published 'Knowledge Graphs for Enhancing Large Language Models in Entity Disambiguation' as an arXiv preprint in 2025.
reference: Samira Khorshidi, Azadeh Nikfarjam, Suprita Shankar, Yisi Sang, Yash Govind, Hyun Jang, Ali Kasgari, Alexis McClimans, Mohamed Soliman, Vishnu Konda, Ahmed Fakhry, and Xiaoguang Qi authored 'ODKE+: Ontology-Guided Open-Domain Knowledge Extraction with LLMs', published as an arXiv preprint in September 2025.
reference: Junming Liu, Siyuan Meng, Yanting Gao, Song Mao, Pinlong Cai, Guohang Yan, Yirong Chen, Zilin Bian, Ding Wang, and Botian Shi authored 'Aligning Vision to Language: Annotation-Free Multimodal Knowledge Graph Construction for Enhanced LLMs Reasoning', published as an arXiv preprint in July 2025.
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org) · 7 facts
reference: The paper 'DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning' (arXiv:2501.12948) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding reasoning capabilities.
reference: The paper 'Connecting large language models with evolutionary algorithms yields powerful prompt optimizers' (arXiv:2309.08532) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding prompt optimization.
reference: The paper 'TrustLLM: Trustworthiness in large language models' is an arXiv preprint, identified as arXiv:2401.05561.
reference: The paper 'Evaluating large language models: a comprehensive survey' (arXiv:2310.19736) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding LLM evaluation.
reference: The paper 'How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection' (arXiv:2301.07597) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding LLM evaluation.
reference: The paper 'Training large language models to reason in a continuous latent space' (arXiv:2412.06769) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding reasoning.
reference: The paper 'Entropy-memorization law: evaluating memorization difficulty of data in LLMs' is an arXiv preprint, identified as arXiv:2507.06056.
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... (arxiv.org) · 5 facts
reference: Yuzhe Zhang, Yipeng Zhang, Yidong Gan, Lina Yao, and Chen Wang authored the paper 'Causal graph discovery with retrieval-augmented generation based large language models', published as arXiv preprint arXiv:2402.15301 in 2024.
reference: Qingyu Tan, Hwee Tou Ng, and Lidong Bing authored the paper 'Towards benchmarking and improving the temporal reasoning capability of large language models', published as arXiv preprint arXiv:2306.08952 in 2023.
reference: Yuwei Xia, Ding Wang, Qiang Liu, Liang Wang, Shu Wu, and Xiaoyu Zhang authored the paper 'Enhancing temporal knowledge graph forecasting with large language models via chain-of-history reasoning', published as arXiv preprint arXiv:2402.14382 in 2024.
reference: Fei Wang, Xingchen Wan, Ruoxi Sun, Jiefeng Chen, and Sercan Ö Arık authored the paper 'Astute RAG: Overcoming imperfect retrieval augmentation and knowledge conflicts for large language models', published as arXiv preprint arXiv:2410.07176 in 2024.
reference: Siheng Xiong, Ali Payani, Ramana Kompella, and Faramarz Fekri authored the paper 'Large language models can learn temporal reasoning', published as arXiv preprint arXiv:2401.06853 in 2024.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... (github.com) · 3 facts
reference: The paper 'Fact Finder -- Enhancing Domain Expertise of Large Language Models by Incorporating Knowledge Graphs' (arXiv, 2024) discusses incorporating knowledge graphs to enhance the domain expertise of Large Language Models.
reference: The paper 'KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation' (arXiv, 2024) explores the use of Knowledge Augmented Generation to improve Large Language Models in professional domains.
reference: The paper 'An Empirical Study over Open-ended Question Answering' (arXiv, 2024) investigates OKGQA, a framework combining Large Language Models and Knowledge Graphs for open-ended question answering.
Bridging the Gap Between LLMs and Evolving Medical Knowledge (arxiv.org) · 2 facts
reference: Rui Yang et al. (2024) published 'KG-Rank: Enhancing large language models for medical QA with knowledge graphs and ranking techniques' as an arXiv preprint (arXiv:2403.05881), which proposes using knowledge graphs and ranking to improve medical QA.
reference: Hongjian Zhou et al. (2023) published 'A survey of large language models in medicine: Progress, application, and challenge' as an arXiv preprint (arXiv:2311.05112).
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com) · 1 fact
reference: Zhao WX, Zhou K, Li J, Tang T, Wang X, Hou Y, Min Y, Zhang B, Zhang J, Dong Z et al. published 'A survey of large language models' as an arXiv preprint (arXiv:2303.18223) in 2023.
A Survey of Incorporating Psychological Theories in LLMs (arxiv.org) · 1 fact
reference: Wenchao Dong, Assem Zhunis, Dongyoung Jeong, Hyojin Chin, Jiyoung Han, and Meeyoung Cha authored 'Persona setting pitfall: Persistent outgroup biases in large language models arising from social identity adoption', published as an arXiv preprint in 2024.
Combining Knowledge Graphs and Large Language Models (arxiv.org) · 1 fact
procedure: The authors of 'Combining Knowledge Graphs and Large Language Models' conducted a review of literature published between 2019 and 2024, searching arXiv from February 2024 to May 2024 for articles related to LLMs and KGs; a sketch of such a date-bounded query follows below.
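As a rough illustration of what a date-bounded arXiv search like the one in this procedure could look like, the sketch below queries the public arXiv Atom API (export.arxiv.org) for LLM-and-KG articles submitted in a given window. The phrase terms, date range, and result limit are illustrative assumptions, not the authors' documented search protocol.

```python
# Minimal sketch: date-bounded arXiv search for LLM + KG literature.
# Query terms and the date window are assumptions for illustration only.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv API

# arXiv's query syntax supports quoted phrases and submittedDate ranges.
query = (
    'all:"large language model" AND all:"knowledge graph" '
    "AND submittedDate:[201901010000 TO 202412312359]"
)
url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
    {"search_query": query, "start": 0, "max_results": 25}
)

with urllib.request.urlopen(url) as resp:
    feed = ET.fromstring(resp.read())

# Print submission date and whitespace-normalized title for each entry.
for entry in feed.findall(ATOM + "entry"):
    title = " ".join(entry.findtext(ATOM + "title", default="").split())
    published = entry.findtext(ATOM + "published", default="")
    print(published[:10], title)
```

A real survey pipeline would page through results with the start parameter and respect the API's rate-limit guidance rather than issue a single request.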
Leveraging Knowledge Graphs and LLM Reasoning to Identify ... (arxiv.org) · 1 fact
reference: The paper 'A survey of large language models' by Wayne Xin Zhao et al. was published as an arXiv preprint in 2023.
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends (arxiv.org) · 1 fact
reference: Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al. authored 'MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning', published as an arXiv preprint (arXiv:2205.00445) in 2022.
LLM Hallucination Detection and Mitigation: State of the Art in 2026 (zylos.ai) · 1 fact
reference: The paper 'Predictive Coding and Information Bottleneck for Hallucination Detection', published on arXiv, explores using predictive coding and information bottleneck principles to detect hallucinations in large language models.