Relations (1)
related (0.60) with 6 strongly supporting facts
Large Language Models are a frequent subject of academic research papers published as preprints on arXiv, as evidenced by studies on reasoning capabilities [1], [2], data contamination [3], attention mechanisms [4], retrieval-augmented generation [5], and hallucinations [6].
Facts (6)
Sources
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org), 3 facts
reference: The paper 'Towards reasoning era: A survey of long chain-of-thought for reasoning large language models' is an arXiv preprint (arXiv:2503.09567).
reference: The paper 'Rethinking attention with performers' is an arXiv preprint (arXiv:2009.14794), cited in the context of attention mechanisms in large language models.
reference: The paper 'A survey on data contamination for large language models' is an arXiv preprint (arXiv:2502.14425).
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... (arxiv.org), 1 fact
reference: Gao et al. (2023) published 'Retrieval-augmented generation for large language models: A survey' as an arXiv preprint (arXiv:2312.10997), surveying RAG techniques for LLMs.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... (arxiv.org), 1 fact
reference: The paper 'DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning' was published as an arXiv preprint in 2025.
Building Trustworthy NeuroSymbolic AI Systems (arxiv.org), 1 fact
reference: Zhang et al. (2023) authored the paper titled 'Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models', published as an arXiv preprint (arXiv:2309.01219).