Relations (1)

related 2.32 — strongly supporting 4 facts

Large Language Models and AI alignment are intrinsically linked: alignment is a critical stage in the lifecycle of these models, as noted in [1]. Furthermore, academic research explicitly investigates the alignment of these models, as evidenced by the papers 'Trustworthy llms: a survey and guideline for evaluating large language models' alignment' {fact:1, fact:3} and 'Fundamental limitations of alignment in large language models' [2].

Facts (4)

Sources
A Survey on the Theory and Mechanism of Large Language Models — arxiv.org (arXiv) — 3 facts
reference: The paper 'Fundamental limitations of alignment in large language models' exists as an arXiv preprint (arXiv:2304.11082) and was also published in the Proceedings of the 41st International Conference on Machine Learning, pages 53079–53112.
reference: The research paper 'Trustworthy llms: a survey and guideline for evaluating large language models' alignment' was published as an arXiv preprint (arXiv:2505.21598) and cited in section 2.2.1 of the survey.
claim: The survey titled 'A Survey on the Theory and Mechanism of Large Language Models' organizes the theoretical landscape of Large Language Models into a lifecycle-based taxonomy consisting of six stages: Data Preparation, Model Preparation, Training, Alignment, Inference, and Evaluation.
A Survey of Incorporating Psychological Theories in LLMs — arxiv.org (arXiv) — 1 fact
reference: Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li authored 'Trustworthy llms: a survey and guideline for evaluating large language models' alignment', published as an arXiv preprint in 2023.