Relations (1)
cross_type 0.40 — supporting 4 facts
Large Language Models are a primary subject of research papers presented at the International Conference on Machine Learning, as evidenced by studies on subspace optimization [1], reinforcement learning [2], low-rank adaptation [3], and alignment limitations [4] published in the conference proceedings.
Facts (4)
Sources
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org) — 4 facts
Reference: The paper 'Fundamental limitations of alignment in large language models' exists as an arXiv preprint (arXiv:2304.11082) and was also published in the Proceedings of the 41st International Conference on Machine Learning, pages 53079–53112.
Reference: The research paper 'ProRL: prolonged reinforcement learning expands reasoning boundaries in large language models' was published in the International Conference on Machine Learning, pp. 4051–4060, and is cited in section 7.2.2 of the survey.
Reference: The paper 'Subspace optimization for large language models with convergence guarantees' was published in the Proceedings of the 42nd International Conference on Machine Learning, Volume 267, pages 22468–22522.
Reference: The research paper 'On the optimization landscape of low rank adaptation methods for large language models' was published in the International Conference on Machine Learning, pp. 32100–32121, and is cited in section 4.2.2 of the survey.