entity

International Conference on Learning Representations

Also known as: The Thirteenth International Conference on Learning Representations, ICLR, The Twelfth International Conference on Learning Representations

Facts (15)

Sources
A Survey on the Theory and Mechanism of Large Language Models arxiv.org arXiv Mar 12, 2026 9 facts
reference: The paper 'Bayesian weaks-to-strong from text classification to generation' was presented at The Thirteenth International Conference on Learning Representations.
reference: The paper 'SMT: fine-tuning large language models with sparse matrices' (The Thirteenth International Conference on Learning Representations) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding fine-tuning.
reference: The paper 'LoRA: low-rank adaptation of large language models' was published at the International Conference on Learning Representations.
reference: The paper 'Selective induction heads: how transformers select causal structures in context' was presented at The Thirteenth International Conference on Learning Representations.
reference: The paper 'Rethinking data mixture for large language models: a comprehensive survey and new perspectives' was published at The Thirteenth International Conference on Learning Representations and is cited in section 4.2.2 of the survey.
reference: The paper 'DSPy: compiling declarative language model calls into state-of-the-art pipelines' was published at The Twelfth International Conference on Learning Representations and is cited in section 6.2.1 of 'A Survey on the Theory and Mechanism of Large Language Models'.
reference: The paper 'Weak to strong generalization for large language models with multi-capabilities' was published at The Thirteenth International Conference on Learning Representations and is cited in section 7.2.1 of 'A Survey on the Theory and Mechanism of Large Language Models'.
reference: The paper 'Best practices and lessons learned on synthetic data' was published at The Thirteenth International Conference on Learning Representations and is cited in section 2.2.1 of the survey.
reference: The paper 'Understanding warmup-stable-decay learning rates: a river valley loss landscape view' was presented at The Thirteenth International Conference on Learning Representations.
Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org arXiv Jun 14, 2024 1 fact
reference: The paper 'Mitigating hallucination in large multi-modal models via robust instruction tuning' by Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang, published at The Twelfth International Conference on Learning Representations in 2023, proposes robust instruction tuning as a method for reducing hallucinations in large multi-modal models.
A Survey of Incorporating Psychological Theories in LLMs - arXiv arxiv.org arXiv 1 fact
measurement: The authors of 'A Survey of Incorporating Psychological Theories in LLMs' surveyed 175 papers from major computational linguistics venues (the ACL Anthology), COLING, NeurIPS, ICML, ICLR, and influential arXiv preprints published between late 2021 and early 2025.
Construction of Knowledge Graphs: State and Challenges - arXiv arxiv.org arXiv 1 fact
reference: The paper 'NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs' by M. Galkin, E. Denis, J. Wu, and W.L. Hamilton, published at the International Conference on Learning Representations in 2022, introduces compositional and parameter-efficient representations for large knowledge graphs.
Consciousness in Artificial Intelligence? A Framework for Classifying ... arxiv.org arXiv Nov 20, 2025 1 fact
claim: The concept of learning representations is central to deep learning and current AI models, as evidenced by the existence of the International Conference on Learning Representations (ICLR).
Empowering GraphRAG with Knowledge Filtering and Integration arxiv.org arXiv Mar 18, 2025 1 fact
reference: Sun et al. authored 'Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph', published at The Twelfth International Conference on Learning Representations.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... link.springer.com Springer Dec 9, 2025 1 fact
reference: Chrysos et al. identified quantifying uncertainty and hallucination in foundation models as the next frontier in reliable AI in their 2025 ICLR workshop proposal.