Relations (1)
cross_type 4.00 — strongly supporting 15 facts
Large Language Models are a primary research subject within the Association for Computational Linguistics, as evidenced by numerous papers published in its proceedings and findings, including studies on relation extraction [1], code generation [2], uncertainty quantification [3], knowledge graphs [4], and hallucination detection {fact:12, fact:13}.
Facts (15)
Sources
A Survey of Incorporating Psychological Theories in LLMs - arXiv arxiv.org 9 facts
Ma et al. (2023) explored the landscape of situated theory of mind in large language models in their paper 'Towards a holistic landscape of situated theory of mind in large language models', published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar authored 'Large language models with controllable working memory', published in the Findings of the Association for Computational Linguistics: ACL 2023.
Luong et al. (2024) presented a method for the realistic evaluation of toxicity in large language models in their paper 'Realistic evaluation of toxicity in large language models', published in the Findings of the Association for Computational Linguistics: ACL 2024.
Xin Miao, Yongqi Li, Shen Zhou, and Tieyun Qian proposed a neuromorphic mechanism for episodic memory retrieval in large language models to generate commonsense counterfactuals for relation extraction, as detailed in their 2024 paper in the Findings of the Association for Computational Linguistics: ACL 2024.
Philipp Mondorf and Barbara Plank authored 'Comparing inferential strategies of humans and large language models in deductive reasoning', published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics in 2024.
Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Hosseini, Mark Johnson, and Mark Steedman investigated the sources of hallucinations in large language models during inference tasks in their 2023 paper published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
Yanhong Li, Chenghao Yang, and Allyson Ettinger authored 'When hindsight is not 20/20: Testing limits on reflective thinking in large language models', published in the Findings of the Association for Computational Linguistics: NAACL 2024.
Maharaj et al. (2023) developed a model for hallucination detection in large language models by modeling gaze behavior in their paper 'Eyes show the way: Modelling gaze behaviour for hallucination detection', published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
The paper 'ToMBench: Benchmarking theory of mind in large language models' by Zhuang Chen, Jincenzi Wu, Jinfeng Zhou, Bosi Wen, Guanqun Bi, Gongyao Jiang, Yaru Cao, Mengting Hu, Yunghwei Lai, Zexuan Xiong, and Minlie Huang was published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics in Bangkok, Thailand, in August 2024.
A Survey on the Theory and Mechanism of Large Language Models arxiv.org 2 facts
The paper 'Revisiting jailbreaking for large language models: a representation engineering perspective' was published in the Proceedings of the 31st International Conference on Computational Linguistics, pp. 3158–3178.
The paper 'Investigating data contamination in modern benchmarks for large language models' was published in the Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 8698–8711.
Re-evaluating Hallucination Detection in LLMs - arXiv arxiv.org 1 fact
The paper 'Shifting attention to relevance: Towards the predictive uncertainty quantification of free-form large language models' explores methods for quantifying predictive uncertainty in large language models, published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics in 2024.
A Mixed-Methods Study of Open-Source Software Maintainers On ... arxiv.org 1 fact
Daoguang Zan, Bei Chen, Fengji Zhang, Dianjie Lu, Bingchao Wu, Bei Guan, Yongji Wang, and Jian-Guang Lou authored the survey 'Large language models meet nl2code: A survey', published in the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) in 2023.
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org 1 fact
R. Liao, X. Jia, Y. Li, Y. Ma, and V. Tresp published 'Gentkg: generative forecasting on temporal knowledge graph with large language models' in the Findings of the Association for Computational Linguistics: NAACL 2024.
LLM-empowered knowledge graph construction: A survey - arXiv arxiv.org 1 fact
Lilong Xue, Dan Zhang, Yuxiao Dong, and Jie Tang developed AutoRE, a system for document-level relation extraction using large language models, as published in the 2024 Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.