Association for Computational Linguistics
Also known as: ACL
Facts (57)
Sources
A Survey of Incorporating Psychological Theories in LLMs - arXiv arxiv.org 17 facts
The paper 'SocialBench: Sociality evaluation of role-playing conversational agents' was published in the 'Findings of the Association for Computational Linguistics: ACL 2024' (Lun-Wei Ku, Andre Martins, and Vivek Srikumar, eds.) in Bangkok, Thailand, in August 2024.
The paper 'Marked personas: Using natural language prompts to measure stereotypes in language models' by Myra Cheng, Esin Durmus, and Dan Jurafsky was published in the 'Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics' in Toronto, Canada, in July 2023.
Antonio Laverghetta Jr. and John Licato published 'Developmental negation processing in transformer language models' in the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics in May 2022.
Ma et al. (2023) explored the landscape of situated theory of mind in large language models in their paper 'Towards a holistic landscape of situated theory of mind in large language models', published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar authored 'Large language models with controllable working memory', published in the Findings of the Association for Computational Linguistics: ACL 2023.
Luong et al. (2024) presented a method for the realistic evaluation of toxicity in large language models in their paper 'Realistic evaluation of toxicity in large language models', published in the Findings of the Association for Computational Linguistics: ACL 2024.
The paper 'Transformer working memory enables regular language reasoning and natural language length extrapolation' by Ta-Chung Chi, Ting-Han Fan, Alexander Rudnicky, and Peter Ramadge was published in the 'Findings of the Association for Computational Linguistics: EMNLP 2023'.
The paper 'Temporal knowledge question answering via abstract reasoning induction' by Ziyang Chen, Dongfang Li, Xiang Zhao, Baotian Hu, and Min Zhang was published in the 'Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics' in Bangkok, Thailand, in August 2024.
Xin Miao, Yongqi Li, Shen Zhou, and Tieyun Qian proposed a neuromorphic mechanism for episodic memory retrieval in large language models to generate commonsense counterfactuals for relation extraction, as detailed in their 2024 paper in the Findings of the Association for Computational Linguistics: ACL 2024.
Hwaran Lee, Seokhee Hong, Joonsuk Park, Takyoung Kim, Gunhee Kim, and Jung-woo Ha published 'KoSBI: A dataset for mitigating social bias risks towards safer large language model applications' in the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics in July 2023.
Tongyao Zhu et al. (2024) authored 'Beyond memorization: The challenge of random memory access in language models', published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, which discusses the difficulties language models face regarding random memory access.
Philipp Mondorf and Barbara Plank authored 'Comparing inferential strategies of humans and large language models in deductive reasoning', published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics in 2024.
Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Hosseini, Mark Johnson, and Mark Steedman investigated the sources of hallucinations in large language models specifically during inference tasks in their 2023 paper published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
Yanhong Li, Chenghao Yang, and Allyson Ettinger authored 'When hindsight is not 20/20: Testing limits on reflective thinking in large language models', published in the Findings of the Association for Computational Linguistics: NAACL 2024.
Maharaj et al. (2023) developed a model for hallucination detection in large language models by modeling gaze behavior in their paper 'Eyes show the way: Modelling gaze behaviour for hallucination detection', published in the Findings of the Association for Computational Linguistics: EMNLP 2023.
The paper 'ToMBench: Benchmarking theory of mind in large language models' by Zhuang Chen, Jincenzi Wu, Jinfeng Zhou, Bosi Wen, Guanqun Bi, Gongyao Jiang, Yaru Cao, Mengting Hu, Yunghwei Lai, Zexuan Xiong, and Minlie Huang was published in the 'Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics' in Bangkok, Thailand, in August 2024.
Eun-Kyoung Rosa Lee, Sathvik Nair, and Naomi Feldman published 'A psycholinguistic evaluation of language models' sensitivity to argument roles' in the Findings of the Association for Computational Linguistics: EMNLP 2024 in November 2024.
A Survey on the Theory and Mechanism of Large Language Models arxiv.org Mar 12, 2026 14 facts
The paper 'Towards reward fairness in RLHF: from a resource allocation perspective' was published in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3247–3259.
The paper 'ScaleBiO: scalable bilevel optimization for LLM data reweighting' was published in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 31959–31982, edited by W. Che, J. Nabende, E. Shutova, and M. T. Pilehvar, in Vienna, Austria (ISBN 979-8-89176-251-0).
The paper 'Language models resist alignment: evidence from data compression' was published in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 23411–23432.
The paper 'Unveiling the spectrum of data contamination in language models: a survey from detection to remediation' was published in the Findings of the Association for Computational Linguistics: ACL 2024.
The paper 'Defending large language models against jailbreaking attacks through goal prioritization' was published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8865–8887.
The paper 'Better synthetic data by retrieving and transforming existing datasets' was published in the Findings of the Association for Computational Linguistics: ACL 2024, pp. 6453–6466.
The paper 'Revisiting jailbreaking for large language models: a representation engineering perspective' was published in the Proceedings of the 31st International Conference on Computational Linguistics, pp. 3158–3178.
The research paper 'Decoupled weight decay regularization' was published in the Findings of the Association for Computational Linguistics: ACL 2024, pp. 11065–11082, and cited in section 2.3.1 of the survey.
The paper 'Investigating data contamination in modern benchmarks for large language models' was published in the Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 8698–8711.
The paper 'Attention tracker: detecting prompt injection attacks in LLMs' was published in the Findings of the Association for Computational Linguistics: NAACL 2025, pp. 2309–2322.
The research paper 'Muon is scalable for LLM training' was published in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 17177–17197, and cited in section 7.2.2 of the survey.
The research paper 'Scaling laws for fact memorization of large language models' was published in the Findings of the Association for Computational Linguistics: EMNLP 2024, edited by Y. Al-Onaizan, M. Bansal, and Y. Chen, in Miami, Florida, USA, pp. 11263–11282, and cited in section 2.2.3 of the survey.
The paper 'SoftDedup: an efficient data reweighting method for speeding up language model pre-training' was published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) in Bangkok, Thailand, pp. 4011–4022.
The paper 'Competition-level problems are effective LLM evaluators' was published in the Findings of the Association for Computational Linguistics: ACL 2024, pp. 13526–13544.
Construction of Knowledge Graphs: State and Challenges - arXiv arxiv.org 4 facts
J. Yu, J. Jiang, L. Yang, and R. Xia proposed a method for improving Multimodal Named Entity Recognition via Entity Span Detection with Unified Multimodal Transformer, published in the Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
C. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. Bethard, and D. McClosky authored 'The Stanford CoreNLP Natural Language Processing Toolkit,' which was published in the Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics in 2014.
M. Li et al. created GAIA, a fine-grained multimedia knowledge extraction system, presented at the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
T. Blevins and L. Zettlemoyer published 'Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders' in the Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics in 2020.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... arxiv.org Feb 23, 2026 4 facts
The paper 'FEVER: a large-scale dataset for fact extraction and VERification' was published in the Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).
The paper 'MTEB: massive text embedding benchmark' was published in the Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics in Dubrovnik, Croatia, pp. 2014–2037.
The paper 'The web as a knowledge-base for answering complex questions' was published in the Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) in New Orleans, Louisiana, pp. 641–651.
The paper 'TruthfulQA: measuring how models mimic human falsehoods' was published in the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) in Dublin, Ireland, pp. 3214–3252.
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org 3 facts
Z. Jiang, F. F. Xu, J. Araki, and G. Neubig published 'How can we know what language models know?' in the Transactions of the Association for Computational Linguistics in 2020.
R. Liao, X. Jia, Y. Li, Y. Ma, and V. Tresp published 'GenTKG: generative forecasting on temporal knowledge graph with large language models' in the Findings of the Association for Computational Linguistics: NAACL 2024.
The paper 'Pretrain-KGE: learning knowledge representation from pretrained language models' was published in the Findings of the Association for Computational Linguistics: EMNLP 2020.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... github.com 2 facts
The paper 'The Value of Semantic Parse Labeling for Knowledge Base Question Answering' was published at ACL in 2016; it utilizes the WebQSP dataset and is categorized under KBQA and KGQA.
The paper 'FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models' was published at ACL in 2024; it utilizes the FanOutQA dataset and is categorized under Multi-hop QA.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org Feb 16, 2025 2 facts
Maria Leonor Pacheco and Dan Goldwasser developed a method for modeling content and context using deep relational learning, published in the Transactions of the Association for Computational Linguistics in 2021.
Claudio Pinhanez et al. developed a method to improve intent recognition in conversational systems by using meta-knowledge mined from identifiers, published in the Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (2021).
Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org 2 facts
Gao et al. (2023) authored 'RARR: Researching and revising what language models say, using language models', published in the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Ziems et al. (2022) authored the paper titled 'The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems', published in the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Re-evaluating Hallucination Detection in LLMs - arXiv arxiv.org Aug 13, 2025 2 facts
The paper 'Shifting attention to relevance: Towards the predictive uncertainty quantification of free-form large language models' explores methods for quantifying predictive uncertainty in large language models, published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics in 2024.
The paper 'FaithDial: A faithful benchmark for information-seeking dialogue' by Dziri et al. (2022) introduces a benchmark designed to evaluate the faithfulness of information-seeking dialogue systems, published in the Transactions of the Association for Computational Linguistics.
LLM-empowered knowledge graph construction: A survey - arXiv arxiv.org Oct 23, 2025 2 facts
Liu Pai, Wenyang Gao, Wenjie Dong, Lin Ai, Ziwei Gong, Songfang Huang, Li Zongsheng, Ehsan Hoque, Julia Hirschberg, and Yue Zhang published 'A survey on open information extraction from rule-based model to large language model' in the Findings of the Association for Computational Linguistics: EMNLP 2024.
Lilong Xue, Dan Zhang, Yuxiao Dong, and Jie Tang developed AutoRE, a system for document-level relation extraction using large language models, published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (2024).
Knowledge Enhanced Industrial Question-Answering Using Large ... engineering.org.cn 1 fact
Y. Kim published the paper 'Convolutional neural networks for sentence classification' in the Association for Computational Linguistics proceedings in 2014, pp. 1746–1751, in Doha, Qatar.
A Mixed-Methods Study of Open-Source Software Maintainers On ... arxiv.org Feb 3, 2025 1 fact
Daoguang Zan, Bei Chen, Fengji Zhang, Dianjie Lu, Bingchao Wu, Bei Guan, Yongji Wang, and Jian-Guang Lou authored the survey 'Large language models meet NL2Code: A survey,' published in the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) in 2023.
A Comprehensive Benchmark and Evaluation Framework for Multi ... arxiv.org Jan 6, 2026 1 fact
Eeyore is a system for realistic depression simulation that utilizes expert-in-the-loop supervised and preference optimization, presented by Siyang Liu et al. in the Findings of the Association for Computational Linguistics: ACL 2025.
Knowledge Graph Combined with Retrieval-Augmented Generation ... drpress.org Dec 2, 2025 1 fact
The paper 'MVP-tuning: Multi-view knowledge retrieval with prompt tuning for commonsense reasoning' by Y. Huang, Y. Li, Y. Xu, et al. was published in the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics in 2023.
Understanding LLM Understanding skywritingspress.ca Jun 14, 2024 1 fact
referenceTenney, Ian, Dipanjan Das, and Ellie Pavlick. βBERT Rediscovers the Classical NLP Pipeline.β Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. https://arxiv.org/pdf/1905.05950.pdf