fairness
Facts (19)
Sources
A Survey on the Theory and Mechanism of Large Language Models arxiv.org Mar 12, 2026 8 facts
reference: The paper 'Bias and fairness in large language models: a survey' provides a comprehensive overview of bias and fairness issues within large language models.
claim: The current landscape of large language models presents new challenges for defining and formalizing concepts like 'robustness', 'fairness', and 'privacy' compared to traditional machine learning, as noted by Chang et al. (2024), Anwar et al. (2024), Dominguez-Olmedo et al. (2025), and Hardt and Mendler-Dünner (2025).
claim: Traditional machine learning literature extensively analyzed robustness (Muravev and Petiushko, 2021; Ruan et al., 2021), fairness (Kleinberg et al., 2016; Liu et al., 2019), and privacy (Li et al., 2017; Kairouz et al., 2015) because these concepts were well-defined and formalizable using precise mathematical objectives.
reference: The paper 'Fairness in large language models: a taxonomic survey' was published in the ACM SIGKDD Explorations Newsletter 26 (1), pp. 34–48.
reference: The paper 'Inherent trade-offs in the fair determination of risk scores' discusses trade-offs in the fairness of risk scoring systems, as detailed in arXiv preprint arXiv:1609.05807.
reference: The paper 'A survey on fairness in large language models' is available as arXiv preprint arXiv:2308.10149.
reference: Comprehensive surveys on LLM safety and trustworthiness include: safety (Shi et al., 2024), trustworthiness (Huang et al., 2024a, 2024d; Liu et al., 2023c), fairness (Li et al., 2023b; Gallegos et al., 2024; Chu et al., 2024), and privacy (Yao et al., 2024b; Yan et al., 2024; Das et al., 2025).
claim: In the current landscape of Large Language Models, definitions of robustness, fairness, and privacy are often ambiguous and lack simple closed-form mathematical representations compared to traditional machine learning.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... link.springer.com Dec 9, 2025 3 facts
claim: Mehrabi, Morstatter, Saxena, Lerman, and Galstyan published 'A survey on bias and fairness in machine learning' in ACM Computing Surveys in 2021.
claim: Selbst, Boyd, Friedler, Venkatasubramanian, and Vertesi published 'Fairness and abstraction in sociotechnical systems' in the Proceedings of the Conference on Fairness, Accountability, and Transparency in 2019.
reference: Greco, G., Alberici, F., Palmonari, M., and Cosentini, A. developed a method for the declarative encoding of fairness within logic tensor networks, presented at ECAI 2023.
Papers - Dr Vaishak Belle vaishakbelle.github.io 2 facts
How Open-Source AI Drives Responsible Innovation - The Atlantic theatlantic.com 1 fact
claim: The development of AI systems raises concerns regarding safety, fairness, transparency, and accountability.
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends arxiv.org Nov 7, 2024 1 fact
claim: AI algorithms should incorporate societal moral requirements into their evaluation criteria, including fairness, justice, privacy protection, mitigation of prejudice and discrimination, environmental ethics, technological ethics, humanitarianism, and religious considerations.
Understanding LLM Understanding skywritingspress.ca Jun 14, 2024 1 fact
reference: H. Cossette-Lefebvre and Jocelyn Maclure authored the paper 'AI's fairness problem: understanding wrongful discrimination in the context of automated decision-making', published in AI and Ethics in 2023.
SSRN 3618442 | PDF | Behavioral Economics | Risk - Scribd scribd.com 1 fact
claim: Cognitive biases and heuristics, including fairness, framing, and anchoring, influence customer decision-making processes in the insurance industry.
25 Educational Benefits Of Play In Early Childhood Development klaschools.com 1 fact
claim: Waiting for a turn in games or group activities teaches toddlers patience and fairness, helping them internalize the social rules necessary for relationships and classroom participation.
Call for Papers: Special Session on KR and Machine Learning kr.org 1 fact
claim: The success of Machine Learning systems has highlighted issues like explainability, bias, and fairness, which encourage the integration of symbolic or interpretable representations into AI systems.