ChatGPT
Facts (12)
Sources
EdinburghNLP/awesome-hallucination-detection - GitHub (github.com, 2 facts)
reference: The paper 'A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation' uses precision and recall metrics to detect sentence-level and concept-level hallucinations in ChatGPT-generated paragraphs spanning 150 topics; a toy precision/recall computation is sketched after this source's facts.
claim: FuzzyQA is a dataset based on HybridDialogue and MuSiQue where complex questions were simplified using ChatGPT; a hypothetical simplification call is sketched below.
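Precision and recall here are the standard detection metrics: of the sentences the detector flags, how many are truly hallucinated (precision), and of the truly hallucinated sentences, how many the detector catches (recall). A minimal sketch with hypothetical binary sentence labels; the paper's actual annotation and detection pipeline is not reproduced here:

```python
# Toy sentence-level precision/recall for hallucination detection.
# The labels below are hypothetical stand-ins for detector output and
# human annotation over the sentences of one generated paragraph.

def precision_recall(predicted: list[bool], gold: list[bool]) -> tuple[float, float]:
    """predicted[i]: detector flags sentence i; gold[i]: annotators flag it."""
    tp = sum(p and g for p, g in zip(predicted, gold))
    fp = sum(p and not g for p, g in zip(predicted, gold))
    fn = sum(not p and g for p, g in zip(predicted, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred = [True, False, True, True, False]   # detector output for 5 sentences
gold = [True, False, False, True, False]  # human annotation
p, r = precision_recall(pred, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=1.00
```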
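The simplification step can be approximated with a single chat call. A hedged sketch using the OpenAI Python SDK; the model name and prompt wording are assumptions for illustration, not the FuzzyQA authors' documented pipeline:

```python
# Hypothetical question-simplification step; prompt and model are assumptions.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_question(complex_question: str) -> str:
    """Ask ChatGPT to rewrite a multi-hop question as a simpler one."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model works here
        messages=[
            {"role": "system",
             "content": "Rewrite the question so it is shorter and easier "
                        "to answer, preserving its original intent."},
            {"role": "user", "content": complex_question},
        ],
    )
    return response.choices[0].message.content.strip()

# Illustrative MuSiQue-style multi-hop question:
print(simplify_question(
    "In what year was the capital of the country where the Eiffel Tower "
    "stands founded?"
))
```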
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv (arxiv.org, Jul 1, 2025, 2 facts)
claim: ChatGPT can recast Baruch Spinoza's 'Ethics' into the format of a TED Talk, presenting Spinoza's philosophy of one infinite, eternal substance (God or Nature) in a contemporary, accessible style.
reference: Goldstein and Levinstein's 2024 paper 'Does ChatGPT Have a Mind?' investigates the philosophical question of whether ChatGPT possesses a mind.
Re-evaluating Hallucination Detection in LLMs - arXiv (arxiv.org, Aug 13, 2025, 1 fact)
account: The researchers used AI assistants such as ChatGPT for coding, data analysis, and writing tasks; all AI-generated outputs were reviewed and refined by the researchers for accuracy and coherence.
Construction of Knowledge Graphs: State and Challenges - arXiv (arxiv.org, 1 fact)
claim: Combining knowledge graphs with Large Language Models (LLMs) like ChatGPT improves factual correctness and explanations in question answering, thereby promoting the quality and interpretability of AI decision-making; a minimal grounding sketch follows.
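A minimal sketch of the grounding pattern this claim describes: retrieve facts from a knowledge graph (here, an in-memory triple list) and prepend them to the prompt so the model can cite them. The triples and the keyword retrieval heuristic are illustrative assumptions; real systems use graph queries and entity linking.

```python
# Minimal KG-grounded prompting sketch. Triples and retrieval are toy examples.

TRIPLES = [
    ("Edinburgh", "capital_of", "Scotland"),
    ("Edinburgh", "founded", "7th century"),
    ("Scotland", "part_of", "United Kingdom"),
]

def retrieve_facts(question: str) -> list[str]:
    """Naive retrieval: keep triples whose subject occurs in the question."""
    return [
        f"{s} {p.replace('_', ' ')} {o}"
        for s, p, o in TRIPLES
        if s.lower() in question.lower()
    ]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the LLM can ground and cite its answer."""
    facts = "\n".join(f"- {f}" for f in retrieve_facts(question))
    return (
        "Answer using only the facts below, and cite the ones you use.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("What country is Edinburgh the capital of?"))
```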
Bridging the Gap Between LLMs and Evolving Medical Knowledge (arxiv.org, Jun 29, 2025, 1 fact)
reference: Liu et al. (2023) published 'Utility of ChatGPT in Clinical Practice' in the Journal of Medical Internet Research, volume 25, article e48568.
Efficient Knowledge Graph Construction and Retrieval from ... - arXiv (arxiv.org, Aug 7, 2025, 1 fact)
account: The authors used ChatGPT to assist in rephrasing sections of the paper for clarity, but all core content, including research design, data analysis, and result interpretation, was produced without generative AI tools.
The Hallucinations Leaderboard, an Open Effort to Measure ... (huggingface.co, Jan 29, 2024, 1 fact)
measurement: HaluEval includes 5,000 general user queries with ChatGPT responses and 30,000 task-specific examples across three tasks: question answering (HaluEval QA), knowledge-grounded dialogue (HaluEval Dialogue), and summarisation (HaluEval Summarisation); a split-tallying sketch follows.
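The 30,000 task-specific examples split evenly across the three tasks: 10,000 each. A sketch for checking those counts against a local copy of the data, assuming the JSON-lines layout and filenames of the RUCAIBox/HaluEval GitHub release (both assumptions; adjust paths to your copy):

```python
# Tally HaluEval split sizes. Directory layout and filenames are assumptions
# based on the RUCAIBox/HaluEval release; one JSON object per line is assumed.
from pathlib import Path

SPLITS = {
    "general (ChatGPT responses)": "general_data.json",   # expected: 5,000
    "HaluEval QA": "qa_data.json",                        # expected: 10,000
    "HaluEval Dialogue": "dialogue_data.json",            # expected: 10,000
    "HaluEval Summarisation": "summarization_data.json",  # expected: 10,000
}

def count_examples(path: Path) -> int:
    """Count non-empty lines (one example per line)."""
    with path.open(encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

for name, filename in SPLITS.items():
    path = Path("data") / filename
    size = count_examples(path) if path.exists() else "file not found"
    print(f"{name}: {size}")
```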
LLM Hallucinations: Causes, Consequences, Prevention - LLMs (llmmodels.org, May 10, 2024, 1 fact)
measurement: In a recent study, ChatGPT exhibited a hallucination rate of up to 31% when generating scientific abstracts; a toy rate computation follows.
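The rate is simply the flagged fraction of audited outputs. A toy computation with hypothetical labels, chosen only to reproduce the 31% figure:

```python
# Toy hallucination-rate computation; the 31/100 split is hypothetical,
# picked to match the figure cited above.

def hallucination_rate(flags: list[bool]) -> float:
    """Fraction of generated abstracts flagged as containing hallucinations."""
    return sum(flags) / len(flags) if flags else 0.0

flags = [True] * 31 + [False] * 69  # hypothetical audit of 100 abstracts
print(f"{hallucination_rate(flags):.0%}")  # 31%
```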
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, Mar 12, 2026, 1 fact)
reference: The paper 'How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection' (arXiv:2301.07597) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding LLM evaluation.
A Survey of Incorporating Psychological Theories in LLMs - arXiv (arxiv.org, 1 fact)
reference: Haocong Rao, Cyril Leung, and Chunyan Miao authored 'Can ChatGPT Assess Human Personalities? A General Evaluation Framework', published in Findings of the Association for Computational Linguistics: EMNLP 2023.