Concept

Text generation

Facts (17)

Sources
Practices, opportunities and challenges in the fusion of knowledge ... (frontiersin.org, Frontiers) - 3 facts
reference: P. Ke, H. Ji, Y. Ran, X. Cui, L. Wang, L. Song, et al. published 'JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs' as an arXiv preprint in 2021.
reference: GPT-NER (Wang S. et al., 2023) improves named entity recognition by recasting the sequence labeling task as a text generation task, using special markers to identify entities.
reference: Decoder-only models such as GPT, OPT, and LLaMA use unidirectional attention and auto-regressive token prediction, which makes them excel at text generation tasks such as chatbots, text summarization, and code generation.
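The auto-regressive loop described in the entry above can be sketched in a few lines. This is a toy illustration, not any model's actual implementation: the bigram lookup table standing in for a trained language model is purely hypothetical.

```python
# Toy sketch of auto-regressive (left-to-right) decoding, as used by
# decoder-only models: each new token is predicted from the tokens
# generated so far. The bigram table is a hypothetical stand-in for
# a real next-token distribution.

TOY_BIGRAMS = {
    "<s>": "text",
    "text": "generation",
    "generation": "works",
    "works": "</s>",
}

def next_token(context: list[str]) -> str:
    """Predict the next token from the context (here: last token only)."""
    return TOY_BIGRAMS.get(context[-1], "</s>")

def generate(max_len: int = 10) -> list[str]:
    tokens = ["<s>"]
    # Auto-regressive loop: append one token at a time, conditioning on
    # everything generated so far, until the end marker or length limit.
    while len(tokens) < max_len:
        tok = next_token(tokens)
        if tok == "</s>":
            break
        tokens.append(tok)
    return tokens[1:]  # drop the start marker

print(generate())  # ['text', 'generation', 'works']
```

A real decoder-only model replaces the lookup table with a transformer that scores every vocabulary token given the full prefix; the surrounding loop is the same.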
Medical Hallucination in Foundation Models and Their ... (medrxiv.org, medRxiv, Mar 3, 2025) - 2 facts
claim: The observed inter-rater reliability in the study was moderate, but sufficient to support the identification of systematic biases and error modalities within the clinical reasoning and text generation capabilities of the language models.
claim: Flan-PaLM models show high performance on medical benchmarks, demonstrating the potential of targeted pre-training for complex medical reasoning and text generation.
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com, Springer, Nov 4, 2024) - 2 facts
measurement: OpenAI's GPT-3 model contains 175 billion parameters and is known for high-quality text generation, translation, question answering, and summarization.
claim: Large language models have achieved milestones across NLP tasks including text generation, machine translation, sentiment analysis, and conversational AI.
What Really Causes Hallucinations in LLMs? - AI Exploration Journey (aiexpjourney.substack.com, AI Innovations and Insights, Sep 12, 2025) - 1 fact
claim: Hallucinations in large language models are statistically inevitable if text generation is treated as a binary classification problem of deciding whether a continuation is valid, because every classifier makes some errors, and those errors propagate to the generator.
A framework to assess clinical safety and hallucination rates of LLMs ... (nature.com, Nature, May 13, 2025) - 1 fact
reference: The BERTScore metric, detailed in 'BERTScore: Evaluating Text Generation with BERT' (arXiv:1904.09675; ICLR 2020), uses BERT embeddings to evaluate text generation quality.
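The soft-matching idea behind BERTScore can be illustrated with a minimal sketch. This is a toy under stated assumptions, not the bert-score library: the 2-d vectors stand in for real contextual BERT embeddings, and only the precision direction of the metric is shown.

```python
import math

# Toy sketch of the idea behind BERTScore: score a candidate against a
# reference by soft token matching with embedding cosine similarity,
# rather than exact n-gram overlap. The 2-d vectors are hypothetical
# stand-ins for contextual BERT embeddings.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def soft_precision(cand_embs, ref_embs):
    # Match each candidate token to its most similar reference token
    # and average the best-match similarities (BERTScore-style precision).
    return sum(max(cosine(c, r) for r in ref_embs) for c in cand_embs) / len(cand_embs)

# Hypothetical embeddings: the candidate tokens are near-synonyms of the
# reference tokens, so the score stays high despite zero exact overlap.
cand = [(0.9, 0.1), (0.2, 0.8)]   # e.g. tokens "kitten", "sits"
ref = [(1.0, 0.0), (0.0, 1.0)]    # e.g. tokens "cat", "sat"
print(round(soft_precision(cand, ref), 3))
```

The full metric also computes the symmetric recall (matching each reference token against the candidate) and reports their F1.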
Unknown source - 1 fact
claim: BLEU, ROUGE, and METEOR are traditional automatic metrics used for evaluating text generation.
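The n-gram-overlap idea shared by these metrics can be shown with a minimal sketch of clipped (modified) unigram precision, the building block of BLEU. This is a toy illustration, not any library's implementation, and it omits BLEU's higher-order n-grams and brevity penalty.

```python
from collections import Counter

# Toy sketch of the n-gram-overlap idea behind metrics like BLEU and
# ROUGE: modified unigram precision clips each candidate word's count
# by its count in the reference, so repeating a word cannot inflate
# the score.

def unigram_precision(candidate: str, reference: str) -> float:
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    return clipped / max(sum(cand.values()), 1)

score = unigram_precision("the cat sat on the mat", "the cat is on the mat")
print(round(score, 2))  # 0.83 -- 5 of 6 candidate tokens are matched
```

ROUGE inverts the direction (recall against the reference), and METEOR adds stemming and synonym matching on top of the same overlap idea.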
Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, M. Brenndoerfer, Mar 15, 2026) - 1 fact
claim: Large language models can be extremely fluent about topics they lack factual knowledge of, because fluency is a learned property of text generation rather than a property of factual recall.
Applying Large Language Models in Knowledge Graph-based ... (arxiv.org, Benedikt Reitemeyer and Hans-Georg Fill, Jan 7, 2025) - 1 fact
claim: Large language models (LLMs) increase the accessibility of artificial intelligence experimentation by allowing users to trigger text or image generation through natural language prompts.
Building Trustworthy NeuroSymbolic AI Systems (arxiv.org, arXiv) - 1 fact
reference: Sellam, Das, and Parikh (2020) introduced BLEURT, a learned metric for robust evaluation of text generation.
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org, arXiv, Sep 22, 2025) - 1 fact
claim: Question answering (QA) is a fundamental component of artificial intelligence, natural language processing, information retrieval, and data management, with applications including text generation, chatbots, dialog generation, web search, entity linking, natural language querying, and fact-checking.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... (arxiv.org, arXiv, Feb 23, 2026) - 1 fact
reference: The paper 'FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation' was published in the Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Singapore, pp. 12076–12100.
Why Large Language Models Hallucinate (youtube.com, YouTube, Apr 20, 2023) - 1 fact
claim: Large language models do not hallucinate in the traditional sense; they function by generating text that adheres to spelling and grammar rules, treating sensible and nonsensical outputs identically.
Re-evaluating Hallucination Detection in LLMs (arxiv.org, arXiv, Aug 13, 2025) - 1 fact
reference: The paper 'BERTScore: Evaluating Text Generation with BERT' by Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi was published at the 8th International Conference on Learning Representations (ICLR 2020).