LLM outputs
Also known as: LLM-generated outputs, LLM output
Facts (10)
Sources
Not Minds, but Signs: Reframing LLMs through Semiotics (arXiv, arxiv.org), Jul 1, 2025 (6 facts)
claim: Prompts function as semiotic acts, structuring interventions that frame the interpretive conditions of an LLM's output.
claim: Meaning in LLM outputs is not finalized at the moment of generation; it continues to evolve as users evaluate, revise, or contextualize the output, much as traditional texts gain new layers of significance through reinterpretation by different interpretive communities.
claim: LLM outputs function as open invitations to interpretation that require active hermeneutic labor from the reader, since they do not encode fixed meanings.
claim: From a Peircean standpoint, LLM outputs function as representamens: signs that do not point to an object through lived experience but elicit interpretants in human users, generating meaning through contextual interpretation.
claim: Echoing Umberto Eco's theory of the open work, LLM outputs invite multiple readings and rely on the cooperative labor of the user to actualize their significance.
perspective: LLM-generated outputs function as openings rather than endpoints, requiring user engagement to revise or extend the material; this reinforces the idea that meaning arises through interaction.
A Knowledge-Graph Based LLM Hallucination Evaluation Framework (themoonlight.io) (2 facts)
procedure: The GraphEval framework constructs a Knowledge Graph from LLM output through a four-step pipeline: (1) processing the input text, (2) detecting unique entities, (3) performing coreference resolution so that only specific references are retained, and (4) extracting relations to form triples of the form (entity1, relation, entity2).
claim: The GraphEval framework categorizes an entire LLM output as containing a hallucination if at least one triple in the constructed Knowledge Graph is flagged as inconsistent with the provided context.
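The two GraphEval facts above can be sketched as a minimal toy pipeline. This is not the framework's actual implementation: the function names are hypothetical, and the triple extractor is a naive heuristic standing in for the real entity-detection, coreference, and relation-extraction steps. It only illustrates the decision rule: flag the whole output if any extracted triple is unsupported by the context.

```python
# Hypothetical sketch of GraphEval-style triple checking; names and the
# extraction heuristic are illustrative assumptions, not the real framework.

Triple = tuple  # (entity1, relation, entity2)

def extract_triples(text: str) -> list:
    """Toy stand-in for the four-step pipeline: real systems perform entity
    detection, coreference resolution, and relation extraction here."""
    triples = []
    for sentence in text.split("."):
        words = sentence.split()
        if len(words) >= 3:  # naive subject-relation-object split
            triples.append((words[0], words[1], " ".join(words[2:])))
    return triples

def contains_hallucination(output: str, context_triples: set) -> bool:
    # The entire output is flagged if at least one triple is inconsistent
    # with (here: absent from) the context-derived Knowledge Graph.
    return any(t not in context_triples for t in extract_triples(output))

context = {("Paris", "is", "the capital of France")}
print(contains_hallucination("Paris is the capital of France.", context))  # False
print(contains_hallucination("Paris is the capital of Germany.", context))  # True
```

In practice the "inconsistent with context" check would itself be an NLI or LLM-based judgment per triple rather than exact set membership; the any-triple-fails aggregation rule is the part taken directly from the claim above.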
Medical Hallucination in Foundation Models and Their ... (medrxiv.org), Mar 3, 2025 (1 fact)
claim: The primary challenge in medical annotation of LLM outputs is the nuanced distinction between bona fide medical hallucinations and less critical errors, such as temporal discrepancies in patient timelines.
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org), Mar 12, 2026 (1 fact)
reference: Sundararajan et al. (2017) proposed Integrated Gradients, an axiomatic attribution method that assigns a contribution score to each input feature, used for token-level importance analysis in neural NLP and LLM outputs.
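A minimal numerical sketch of Integrated Gradients on a toy differentiable function may clarify the reference above. It approximates the path integral with a midpoint Riemann sum and a finite-difference gradient; real implementations use autodiff over a neural network, and the function here is an assumed toy example, not any model from the survey.

```python
# Integrated Gradients (Sundararajan et al., 2017), numerical sketch:
# IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i (x' + a*(x - x')) da

def gradient(f, x, eps=1e-5):
    """Central finite-difference gradient of f at point x."""
    grads = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += eps
        xm[i] -= eps
        grads.append((f(xp) - f(xm)) / (2 * eps))
    return grads

def integrated_gradients(f, x, baseline, steps=100):
    # Accumulate average gradients along the straight-line path from
    # baseline to x, sampled at midpoints of `steps` equal intervals.
    avg = [0.0] * len(x)
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = gradient(f, point)
        for i in range(len(x)):
            avg[i] += g[i] / steps
    # Attribution = (feature delta) * (average gradient along the path).
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg)]

f = lambda v: v[0] ** 2 + 2 * v[1]       # toy "model"
attrs = integrated_gradients(f, [3.0, 1.0], [0.0, 0.0])
print(attrs, sum(attrs))                 # attributions sum to ~11.0
```

The axiomatic property visible here is completeness: the attributions sum to F(x) - F(baseline) = 11.0, so each feature's score is its share of the output change relative to the baseline, which is what makes the method usable for token-level importance.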