Relations (1)

related 2.58 — strongly supporting 5 facts

Large Language Models are conceptualized as semiotic machines operating within the semiosphere, as evidenced by their training on textual corpora that constitute a partial, filtered sampling of that environment [1]. Their outputs reflect the semiosphere's inherent polysemy and cultural tensions {fact:2, fact:5}, while prompts serve as mechanisms for engaging specific coordinates within this semiotic space [2], enabling critical analysis of the models' embeddedness in these broader environments [3].

Facts (5)

Sources
Not Minds, but Signs: Reframing LLMs through Semiotics — arXiv (arxiv.org), 5 facts
claim: Prompts act as semiotic catalysts for Large Language Models by triggering selective activation within the model's latent potentials and engaging with the semiosphere at specific coordinates.
perspective: From a semiotic perspective, the linguistic variability in Large Language Model outputs illustrates the model's navigation within the semiosphere, exposing the stratified texture of cultural tensions and semiotic negotiations.
claim: The ability of LLMs to function as semiotic machines within the semiosphere is linked to the vastness and heterogeneity of the textual corpora used for training, which represent a partial and filtered sampling of the semiosphere.
claim: A broader and more diverse training corpus increases the ability of Large Language Models to generate texts that reflect the polysemy, interconnections, and contradictions inherent in the semiosphere.
claim: The semiotic paradigm supports rigorous critical analysis of Large Language Models by foregrounding their embeddedness within broader semiotic environments (the semiosphere) and highlighting how cultural codes, ideological patterns, and user interventions shape outputs.