Relations (1)

cross_type 2.81 — strongly supporting 6 facts

Anil Seth provides a critical framework for evaluating language models, arguing that while they lack consciousness, they create a cognitively impenetrable illusion of it [1]. He further explores their potential for 'understanding' [2], compares their cognitive space to that of human minds [3], and critiques the discourse surrounding their moral welfare and nature [5, 6].

Facts (6)

Sources
AI Sessions #9: The Case Against AI Consciousness (with Anil Seth), Conspicuous Cognition (conspicuouscognition.com), 6 facts
Perspective: Anil Seth suggests that language models, particularly those embodied in a world and trained while embodied, could potentially be described as 'understanding' things, even if they lack consciousness.
Perspective: Anil Seth criticizes the term 'stochastic parrots' as reductive, arguing that it is unfair to AI, unfair to actual parrots, and diminishes the human condition by implying that human cognition is fundamentally the same as that of a language model.
Perspective: Anil Seth posits that language models are exploring a different region in the space of possible minds than humans, meaning they may soon outperform humans in many tasks while remaining fundamentally different.
Perspective: Anil Seth argues that calls for AI welfare are dangerous because they reinforce the illusion of AI consciousness, particularly when major technology companies express concern for the moral welfare of their language models.
Perspective: Anil Seth believes that the criteria for a language model to achieve true understanding are more attainable on current technological trajectories than the criteria for achieving consciousness.
Perspective: Anil Seth asserts that AI is not conscious, but notes that interacting with language models creates a cognitively impenetrable illusion of consciousness, similar to visual illusions in which known facts do not override perception.