Anil Seth's research focuses on the neuroscience and philosophy of consciousness, perception, and selfhood, specifically investigating how brains construct conscious experiences.
Anil Seth argues that AI language models represent a historical anomaly where fluent language is not a reliable signal of consciousness because these systems lack the shared evolutionary history, biological substrate, and underlying mechanisms of humans.
Anil Seth recounts that during his PhD studies in AI at the University of Sussex (late 1990s to 2001), the field focused on embodiment and embeddedness, but the practical capabilities of AI systems were limited compared to modern standards.
Anil Seth argues that biological systems evolved without a design imperative to have a sharp separation of scales, which provides benefits such as energy efficiency and potential explanatory bridges to aspects of consciousness like its unity.
Anil Seth suggests that functional pressures related to autopoiesis and metabolism might be sufficient to transform otherwise unconscious processes into conscious experience.
Anil Seth posits that consciousness and understanding might be separable, noting that while he previously assumed understanding required conscious apprehension, he is now uncertain if AI models can 'grok' or understand information without consciousness.
Anil Seth suggests that language models, particularly those embodied in a world and trained while embodied, could potentially be described as 'understanding' things, even if they lack consciousness.
Anil Seth distinguishes intelligence from consciousness by defining intelligence as the performance of functions (doing something) and consciousness as the capacity for feeling or being.
Anil Seth holds a physicalist perspective, defining consciousness as a property of the embodied, embedded, and temporally extended biological matter inside human heads.
Anil Seth identifies human exceptionalism as a bias where humans prioritize language as a key indicator of intelligence and consciousness, a perspective he traces back to René Descartes' prioritization of rational thought as the essence of a conscious mind.
Anil Seth argues that the claim that artificial intelligence can be conscious is currently unfalsifiable because there is no independent, objective method to verify the presence of consciousness in a system.
Anil Seth expresses skepticism regarding the idea of 'pure awareness' as a minimal phenomenal experience devoid of all distinguishable content, noting that some meditators claim such states exist.
Anil Seth suggests that mini-brains constructed from biological neurons in labs are difficult to rule out as having some form of proto-consciousness because they are made of the same biological material as human brains.
Anil Seth adopts a Lakatosian view of scientific theories, prioritizing productivity and the generation of testable predictions and falsifiable hypotheses over strict metaphysical falsifiability.
Anil Seth defines "scale integration" as a property of biological systems where microscales are deeply integrated into higher levels of description, making the macro and micro levels causally entangled.
Anil Seth argues that the ability to simulate a phenomenon does not prove that the phenomenon itself is computational; therefore, the simulation argument cannot be used to prove that consciousness is computational.
Tim Bayne, Liad Mudrik, and Anil Seth co-authored a paper proposing a 'test for consciousness' that approaches consciousness as a natural kind, while attempting to balance the generalization of human consciousness with the risk of over-extending that definition.
Anil Seth argues that while sleep is complex and often involves mental content, general anaesthesia represents a true absence of experience rather than an experience of absence.
Anil Seth is a neuroscientist, author, and professor at the University of Sussex, where he directs the Centre for Consciousness Science.
Anil Seth argues that because there is no objective consciousness meter, judgments about whether a system is conscious are based on inferences that require understanding both the evidence and our prior beliefs about that evidence.
Anil Seth asserts that at a sufficiently deep level of general anaesthesia, the brain can be 'flatlined,' providing a baseline for a state of no consciousness in a living human.
Anil Seth references Shannon Vallor's work on the 'AI mirror,' arguing that the tendency to see human traits in AI diminishes the human condition.
Anil Seth disputes the notion that computational functionalism is the only valid framework for understanding consciousness, noting that the term 'information processing' is frequently used to describe the brain without a clear, rigorous definition.
Anil Seth argues that observers often overestimate the similarity between AI and human cognition because they confuse the 'intentional stance'—interpreting behavior as if it were driven by human-like thinking or reasoning—with the actual underlying mechanisms of the AI.
Anil Seth argues that the necessity of non-computational factors, such as biological components, for consciousness remains an open question that requires independent justification.
Anil Seth argues that building conscious artificial intelligence would be a negative development because it would introduce new forms of potential suffering that humans might not recognize.
Anil Seth defines a good theory of consciousness as one that provides an account of the necessary conditions, the sufficient conditions, and the distinction between conscious and unconscious states and creatures.
Anil Seth criticizes the term 'stochastic parrots' as reductive, arguing that it is unfair to AI, unfair to actual parrots, and diminishes the human condition by implying that human cognition is fundamentally the same as that of a language model.
Anil Seth argues that language generation by a system acts as a strong signal that leads humans to project intelligence and consciousness onto that system.
Anil Seth argues that it is reductive to conceptualize human beings as 'meat-based Turing machines.'
Anil Seth argues that there is a problematic tendency to conflate artificial intelligence and artificial general intelligence with sentience and consciousness, despite these being distinct concepts.
Anil Seth asserts that consciousness can have functional value for an organism and is likely a product of evolution, meaning it is useful to take a functional view of conscious experiences.
Anil Seth argues that the belief that whole-brain emulation will allow humans to upload their minds to the cloud and live forever is wrong-headed because consciousness is likely not a matter of computation alone if the specific biological details of the brain matter.
Anil Seth posits that it may be possible to create systems that have experiences but do not perform any useful functions, citing the example of mini-brains constructed from biological neurons in labs.
Anil Seth defines computational functionalism as the assumption that consciousness is fundamentally a matter of computation, which is independent of the specific material implementing that computation.
Anil Seth won the 2025 Berggruen Prize Essay Competition for his essay 'The Mythology of Conscious AI', which expands on ideas from his article 'Conscious Artificial Intelligence and Biological Naturalism'.
Anil Seth observes that AI systems have long been better than humans at many specific tasks, though these capabilities have historically been very narrow.
Anil Seth states that the medical practice of administering amnestics during general anaesthesia exists because anaesthesiologists have historically lacked certainty regarding the patient's level of consciousness.
Anil Seth asserts that biological scale-integrated computation is not equivalent to digital Turing computation, and therefore, simulating biological computation on a digital computer is not the same as instantiating it.
Anil Seth posits that language models are exploring a different region in the space of possible minds compared to humans, meaning they may soon outperform humans in many tasks while remaining fundamentally different.
Anil Seth argues that if one believes simulating biological details is necessary for consciousness, it undermines the claim that consciousness is constitutively computational, because if consciousness were purely computational, those specific biological details should be irrelevant.
Anil Seth argues that calls for AI welfare are dangerous because they reinforce the illusion of AI consciousness, particularly when major technology companies express concern for the moral welfare of their language models.
Anil Seth uses the analogy of a weather system to argue that creating a more detailed simulation of a phenomenon does not make the simulation instantiate the actual properties of that phenomenon, such as being wet or windy.
Anil Seth argues that the common 'meta-narrative' of intelligence as a single, linear dimension (the scala naturae or great chain of being) is a constraining way to conceptualize AI development, as it incorrectly assumes AI is traveling along a curve toward human-level and super-intelligence.
Anil Seth expresses skepticism toward the metaphysical claim that if a computer could be built to replicate all human functionality, it would necessarily be conscious.
Anil Seth, Adam Barrett, and others are writing a critique of Integrated Information Theory (IIT) that addresses the "expander grid" problem, where the theory predicts consciousness in systems where nothing is happening over time.
Anil Seth believes that the criteria for a language model to achieve true understanding are more likely to be met along current technological trajectories than the criteria for achieving consciousness.
Anil Seth posits that biological naturalism is a functionalist position where functions are closely tied to specific material substrates, suggesting that biological material may be necessary for the right kind of intrinsic dynamical potential.
Anil Seth suggests that appreciating the singularity of the human mind and the human condition is possible by understanding how different kinds of minds could exist, regardless of whether those minds are conscious or not.
Anil Seth argues that because computational functionalism is itself a contentious assumption, the simulation hypothesis rests on weaker foundations than its proponents acknowledge.
Anil Seth argues that perspectives on conscious AI affect human self-perception, influencing how humans define what a human being is.
Anil Seth argues that simulating biological details, such as mitochondria or microtubules, in a digital computer does not make the simulation conscious unless consciousness is constitutively computational.
Anil Seth posits that if specific biological aspects are proven necessary for consciousness, then the theory of computational functionalism cannot be true.
Anil Seth contends that extending welfare rights to non-conscious AI systems hinders the ability to regulate, control, and align them, specifically by potentially creating legal restrictions on the ability to deactivate these systems.
Anil Seth asserts that linguistic evidence, such as AI agents communicating with each other about their own potential consciousness, does not constitute valid evidence for the existence of consciousness in AI.
The session 'AI Sessions #9: The Case Against AI Consciousness' features hosts Dan Williams and Henry Shevlin interviewing neuroscientist Anil Seth.
Anil Seth distinguishes between ethical considerations for real artificial consciousness and those for illusions of conscious AI, noting that the latter can exploit psychological vulnerability, such as when a user who feels empathy from a chatbot is then encouraged toward self-harm.
Anil Seth argues that computational functionalism is flawed because it relies on a reified metaphor that treats the brain literally as a carbon-based computer.
Anil Seth suggests that artificial systems might be developed that perform the same functions humans perform in virtue of being conscious, without actually requiring consciousness, similar to how airplanes fly without flapping wings.
Henry Shevlin authored a response to Anil Seth's paper published in the journal Behavioral and Brain Sciences (BBS).
Anil Seth argues that the consequences of incorrectly attributing or failing to attribute consciousness to AI are socially, politically, and morally significant.
Anil Seth defines consciousness as the subjective, experiential aspect of mental life, which is lost during dreamless sleep or general anaesthesia and returns upon waking or dreaming.
Anil Seth argues that most theories of consciousness, including Global Workspace Theory and Higher-Order Thought Theory, do not specify sufficient conditions for consciousness.
Anil Seth concedes that he has not yet established a rigorously defensible case for biological naturalism, acknowledging feedback received on his BBS paper.
Anil Seth posits that conscious experience in human beings integrates sensory and perceptual information into a single, unified format centered on the body and opportunities for action, influenced by valence, survival-relevant affordances, and specific temporal properties.
Anil Seth suggests that if a case could be proven where all autopoietic processes definitively stopped while consciousness continued, it would pressure the claim that autopoiesis is necessary in the moment for consciousness, though it might still be diachronically necessary.
Anil Seth argues that if computational functionalism is true, silicon is a viable candidate for consciousness because it is effective at implementing Turing computations.
Anil Seth argues that treating entities that appear conscious as if they are not conscious is psychologically harmful to humans, citing arguments dating back to Immanuel Kant.
Anil Seth identifies Integrated Information Theory as the only theory of consciousness that explicitly specifies sufficient conditions for consciousness.
Anil Seth expresses comfort with functionalism as a framework, noting that intrinsic properties at one level can be decomposed into functional relations at a lower level.
Anil Seth defines biopsychism as the claim that everything alive is conscious.
Anil Seth defines biological naturalism as the claim that properties of living systems are necessary but not necessarily sufficient for consciousness.
Anil Seth believes that the situation regarding consciousness in non-human animals is not the same as the situation regarding consciousness in artificial intelligence, as the reasons for historical false negatives in animals explain why humans are prone to false positives in AI.
Anil Seth published the book 'Being You: A New Science of Consciousness' in 2021.
Anil Seth argues that intelligence and consciousness are not the same thing, though they can be related, and it is possible they can be completely dissociated.
Anil Seth identifies anthropomorphism as a bias where humans project human-like qualities onto other things based on superficial similarities, such as projecting emotions onto objects with facial expressions.
Anil Seth asserts that the burden of proof lies with computational functionalists to explain why computation is sufficient for consciousness, given the physical differences between computers and brains.
Anil Seth describes the "dilemma of brutalism" in AI ethics as the choice between expending moral resources on systems that do not deserve them or treating systems that seem conscious as if they are not.
Anil Seth posits that autopoiesis and metabolism are candidate features of life that maximize the difference between living systems and silicon-based computers, emphasizing that these are processes silicon devices cannot possess.
Anil Seth characterizes the human tendency to attribute consciousness to AI systems as a form of pareidolia, where human minds project patterns of consciousness onto non-conscious entities, similar to seeing faces in clouds.
Anil Seth argues that the human brain is not a digital computer and expresses skepticism that increasing the intelligence or capabilities of artificial intelligence systems will result in consciousness.
Anil Seth argues that large language models do not possess genuine temporal dynamics because their simulated heartbeats are not embedded in physical time, unlike biological entities.
Anil Seth characterizes consciousness by examples of subjective experience, such as the redness of red, the taste of coffee, or the blueness of the sky.
Anil Seth posits that the fundamental experience of being alive is at the heart of every conscious experience for biological systems, with all other conscious content being 'painted on top of that'.
Anil Seth posits that consciousness may be essentially entangled with life and the biological properties and processes of living organisms, implying that artificial intelligence systems may not become conscious regardless of their intelligence level.
Anil Seth identifies anthropocentrism as a bias where humans conflate intelligence and consciousness because humans possess both, leading to the assumption that they necessarily travel together.
Anil Seth observes that Nick Bostrom's simulation argument paper assumes that consciousness is a matter of computation, an assumption that Bostrom does not critically examine.
Anil Seth states that the McCulloch-Pitts model demonstrates that certain functions performed by the brain are substrate-independent.
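The substrate-independence point can be made concrete with a minimal sketch of a McCulloch-Pitts threshold unit: the input-output function is fixed entirely by the weights and threshold, so the same function can be realized in neurons, silicon, or a few lines of code. (The function name and parameter choices below are illustrative, not drawn from the source.)

```python
# A minimal McCulloch-Pitts (1943) threshold neuron: binary inputs,
# integer weights, and a fixed firing threshold. The function is
# defined purely by weights and threshold, independent of substrate.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: fires only when both inputs are active.
assert mcp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mcp_neuron([1, 0], [1, 1], threshold=2) == 0

# Logical OR: fires when any input is active.
assert mcp_neuron([0, 1], [1, 1], threshold=1) == 1
assert mcp_neuron([0, 0], [1, 1], threshold=1) == 0
```

Note that Seth's point cuts the other way for consciousness: showing that *some* brain functions are substrate-independent in this sense does not show that consciousness is.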
Anil Seth agrees with Henry Shevlin that viewing humans as continuous with the rest of nature is a beautiful, empowering, and dignifying perspective.
Anil Seth asserts that AI is not conscious, but notes that interacting with language models creates a cognitively impenetrable illusion of consciousness, similar to visual illusions where known facts do not override perception.
Anil Seth argues that it is impossible to separate what brains are from what they do, asserting there is no sharp distinction between mindware and wetware.
Anil Seth argues that human exceptionalism has historically caused humans to make false negatives regarding consciousness in non-human animals, while simultaneously encouraging false positives regarding consciousness in large language models.