Relations (1)
cross_type 3.00 — strongly supporting 7 facts
Henry Shevlin is a researcher who actively publishes academic arguments regarding the nature and criteria of consciousness, specifically addressing its application to AI systems [1], [2], and [3]. He further explores the theoretical boundaries of consciousness by analyzing its relationship to computational functionalism [4], the specificity problem [5], and biological requirements like autopoiesis [6].
Facts (7)
Sources
AI Sessions #9: The Case Against AI Consciousness (with Anil Seth) · conspicuouscognition.com · 6 facts
Perspective: Henry Shevlin argues that, for artificial intelligence, determining the necessary conditions for consciousness is more relevant than determining the sufficient conditions, because ruling out consciousness in AI systems clarifies the ethical situation.
Claim: Henry Shevlin notes that the classification of dreamless sleep and general anesthesia as examples of losing consciousness is contested in debates around consciousness.
Claim: Henry Shevlin argues that if consciousness is computational, it must be substrate-invariant, just as money remains the same whether realized in coins, banknotes, or digital balance sheets, and games like poker or chess remain the same regardless of their medium.
Claim: Henry Shevlin defines the "specificity problem" as the difficulty of applying existing theories of consciousness to non-human systems because those theories are often too underspecified.
Claim: Henry Shevlin asserts that while computational functionalism is one path to concluding that AI can be conscious, there are other types of functionalism that also support this conclusion.
Perspective: Henry Shevlin argues that the case of Hisashi Ouchi challenges the necessity of autopoiesis for consciousness: if consciousness persisted despite the cessation of autopoietic processes, then autopoiesis cannot be necessary for consciousness.
Consciousness and AI - Open Encyclopedia of Cognitive Science · oecs.mit.edu · 1 fact
Claim: Henry Shevlin (2021) argues that it is questionable whether the evidence for neuroscientific theories of consciousness, which derives largely from studies of humans and primates, supports extending those theories to AI systems, particularly because such studies do not specify how similar a system's features must be to suffice for consciousness.