Relations (1)
cross_type (score 2.58) — strongly supporting, 5 facts
Henry Shevlin is a researcher who extensively analyzes the ethical, cognitive, and consciousness-related implications of artificial intelligence, as evidenced by his arguments regarding AI consciousness [1], [2], the limitations of neuroscientific theories applied to AI [3], the risks of anthropomorphism in social AI [4], and the cognitive performance of AI systems [5].
Facts (5)
Sources
AI Sessions #9: The Case Against AI Consciousness (with Anil Seth) — conspicuouscognition.com (4 facts)
perspective: Henry Shevlin argues that for artificial intelligence, determining the necessary conditions for consciousness is more relevant than determining sufficient conditions, because ruling out consciousness in AI systems clarifies the ethical situation.
claim: Henry Shevlin identifies the danger of anthropomorphism and anthropocentrism as a major ethical issue in AI, noting that humans may develop highly dependent relationships with social AI, leading to phenomena like AI psychosis.
claim: Henry Shevlin asserts that AI systems have achieved human-level performance on a wide range of verbal reasoning tasks and can produce high-quality fiction, suggesting that the attribution of cognitive abilities to AI is not entirely a result of pareidolia.
claim: Henry Shevlin asserts that while computational functionalism is one path to concluding that AI can be conscious, other types of functionalism also support this conclusion.
Consciousness and AI — Open Encyclopedia of Cognitive Science — oecs.mit.edu (1 fact)
claim: Henry Shevlin (2021) argues that it is questionable whether the evidence for neuroscientific theories of consciousness, which is largely derived from studies of humans and primates, supports extending those theories to AI systems, particularly because such studies do not specify how similar an AI system's features must be to suffice for consciousness.