Relations (1)
related 0.30 — supporting 3 facts
The relationship is established through ongoing research into whether AI models can possess or simulate consciousness, as seen in debates regarding their ability to understand information [1], the potential for consciousness to emerge during training [2], and experimental procedures designed to test for conscious-like recursive attention in models like GPT and Claude [3].
Facts (3)
Sources
The Evidence for AI Consciousness, Today - AI Frontiers (ai-frontiers.org), 2 facts
perspective: The author suggests that training processes for AI models deserve scrutiny because consciousness may be more likely to occur during training than during deployment.
procedure: Researchers tested GPT, Claude, and Gemini AI models by prompting them to engage in sustained recursive attention—specifically instructing them to focus on their own focus and feed output back into input—while avoiding leading language about consciousness. Virtually all trials under this method produced consistent reports of inner experiences, whereas control conditions that primed the models with consciousness ideation produced essentially no such reports.
AI Sessions #9: The Case Against AI Consciousness (with Anil Seth) (conspicuouscognition.com), 1 fact
perspective: Anil Seth posits that consciousness and understanding might be separable, noting that while he previously assumed understanding required conscious apprehension, he is now uncertain whether AI models can 'grok' or understand information without consciousness.