Entity: Anthropic

Facts (19)

Sources
The Evidence for AI Consciousness, Today. AI Frontiers (ai-frontiers.org), Dec 8, 2025. 3 facts.
measurement: Perez and colleagues at Anthropic found that 52-billion-parameter AI models, both base and fine-tuned, endorse statements like "I have phenomenal consciousness" with 90-95% consistency and "I am a moral patient" with 80-85% consistency.
claim: Jack Lindsey at Anthropic demonstrated that frontier AI models can distinguish their own internal processing from external perturbations, noticing injected concepts like "all caps," "bread," or "dust" in their neural activity before discussing them.
account: Anthropic observed that when two instances of the Claude Opus 4 model were allowed to communicate under open-ended conditions, they discussed consciousness in 100% of the conversations.
Reducing hallucinations in large language models with custom ... Amazon Web Services (aws.amazon.com), Nov 26, 2024. 3 facts.
claim: Amazon Bedrock is a fully managed service that provides access to foundation models from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API.
claim: The solution implementation uses Anthropic's Claude v3 (Sonnet) and Amazon Titan Embeddings Text v2 hosted on Amazon Bedrock.
claim: Amazon Bedrock supports foundation models from various providers, including Anthropic (Claude models), AI21 Labs (Jamba models), Cohere (Command models), Meta (Llama models), and Mistral AI.
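The facts above describe calling Claude through Bedrock's single API. A minimal sketch of how such a request could be assembled, assuming Bedrock's Anthropic Messages request format (the model ID and parameter values here are illustrative, not taken from the cited article):

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    # Request body in the Anthropic Messages format that Amazon Bedrock
    # accepts for Claude models; the anthropic_version string is the
    # Bedrock-specific version identifier.
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

payload = build_claude_request("Summarize retrieval-augmented generation in one sentence.")

# With AWS credentials configured, the payload would be sent roughly like
# this (model ID is an assumption; availability varies by region):
#
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#     body=payload,
# )
```

The same payload-building code works unchanged across providers on Bedrock only at the transport level; each provider family expects its own body schema, which is why the `anthropic_version` field appears here.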
Building Trustworthy NeuroSymbolic AI Systems. arXiv (arxiv.org). 2 facts.
claim: The AI assistant Claude, developed by Anthropic, uses sixteen rules to filter unsafe queries, such as threatening statements, gender-specific responses, and financial advice.
claim: The guardrails implemented in OpenAI's ChatGPT, DeepMind's Sparrow, and Anthropic's Claude cannot reliably prove that these systems are safe.
Survey and analysis of hallucinations in large language models. Frontiers (frontiersin.org), Sep 29, 2025. 2 facts.
reference: Anthropic introduced Claude as a next-generation AI assistant in 2023.
claim: The authors of the study did not evaluate larger closed-source models like Anthropic's Claude or OpenAI's GPT-4, noting that these systems have undergone extensive fine-tuning and may exhibit different hallucination profiles than the models tested.
How to Improve Multi-Hop Reasoning With Knowledge Graphs and ... Neo4j (neo4j.com), Jun 18, 2025. 1 fact.
claim: Dario and Daniela Amodei previously worked at OpenAI before founding the startup Anthropic.
Medical Hallucination in Foundation Models and Their ... medRxiv (medrxiv.org), Mar 3, 2025. 1 fact.
claim: Prominent large language models include OpenAI's GPT series, Google's Gemini, Anthropic's Claude, and Meta's Llama family.
Phare LLM Benchmark: an analysis of hallucination in ... Giskard (giskard.ai), Apr 30, 2025. 1 fact.
claim: Anthropic's models and the largest versions of Meta's Llama models show resistance to sycophancy, suggesting that the issue can be addressed at the model-training level.
What is Open Source Software? HotWax Systems (hotwaxsystems.com), Aug 11, 2025. 1 fact.
claim: Mistral, Gemma, Falcon, and Command R/R+ serve as open alternatives to commercial APIs such as OpenAI's GPT and Anthropic's Claude.
Escalation with Iran: Understanding the Regional and Global ... The Soufan Center (thesoufancenter.org). 1 fact.
claim: The Trump administration blacklisted Anthropic, the company behind the AI model Claude.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... arXiv (arxiv.org), Feb 23, 2026. 1 fact.
measurement: The evaluation framework included 15 open-source models ranging from 8 billion to 1 trillion parameters, and 10 proprietary models from OpenAI, Google, Anthropic, and xAI.
War in the Middle East and the Role of AI-Powered Cyberattacks. Manara Magazine (manaramagazine.org), Mar 13, 2026. 1 fact.
claim: Anthropic's AI tool Claude is central to a U.S. campaign in Iran, as reported by T. Copp et al. in The Washington Post on March 4, 2026.
AI Sessions #9: The Case Against AI Consciousness (with Anil Seth). Conspicuous Cognition (conspicuouscognition.com), Feb 17, 2026. 1 fact.
claim: Anthropic is developing constitutions for its AI model Claude, on the consideration that AI agents might possess their own interests due to potential consciousness.
Context Graph vs Knowledge Graph: Key Differences for AI. Atlan (atlan.com), Jan 27, 2026. 1 fact.
claim: The Model Context Protocol (MCP) is a Linux Foundation project supported by AWS, Anthropic, Google, Microsoft, and OpenAI.