Amazon Bedrock Knowledge Bases
Facts (13)
Sources
Evaluating RAG applications with Amazon Bedrock knowledge base ... (aws.amazon.com, Mar 14, 2025; 11 facts)
claim: Amazon Bedrock Knowledge Bases evaluation measures generation quality using metrics for correctness, faithfulness (to detect hallucinations), and completeness.
claim: Amazon Bedrock Knowledge Bases evaluation lets developers systematically evaluate both retrieval and generation quality in RAG systems and adjust build-time or runtime parameters accordingly.
claim: Amazon Bedrock Knowledge Bases evaluation supports both ground-truth and reference-free evaluation methods.
claim: Amazon Bedrock launched two evaluation capabilities: LLM-as-a-judge (LLMaaJ) under Amazon Bedrock Evaluations and a RAG evaluation tool for Amazon Bedrock Knowledge Bases.
measurement: Amazon Bedrock Knowledge Bases evaluation metrics are normalized to a range between 0 and 1.
claim: Both the LLMaaJ capability and the RAG evaluation tool for Amazon Bedrock Knowledge Bases use LLM-as-a-judge technology, combining the speed of automated methods with human-like, nuanced understanding.
claim: Amazon Bedrock Knowledge Bases evaluation supports the assessment of fine-tuned or distilled models.
claim: Amazon Bedrock Knowledge Bases evaluation incorporates built-in responsible AI metrics, including harmfulness, answer refusal, and stereotyping, and integrates with Amazon Bedrock Guardrails.
claim: Amazon Bedrock Knowledge Bases evaluation provides natural-language explanations for each score, both in the output and on the console.
procedure: Amazon Bedrock Knowledge Bases evaluation uses an LLM as a judge to assess retrieval metrics, specifically context relevance and coverage.
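The judging step above can be sketched in miniature. This is a hypothetical illustration of how an LLM-as-a-judge context-relevance check is typically structured (compose a judge prompt, ask a model, parse a normalized score); Bedrock's actual judge prompts and parsing are internal to the service, and the function names here are invented for illustration.

```python
# Hedged sketch of an LLM-as-a-judge context-relevance check.
# The prompt wording and parsing below are illustrative, not Bedrock's own.
import re


def build_context_relevance_prompt(question: str, passage: str) -> str:
    """Compose a judge prompt asking an LLM to rate how relevant a
    retrieved passage is to the user's question, on a 0-1 scale."""
    return (
        "You are an impartial judge. Rate how relevant the passage is "
        "to the question on a scale from 0.0 (irrelevant) to 1.0 "
        "(fully relevant). Answer with only the number.\n"
        f"Question: {question}\n"
        f"Passage: {passage}\n"
        "Score:"
    )


def parse_score(judge_reply: str) -> float:
    """Extract the first numeric token from the judge's reply and clamp
    it to the normalized [0, 1] range used by Bedrock's metrics."""
    match = re.search(r"\d+(?:\.\d+)?", judge_reply)
    if match is None:
        raise ValueError("judge reply contained no score")
    return max(0.0, min(1.0, float(match.group())))
```

The prompt would be sent to a judge model via a normal model-invocation call; only the prompt construction and score parsing are shown here.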
procedure: The Amazon Bedrock Knowledge Bases RAG evaluation workflow consists of six steps: (1) prepare a prompt dataset, optionally with ground truth; (2) convert the dataset to JSONL format; (3) store the file in an Amazon S3 bucket; (4) run the Amazon Bedrock Knowledge Bases RAG evaluation job (which integrates with Amazon Bedrock Guardrails); (5) generate an automated report with metrics; (6) analyze the report for system optimization.
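The first steps of the workflow above can be sketched as follows. This is a minimal sketch assuming boto3 and appropriate IAM permissions; the JSONL record shape and the `create_evaluation_job` parameters shown are abbreviated illustrations, not the full request schema, so consult the Bedrock documentation for the exact field names.

```python
# Hedged sketch: convert a prompt dataset to JSONL (step 2), upload it to
# S3 (step 3), and start an evaluation job (step 4). Record fields and job
# parameters are simplified placeholders.
import json


def to_jsonl(records):
    """Serialize prompt/ground-truth pairs, one JSON object per line.
    Ground truth is optional, matching reference-free evaluation."""
    lines = []
    for rec in records:
        entry = {"prompt": rec["prompt"]}
        if "ground_truth" in rec:
            entry["referenceResponse"] = rec["ground_truth"]
        lines.append(json.dumps(entry))
    return "\n".join(lines)


def run_evaluation(jsonl_body, bucket, key, job_name, role_arn,
                   eval_config, output_s3_uri):
    """Upload the dataset and start the job (not executed here; requires
    AWS credentials and a fully specified evaluationConfig)."""
    import boto3
    boto3.client("s3").put_object(Bucket=bucket, Key=key,
                                  Body=jsonl_body.encode())
    bedrock = boto3.client("bedrock")
    return bedrock.create_evaluation_job(
        jobName=job_name,
        roleArn=role_arn,
        evaluationConfig=eval_config,  # placeholder for the full config
        outputDataConfig={"s3Uri": output_s3_uri},
    )
```

Once the job completes, the generated report (steps 5-6) is written to the output S3 location for analysis.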
Reducing hallucinations in large language models with custom ... (aws.amazon.com, Nov 26, 2024; 2 facts)
procedure: The RAG-based chatbot solution architecture involves the following steps: (1) Data ingestion involving raw PDFs stored in an Amazon Simple Storage Service (Amazon S3) bucket synced as a data source with Amazon Bedrock Knowledge Bases; (2) The user asks a question; (3) The Amazon Bedrock agent creates a plan and identifies the need to use a knowledge base; (4) The agent sends a request to the knowledge base, which retrieves relevant data from the underlying vector database (Amazon OpenSearch Serverless); (5) The agent retrieves an answer through RAG.
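Steps 4-5 of the flow above can be sketched with the Bedrock runtime's retrieve-and-generate capability. The knowledge base ID and model ARN below are placeholders, and the configuration dict is a simplified (not exhaustive) view of the request shape.

```python
# Hedged sketch of querying a knowledge base backed by OpenSearch
# Serverless: the service retrieves relevant chunks from the vector store,
# then generates an answer grounded in them (RAG). Identifiers are
# placeholders.
def build_rag_request(question, kb_id, model_arn):
    """Build a simplified RetrieveAndGenerate request payload."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


def ask(question, kb_id, model_arn):
    """Send the request (not executed here; needs AWS credentials)."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve_and_generate(
        **build_rag_request(question, kb_id, model_arn)
    )
    return resp["output"]["text"]
```

In the article's architecture an Amazon Bedrock agent issues this retrieval on the user's behalf as part of its plan; the sketch shows the knowledge-base query in isolation.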
claim: The Amazon Bedrock Agents implementation for hallucination reduction incurs no separate charges for building resources with Amazon Bedrock Knowledge Bases or Amazon Bedrock Agents, but users are charged for embedding-model and text-model invocations on Amazon Bedrock, as well as for Amazon S3 and vector database usage.