Relations (1)

related 2.00 — strongly supported by 3 facts

These concepts are related as distinct prompt engineering techniques compared in performance studies [1] and categorized within the same prompt engineering protocol [2]. Furthermore, their comparative effectiveness is the central subject of research papers investigating whether zero-shot prompting can outperform chain-of-thought methods [3].

Facts (3)

Sources
LLM Hallucination Detection and Mitigation: State of the Art in 2026 (zylos.ai, Zylos; 1 fact)
Measurement: Chain-of-Verification (CoVe) improves F1 scores by 23% (from 0.39 to 0.48) and outperforms Zero-Shot, Few-Shot, and Chain-of-Thought methods, though it does not eliminate hallucinations in complex reasoning chains.
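The CoVe technique named in the fact above follows a draft-verify-revise loop. A minimal sketch, assuming a generic text-completion callable `llm`; the function name, prompt wording, and staging are illustrative, not taken from the cited study:

```python
from typing import Callable

def chain_of_verification(llm: Callable[[str], str], question: str) -> str:
    """Illustrative CoVe loop: draft, plan checks, verify, revise."""
    # 1. Draft a baseline answer.
    draft = llm(f"Answer: {question}")
    # 2. Plan verification questions about the draft.
    plan = llm(f"List questions that would verify this answer:\n{draft}")
    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft do not bias the checks.
    checks = "\n".join(
        llm(f"Answer briefly: {q}") for q in plan.splitlines() if q.strip()
    )
    # 4. Revise the draft in light of the verification findings.
    return llm(
        f"Question: {question}\nDraft: {draft}\n"
        f"Verification findings:\n{checks}\nRevised final answer:"
    )
```

The independent step 3 is the point of the method: verification questions are answered without the draft in context, which is what lets the final pass catch hallucinations the draft introduced.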
Survey and analysis of hallucinations in large language models (frontiersin.org, Frontiers; 1 fact)
Procedure: The prompt engineering protocol used in the study involves five categories: Zero-shot (basic instruction), Few-shot (2-3 input-output examples), Instruction (structured natural language), Chain-of-thought (step-by-step reasoning), and Vague/misleading (intentionally unclear).
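The five categories in the protocol above can be sketched as prompt templates. The template wording and function names below are hypothetical; only the category definitions come from the study:

```python
def zero_shot(question: str) -> str:
    """Basic instruction: the question alone, no examples."""
    return f"Answer the question.\nQ: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """2-3 input-output examples followed by the target question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def instruction(question: str) -> str:
    """Structured natural-language instruction."""
    return ("You are a careful assistant. Answer concisely and state "
            f"when you are unsure.\nQ: {question}\nA:")

def chain_of_thought(question: str) -> str:
    """Step-by-step reasoning cue appended to the question."""
    return f"Q: {question}\nA: Let's think step by step."

def vague_misleading(question: str) -> str:
    """Intentionally unclear phrasing, used as a stress test."""
    return f"Thoughts on this, maybe? {question}"
```

Framing the categories as pure string builders makes the contrast concrete: the only variable across conditions is the prompt text handed to the model.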
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, arXiv; 1 fact)
Reference: The paper 'Revisiting chain-of-thought prompting: zero-shot can be stronger than few-shot' is an arXiv preprint, identified as arXiv:2506.14641.