Claim
Large Language Models exhibit 'Prompt-Answer Alignment Bias,' in which the phrasing of a question pressures the model into generating an answer of the expected type (a date, a name, a number) even when it lacks reliable knowledge of the answer.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via serper)
Referenced by nodes (1)
- Large Language Models (concept)