Claim
Targeted knowledge integration during pretraining can reduce blind spots in Large Language Models (LLMs), though maintaining up-to-date domain coverage remains a challenge (Feng et al., 2024).
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (3)
- Large Language Models concept
- Pre-training concept
- knowledge integration concept