Procedure
The study's inference pipeline was built on the Hugging Face Transformers text-generation pipeline and was executed in several environments: Google Colab Pro (T4/A100 GPUs), Kaggle GPU notebooks, and a local server with 8 × A6000 GPUs (48 GB VRAM each).
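A minimal sketch of such a setup, assuming a Transformers text-generation pipeline; the model name (`gpt2`) and generation parameters are placeholders, since the study's actual model and settings are not given here:

```python
# Sketch of an inference setup using the Hugging Face Transformers
# text-generation pipeline. Runs on whichever accelerator is available
# (e.g. a Colab T4/A100 or a local GPU), falling back to CPU.
import torch
from transformers import pipeline

def pick_device() -> int:
    """Return CUDA device index 0 if a GPU is available, else -1 (CPU)."""
    return 0 if torch.cuda.is_available() else -1

# "gpt2" is a placeholder model; substitute the model under study.
generator = pipeline(
    "text-generation",
    model="gpt2",
    device=pick_device(),
)

outputs = generator(
    "Large language models sometimes hallucinate because",
    max_new_tokens=50,
    do_sample=False,  # greedy decoding for reproducible output
)
print(outputs[0]["generated_text"])
```

On a multi-GPU server, the same pipeline can instead be sharded across devices by passing `device_map="auto"` (with `accelerate` installed) rather than a single device index.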
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)