Reference
Yukun Huang, Yanda Chen, Zhou Yu, and Kathleen McKeown published 'In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models' as an arXiv preprint (arXiv:2212.10670) in 2022.
Authors
- Yukun Huang
- Yanda Chen
- Zhou Yu
- Kathleen McKeown
Sources
- Unlocking the Potential of Generative AI through Neuro-Symbolic ... (arxiv.org, via serper)
Referenced by nodes (2)
- ArXiv (concept)
- pre-trained language models (concept)