reference
Huang et al. introduced in-context learning distillation, a method that transfers the few-shot in-context learning ability of large pre-trained language models to smaller student models.
