Claim
Yang et al. demonstrated that knowledge graph-enhanced pre-trained language models (KGPLMs), which inject a knowledge encoder module into a pre-trained language model, consistently exhibit longer running times than vanilla PLMs such as BERT across the pre-training, fine-tuning, and inference stages.
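The overhead follows directly from the architecture: the injected knowledge encoder adds parameters and matrix multiplications to every forward pass, so every stage that runs the model pays the extra cost. Below is a minimal PyTorch sketch of such a module, loosely modeled on ERNIE-style token-entity fusion; the class, names, and dimensions are hypothetical illustrations, not Yang et al.'s implementation.

```python
import torch
import torch.nn as nn

class KnowledgeFusionLayer(nn.Module):
    # Hypothetical knowledge encoder: fuses token hidden states with
    # aligned entity embeddings (an ERNIE-style aggregator is assumed
    # here; Yang et al. survey several variants).
    def __init__(self, hidden_size: int, entity_size: int):
        super().__init__()
        self.token_proj = nn.Linear(hidden_size, hidden_size)
        self.entity_proj = nn.Linear(entity_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, token_states: torch.Tensor,
                entity_states: torch.Tensor) -> torch.Tensor:
        # Three extra matmuls per layer relative to the vanilla model;
        # this is the source of the added pre-training, fine-tuning,
        # and inference time.
        fused = torch.relu(self.token_proj(token_states)
                           + self.entity_proj(entity_states))
        return self.out_proj(fused)

# Toy forward pass: the fused path does strictly more work than the
# vanilla path on the same batch (shapes are illustrative).
batch, seq, hidden, ent = 8, 128, 768, 100
tokens = torch.randn(batch, seq, hidden)
entities = torch.randn(batch, seq, ent)  # per-token entity embeddings (hypothetical alignment)
layer = KnowledgeFusionLayer(hidden, ent)
print(layer(tokens, entities).shape)  # torch.Size([8, 128, 768])
```

Because the fusion layer executes in addition to, not instead of, the base model's own layers, the vanilla model's running time is a strict lower bound for the KGPLM on identical inputs, which is consistent with the claimed slowdown at all three stages.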
