claim
Yang et al. demonstrated that knowledge graph-enhanced pre-trained language models (KGPLMs), which inject a knowledge encoder module into a pre-trained language model, consistently exhibit longer running times than vanilla PLMs such as BERT across the pre-training, fine-tuning, and inference stages.
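A minimal PyTorch sketch of where that overhead comes from, assuming an ERNIE-style fusion design: `KnowledgeFusionLayer`, `KGEnhancedEncoder`, and all dimensions here are hypothetical stand-ins, not the architecture or benchmark Yang et al. actually measured. The KG-enhanced model runs the same text encoder and then an extra knowledge module, so every forward pass does strictly more work than the vanilla one.

```python
import time

import torch
import torch.nn as nn


class VanillaEncoder(nn.Module):
    """Stand-in for a plain BERT-like encoder stack."""

    def __init__(self, d_model=768, n_heads=12, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):
        return self.encoder(x)


class KnowledgeFusionLayer(nn.Module):
    """Hypothetical ERNIE-style aggregator: fuses token hidden states
    with aligned KG entity embeddings (illustrative only)."""

    def __init__(self, d_model=768, d_ent=100):
        super().__init__()
        self.ent_proj = nn.Linear(d_ent, d_model)    # project entities into text space
        self.fuse = nn.Linear(2 * d_model, d_model)  # joint token + entity transform

    def forward(self, h, ent):
        e = self.ent_proj(ent)
        return torch.tanh(self.fuse(torch.cat([h, e], dim=-1)))


class KGEnhancedEncoder(nn.Module):
    """Same text encoder plus an injected knowledge-encoder module;
    the extra module is where the added running time comes from."""

    def __init__(self, d_model=768, n_heads=12, n_layers=4, d_ent=100):
        super().__init__()
        self.text_encoder = VanillaEncoder(d_model, n_heads, n_layers)
        self.knowledge = KnowledgeFusionLayer(d_model, d_ent)

    def forward(self, x, ent):
        h = self.text_encoder(x)
        return self.knowledge(h, ent)


def bench(model, *inputs, reps=5):
    """Average wall-clock seconds per forward pass."""
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(reps):
            model(*inputs)
        return (time.perf_counter() - start) / reps


x = torch.randn(2, 128, 768)    # toy batch of token hidden states
ent = torch.randn(2, 128, 100)  # toy aligned entity embeddings (hypothetical)
print(f"vanilla:     {bench(VanillaEncoder(), x):.4f} s")
print(f"kg-enhanced: {bench(KGEnhancedEncoder(), x, ent):.4f} s")
```

Because the knowledge module sits on the forward path, the gap appears in every stage that runs the model, which is consistent with the claim covering pre-training, fine-tuning, and inference alike.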
Authors
- Yang et al.
Sources
- Combining Knowledge Graphs and Large Language Models (arXiv, arxiv.org)
Referenced by nodes (3)
- Large Language Models concept
- BERT concept
- Knowledge graph-enhanced large language models concept