Relations (1)
related (score 0.20) — supported by 2 facts
Large Language Models (LLMs) and Pre-trained Language Models (PLMs) are related as foundational architectures in natural language processing: [1] discusses treating them as plug-and-play components, while [2] compares their performance on specific tasks such as Named Entity Recognition.
Facts (2)
Sources
Combining large language models with enterprise knowledge graphs (frontiersin.org), 2 facts
claim: Prompting with Large Language Models (like GPTs) can underperform in Named Entity Recognition compared to fine-tuned smaller Pre-trained Language Models (like BERT derivatives), especially when more training data is available (Gutierrez et al., 2022; Keloth et al., 2024; Pecher et al., 2024; Törnberg, 2024).
perspective: To adapt to evolving Large Language Models (LLMs), Pre-trained Language Models (PLMs) should be treated as plug-and-play components to ensure versatility and longevity.
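A minimal sketch of what the plug-and-play perspective could look like in practice: a fine-tuned PLM backend and a prompted LLM backend hidden behind one common interface, so the choice between them (the trade-off in the claim above) becomes a swappable detail. The class names, the `dslim/bert-base-NER` checkpoint, the prompt wording, and the LLM `client` object are illustrative assumptions, not taken from the source.

```python
import json
from typing import Protocol


class EntityTagger(Protocol):
    """Common interface: any entity-tagging backend plugs in here."""
    def tag(self, text: str) -> list[dict]: ...


class PlmTagger:
    """Fine-tuned PLM backend (a BERT derivative served via Hugging Face)."""
    def __init__(self, checkpoint: str = "dslim/bert-base-NER"):  # assumed public checkpoint
        from transformers import pipeline
        self._pipe = pipeline("ner", model=checkpoint, aggregation_strategy="simple")

    def tag(self, text: str) -> list[dict]:
        return self._pipe(text)


class LlmTagger:
    """Prompted LLM backend; `client` is a hypothetical completion client."""
    def __init__(self, client):
        self._client = client

    def tag(self, text: str) -> list[dict]:
        reply = self._client.complete(
            "List the named entities in the sentence below as a JSON array "
            f'of {{"text": ..., "type": ...}} objects.\nSentence: {text}'
        )
        return json.loads(reply)  # assumes the LLM returned valid JSON


def extract_entities(tagger: EntityTagger, text: str) -> list[dict]:
    # Downstream code depends only on the Protocol, so backends can be
    # swapped in and out as models evolve.
    return tagger.tag(text)
```

In this sketch, replacing `PlmTagger()` with `LlmTagger(client)` requires no change in calling code, which is the versatility and longevity the perspective argues for; the claim above suggests the fine-tuned PLM backend may still be the stronger choice for NER when ample training data exists.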