Relations (1)

related (0.70) — strongly supported by 7 facts

BERT is categorized as a foundational large language model built on the transformer architecture, as established in [1], [2], and [3]. BERT also serves as a benchmark or component in various LLM-related research: in KG-related tasks [4], and as a comparison point for task performance in specific NLP applications [5] and for computational efficiency [6].

Facts (7)

Sources
A survey on augmenting knowledge graphs (KGs) with large ... (Springer, link.springer.com; 2 facts)
Claim: Models such as KEPLER and Pretrain-KGE use BERT-like LLMs to encode textual descriptions of entities and relationships into vector representations, which are then fine-tuned on KG-related tasks.
Claim: Vaswani et al. introduced transformer models in 2017, which serve as the foundation for modern LLMs such as BERT and GPT.
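The KEPLER/Pretrain-KGE pattern in the claim above (encode entity and relation descriptions into vectors, then score triples) can be sketched as follows. This is a toy illustration, not either model's implementation: a hashed bag-of-words encoder stands in for the BERT-like encoder, a TransE-style distance stands in for the learned scoring function, and the fine-tuning step is omitted entirely.

```python
import zlib

import numpy as np

DIM = 256  # embedding size of the toy encoder

def encode(text):
    # Toy stand-in for a BERT-like encoder: hash each token into a fixed-size
    # bag-of-words vector. A real KEPLER-style model would instead use the
    # pooled transformer output for the description, fine-tuned on KG tasks.
    v = np.zeros(DIM)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def transe_score(head_desc, rel_desc, tail_desc):
    # TransE-style plausibility: head + relation should land near tail,
    # so a smaller distance (score closer to 0) means a more plausible triple.
    h, r, t = encode(head_desc), encode(rel_desc), encode(tail_desc)
    return -np.linalg.norm(h + r - t)

# Entity and relation *descriptions* are the textual input the claim refers to.
plausible = transe_score(
    "BERT is a transformer-based language model",
    "instance of",
    "transformer-based language model",
)
implausible = transe_score(
    "BERT is a transformer-based language model",
    "instance of",
    "a city in western France",
)
print(plausible > implausible)
```

Because the toy encoder only counts shared tokens, the ranking here reflects lexical overlap; the point of using a BERT-like encoder in the cited models is to get the same triple-scoring setup with semantically meaningful vectors.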
Combining Knowledge Graphs and Large Language Models (arXiv, arxiv.org; 2 facts)
Claim: Yang et al. demonstrated that knowledge graph-enhanced pre-trained language models (KGPLMs), which inject a knowledge encoder module into pre-trained language models, consistently exhibit longer running times than vanilla LLMs like BERT across pre-training, fine-tuning, and inference stages.
Claim: Examples of large language models include Google’s BERT, Google’s T5, and OpenAI’s GPT series.
Medical Hallucination in Foundation Models and Their ... (medRxiv, medrxiv.org; 1 fact)
Claim: Pretrained large language models such as GPT-3, GPT-4, PaLM, LLaMA, and BERT have demonstrated advancements due to the extensive datasets used in their training.
Combining large language models with enterprise knowledge graphs (Frontiers, frontiersin.org; 1 fact)
Claim: Prompting with large language models (like GPTs) can underperform fine-tuned smaller pre-trained language models (like BERT derivatives) in named entity recognition, especially when more training data is available (Gutierrez et al., 2022; Keloth et al., 2024; Pecher et al., 2024; Törnberg, 2024).
Building Trustworthy NeuroSymbolic AI Systems (arXiv, arxiv.org; 1 fact)
Claim: Large language models (LLMs) are successors to foundational language models like BERT (Bidirectional Encoder Representations from Transformers) and represent a combination of feedforward neural networks and transformers.
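The "feedforward neural networks and transformers" combination in the last claim is the standard transformer encoder block: a self-attention sublayer followed by a position-wise feedforward sublayer. A minimal NumPy sketch, assuming a single attention head, random weights, and no layer normalization (all names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over the sequence X.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return weights @ V

def feed_forward(X, W1, b1, W2, b2):
    # Position-wise feedforward sublayer with a ReLU nonlinearity.
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2

def encoder_layer(X, p):
    # One encoder block: attention sublayer, then feedforward sublayer,
    # each with a residual connection (layer normalization omitted for brevity).
    X = X + self_attention(X, p["Wq"], p["Wk"], p["Wv"])
    X = X + feed_forward(X, p["W1"], p["b1"], p["W2"], p["b2"])
    return X

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 8, 32, 5
p = {
    "Wq": rng.normal(size=(d_model, d_model)),
    "Wk": rng.normal(size=(d_model, d_model)),
    "Wv": rng.normal(size=(d_model, d_model)),
    "W1": rng.normal(size=(d_model, d_ff)), "b1": np.zeros(d_ff),
    "W2": rng.normal(size=(d_ff, d_model)), "b2": np.zeros(d_model),
}
X = rng.normal(size=(seq_len, d_model))  # 5 token embeddings
out = encoder_layer(X, p)
print(out.shape)  # (5, 8): one contextualized vector per token
```

Encoder-only models like BERT stack layers of exactly this shape; the "feedforward" part of the claim refers to the second sublayer inside every block.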