Relations (1)

related 0.40 — supporting 4 facts

GPT-3 is explicitly categorized as a specific instance of Large Language Models in [1], [2], and [3], and it is documented as being susceptible to the same adversarial prompting vulnerabilities that affect the broader class of models in [4].

Facts (4)

Sources
Medical Hallucination in Foundation Models and Their ... medrxiv.org medRxiv 1 fact
claimPretrained Large Language Models such as GPT-3, GPT-4, PaLM, LLaMA, and BERT have demonstrated advancements due to the extensive datasets used in their training.
Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org arXiv 1 fact
claimPrompt injection or adversarial prompting can override the attention of Large Language Models to previous instructions and force them to act on the current prompt, an issue that has affected GPT-3 (Branch et al. 2022).
Combining large language models with enterprise knowledge graphs frontiersin.org Frontiers 1 fact
claimLarge Language Models, such as GPT-3, struggle with specific information extraction tasks, including managing sentences that do not contain named entities or relations (Gutierrez et al., 2022).
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Springer 1 fact
claimLarge language models (LLMs) are defined as models containing between ten billion and one hundred billion parameters, with examples including GPT-3 and PaLM.