Relations (1)
cross_type (score 0.30) — supported by 3 facts
Large Language Models are related to Wikipedia in three ways: it serves as a primary data source for their pretraining [1], as a knowledge base for enhancing their reasoning through frameworks like ReACT [2], and as a structured knowledge graph used for specialized training in models like KnowLLMs [3].
Facts (3)
Sources
Building Trustworthy NeuroSymbolic AI Systems (arXiv, arxiv.org) — 2 facts
procedure — KnowLLMs (LLMs over KGs) train Large Language Models using knowledge graphs such as CommonSense, Wikipedia, and UMLS, with a training objective redefined as an autoregressive function coupled with pruning based on state-of-the-art KG embedding methods.
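The core idea behind an autoregressive objective over knowledge graphs can be sketched as follows: each (head, relation, tail) triple is linearized into a flat token sequence so a standard next-token objective can run over it, with an embedding-based score used to prune implausible triples first. The triples, tag tokens, and the scoring stub below are illustrative assumptions, not the actual KnowLLMs implementation.

```python
# Toy triples in the spirit of the cited KGs (UMLS-style, Wikipedia-style).
triples = [
    ("aspirin", "treats", "headache"),
    ("Paris", "capital_of", "France"),
]

def kge_score(h, r, t):
    # Stand-in for a KG-embedding plausibility score (e.g. a TransE-like
    # distance turned into a score); a real system uses trained embeddings.
    return 1.0  # placeholder: keep everything

def linearize(h, r, t):
    # Flatten a triple into a token sequence; next-token prediction
    # (the autoregressive objective) then applies unchanged.
    return ["<h>", h, "<r>", r, "<t>", t, "<eos>"]

# Prune low-scoring triples, then build training sequences.
pruned = [tr for tr in triples if kge_score(*tr) > 0.5]
sequences = [linearize(*tr) for tr in pruned]
```

The design point is that no architectural change is needed: pruning happens in data preparation, and the model sees ordinary token sequences.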
claim — The ReACT framework employs Wikipedia to address spurious generation and explanations in Large Language Models, though it relies on a prompting method rather than a well-grounded domain-specific approach.
Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com) — 1 fact
claim — Large language models lack a concept of source reliability because standard pretraining objectives treat all training data sources, such as Wikipedia articles, peer-reviewed papers, and social media posts, with equal weight per token.
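The "equal weight per token" point can be made concrete with a toy cross-entropy computation: the standard objective averages per-token losses uniformly, so a token from a social-media post counts exactly as much as one from a peer-reviewed paper. The probabilities and reliability weights below are illustrative assumptions, not values from the cited article.

```python
import math

# Toy corpus: each token has a model probability and a source tag.
tokens = [
    {"p": 0.9, "source": "wikipedia"},
    {"p": 0.6, "source": "peer_review"},
    {"p": 0.2, "source": "social_media"},
]

# Standard pretraining objective: uniform mean of per-token
# negative log-likelihoods, blind to where each token came from.
uniform_loss = sum(-math.log(t["p"]) for t in tokens) / len(tokens)

# Hypothetical reliability-weighted variant (weights invented for
# illustration): down-weight less reliable sources.
weights = {"wikipedia": 1.0, "peer_review": 1.2, "social_media": 0.3}
w = [weights[t["source"]] for t in tokens]
weighted_loss = (sum(wi * -math.log(t["p"]) for wi, t in zip(w, tokens))
                 / sum(w))
```

Here the noisy social-media token dominates the uniform loss, while the weighted variant discounts it, which is precisely the distinction the standard objective cannot make.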