Relations (1)

related 9.00 — strongly supporting 9 facts


Facts (9)

Sources
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org arXiv 3 facts
claim: Integrating symbolic knowledge into neural network loss functions reinforces the connection between neural learning and symbolic reasoning in the contexts of model distillation, fine-tuning, pre-training, and transfer learning.
claim: Transfer learning, which includes pre-training, fine-tuning, and few-shot learning, allows AI models to efficiently adapt knowledge from one task to another.
claim: Approaches such as model distillation, fine-tuning, pre-training, and transfer learning align with the neuro-symbolic compiled paradigm by integrating symbolic constraints into the neural network learning process.
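The compiled-paradigm idea in the facts above, folding a symbolic constraint into the loss, can be sketched in plain NumPy. Everything here is illustrative: the implication-rule format, the function names, and the penalty shape are assumptions, not the paper's actual formulation.

```python
import numpy as np

def symbolic_penalty(probs, implies):
    """Penalty for violating implication rules of the (hypothetical) form
    'class a implies class b': each rule (a, b) contributes max(0, p[a] - p[b]),
    so the penalty is zero whenever the implied class is at least as likely."""
    return sum(max(0.0, probs[a] - probs[b]) for a, b in implies)

def combined_loss(probs, target, implies, lam=0.5):
    """Cross-entropy on the target class plus a weighted symbolic penalty.
    `lam` trades off data fit against consistency with the symbolic rules."""
    ce = -np.log(probs[target] + 1e-12)
    return ce + lam * symbolic_penalty(probs, implies)
```

For example, with `probs = [0.7, 0.2, 0.1]` the rule `(0, 1)` is violated by 0.5, so the penalty pushes training toward predictions that respect the implication.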
A Survey on the Theory and Mechanism of Large Language Models arxiv.org arXiv 1 fact
claim: The training stage of an LLM pipeline consists of two processes: pre-training, which forges foundational capabilities, and fine-tuning, which adapts the model.
Do LLMs Build World Representations? Probing Through ... neurips.cc NeurIPS 1 fact
claim: Fine-tuning and advanced pre-training strengthen the tendency of large language models to maintain goal-oriented abstractions during decoding, prioritizing task completion over recovery of the world's state and dynamics.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... github.com GitHub 1 fact
reference: Research on integrating Large Language Models with Knowledge Graphs is categorized into several distinct approaches: Pre-training, Fine-Tuning, KG-Augmented Prompting, Retrieval-Augmented Generation (RAG), Graph RAG, KG RAG, Hybrid RAG, Spatial RAG, Offline/Online KG Guidelines, Agent-based KG Guidelines, KG-Driven Filtering and Validation, Visual Question Answering (VQA), Multi-Document QA, Multi-Hop QA, Conversational QA, Temporal QA, Multilingual QA, Index-based Optimization, and Natural Language to Graph Query Language (NL2GQL).
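Of the approaches catalogued above, KG-Augmented Prompting is simple enough to sketch: facts retrieved from a knowledge graph are prepended to the model's prompt as context. The triples, the substring-match retrieval rule, and the prompt template below are all invented for illustration, not the repository's implementation.

```python
# Toy triple store (subject, predicate, object); contents are made up.
triples = [
    ("GPT-4", "developed_by", "OpenAI"),
    ("LLaMA-3", "developed_by", "Meta"),
    ("GPT-4", "type", "large language model"),
]

def retrieve(question, kg):
    """Naive retrieval: keep triples whose subject string appears in the
    question (case-insensitive). Real systems use entity linking instead."""
    return [t for t in kg if t[0].lower() in question.lower()]

def build_prompt(question, kg):
    """Format retrieved triples as context lines ahead of the question."""
    context = "\n".join(f"{s} {p} {o}" for s, p, o in retrieve(question, kg))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who developed GPT-4?", triples)
```

The resulting prompt grounds the model's answer in the retrieved triples, which is the filtering-and-validation rationale several of the listed approaches share.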
Medical Hallucination in Foundation Models and Their Impact on ... medrxiv.org medRxiv 1 fact
perspective: Hallucination resistance in specialized medical contexts emerges from sophisticated reasoning capabilities, internal consistency mechanisms, and broad world knowledge developed during large-scale pretraining, rather than from domain-specific fine-tuning.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... arxiv.org arXiv 1 fact
procedure: The training process for Large Language Models (LLMs) generally consists of two stages: pre-training and fine-tuning.
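The two-stage procedure this fact describes can be mimicked on a toy linear model: a broad pre-training pass forges a general solution, and a short fine-tuning pass adapts it to a nearby task. The data, targets, and hyperparameters are synthetic stand-ins, not any paper's actual setup.

```python
import numpy as np

def sgd_fit(w, X, y, lr=0.05, epochs=100):
    """One training stage: plain least-squares SGD over (X, y)."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * (w @ xi - yi) * xi
    return w

rng = np.random.default_rng(0)

# Stage 1: pre-training on broad data forges the foundational weights.
X_pre = rng.normal(size=(100, 3))
y_pre = X_pre @ np.array([1.0, -2.0, 0.5])
w = sgd_fit(np.zeros(3), X_pre, y_pre)

# Stage 2: fine-tuning adapts the pre-trained weights to a shifted task,
# reusing w instead of restarting from zero.
X_ft = rng.normal(size=(20, 3))
y_ft = X_ft @ np.array([1.2, -2.0, 0.5])
w = sgd_fit(w, X_ft, y_ft, epochs=30)
```

Starting stage 2 from the pre-trained `w` rather than from scratch is the whole point: the model only has to cover the gap between the old and new targets.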
The construction and refined extraction techniques of knowledge ... nature.com Nature 1 fact
claim: Large-scale pre-trained Large Language Models (LLMs) such as GPT-4 and LLaMA-3 combine broad pretraining with task-specific fine-tuning to achieve cross-task generalization.