contrastive learning
Facts (10)
Sources
LLM Hallucinations: Causes, Consequences, Prevention - LLMs llmmodels.org May 10, 2024 4 facts
procedure: Contrastive learning as a mitigation strategy for large language model hallucinations involves training the models to distinguish between correct and incorrect information.
claim: Contrastive learning is an approach to mitigating LLM hallucinations by training large language models to distinguish between correct and incorrect information (a minimal loss sketch follows this source's facts).
claim: Strategies to mitigate hallucinations in large language models include using high-quality training data, employing contrastive learning, implementing human oversight, and utilizing uncertainty estimation.
claim: Ongoing research areas to address LLM hallucinations include contrastive learning, knowledge grounding, consistency modeling, and uncertainty estimation.
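The objective these facts describe can be made concrete with a small triplet-style loss. This is a minimal sketch, not the cited article's method: it assumes an upstream encoder that maps a prompt, a factually correct statement, and an incorrect statement to fixed-size embeddings, and the function name and margin value are illustrative.

```python
import torch
import torch.nn.functional as F

def hallucination_contrastive_loss(anchor, correct, incorrect, margin=0.5):
    """Triplet-style contrastive loss: pull the anchor (e.g., a question
    embedding) toward a factually correct answer embedding and push it
    away from an incorrect one. All inputs are (batch, dim) tensors."""
    anchor = F.normalize(anchor, dim=-1)
    correct = F.normalize(correct, dim=-1)
    incorrect = F.normalize(incorrect, dim=-1)
    pos_sim = (anchor * correct).sum(dim=-1)    # cosine similarity to correct answers
    neg_sim = (anchor * incorrect).sum(dim=-1)  # cosine similarity to incorrect answers
    # Hinge on the gap: zero loss once correct beats incorrect by at least `margin`.
    return F.relu(margin - pos_sim + neg_sim).mean()

# Toy usage with random tensors standing in for encoder outputs.
a, p, n = torch.randn(8, 768), torch.randn(8, 768), torch.randn(8, 768)
print(hallucination_contrastive_loss(a, p, n))  # scalar loss tensor
```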
Combining large language models with enterprise knowledge graphs frontiersin.org Aug 26, 2024 3 facts
claim: Contrastive learning methods can mimic and learn the principles of symbolic knowledge graphs and disambiguation systems, enabling a consistent and dynamic deep-learning approach to knowledge graph expansion.
procedure: The process for enriching the life sciences-oriented Sensigrafo knowledge graph involves the following steps: (1) marking entities in PubMed documents using the Cogito disambiguator, (2) generating possible relations with a distant supervision module grounded on Sensigrafo, (3) transforming documents into contextualized embeddings using a field-specific pre-trained language model such as BioBERT, (4) performing adapter-based fine-tuning for relation extraction using contrastive learning, and (5) ranking predictions by model confidence.
claim: Multi-instance learning is not data-efficient, which has motivated recent extensions into contrastive learning setups that aim to cluster sentences with the same relational triples and separate those with different triples in the semantic embedding space (Chen et al., 2021; Li et al., 2022a); a rough sketch of this setup follows below.
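As a rough illustration of the contrastive setup in the last fact, the sketch below implements a supervised contrastive loss that clusters sentence embeddings sharing a relational triple and separates the rest. It is an assumption-laden reconstruction, not the exact formulation of Chen et al. (2021) or Li et al. (2022a); the function name, temperature, and the integer triple-id encoding are illustrative.

```python
import torch
import torch.nn.functional as F

def relation_supcon_loss(embeddings, triple_ids, temperature=0.1):
    """Supervised contrastive loss over sentence embeddings: sentences that
    express the same relational triple (same triple_id) are pulled together,
    and sentences with different triples are pushed apart.
    embeddings: (batch, dim) contextualized sentence embeddings (e.g., from BioBERT).
    triple_ids: (batch,) integer id of the relational triple each sentence expresses."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                     # pairwise scaled cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))   # never contrast a sentence with itself
    positives = (triple_ids.unsqueeze(0) == triple_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)   # avoid -inf * 0 = nan on the diagonal
    # Average log-probability over each anchor's positives (sentences sharing its triple).
    per_anchor = (log_prob * positives).sum(1) / positives.sum(1).clamp(min=1)
    return -per_anchor[positives.any(1)].mean()

# Toy usage: eight sentence embeddings covering three distinct triples.
emb = torch.randn(8, 768)
ids = torch.tensor([0, 0, 1, 1, 1, 2, 2, 0])
print(relation_supcon_loss(emb, ids))  # scalar loss tensor
```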
The construction and refined extraction techniques of knowledge ... nature.com Feb 10, 2026 1 fact
procedure: The training procedure for foundational representations in the framework involves three steps: (1) training on non-sensitive data with contrastive learning to link equipment traits with effectiveness metrics, (2) introducing rule verification that incorporates constraint-violation cases to guide convergence, and (3) using a scenario engine to generate mixed training samples while applying masking techniques to preserve task focus.
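Step (1) is described only at a high level, so the two-tower sketch below shows one plausible reading: aligning equipment-trait vectors with their paired effectiveness-metric vectors via a symmetric InfoNCE loss. Every class name, dimension, and layer size here is an assumption for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TraitMetricAligner(nn.Module):
    """Two-tower sketch of step (1): contrastively link equipment-trait
    vectors with effectiveness-metric vectors so that matching pairs align
    in a shared embedding space. Dimensions are illustrative assumptions."""
    def __init__(self, trait_dim=32, metric_dim=8, embed_dim=64):
        super().__init__()
        self.trait_enc = nn.Sequential(nn.Linear(trait_dim, embed_dim), nn.ReLU(),
                                       nn.Linear(embed_dim, embed_dim))
        self.metric_enc = nn.Sequential(nn.Linear(metric_dim, embed_dim), nn.ReLU(),
                                        nn.Linear(embed_dim, embed_dim))

    def forward(self, traits, metrics, temperature=0.07):
        t = F.normalize(self.trait_enc(traits), dim=-1)
        m = F.normalize(self.metric_enc(metrics), dim=-1)
        logits = t @ m.t() / temperature          # (batch, batch) similarity matrix
        targets = torch.arange(len(t), device=t.device)
        # Symmetric InfoNCE: row i's positive is column i (the paired metric).
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: 16 trait/metric pairs with random features.
model = TraitMetricAligner()
print(model(torch.randn(16, 32), torch.randn(16, 8)))  # scalar loss tensor
```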
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org 1 fact
reference: The SimKGC model (Wang L. et al., 2022) enhances entity representations by employing contrastive learning with in-batch, pre-batch, and self-negatives.
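SimKGC's three negative pools can be sketched as extra columns appended to one logits matrix. The snippet below follows the paper's general recipe (cosine scores, an additive margin on the positive, temperature scaling), but the function signature and hyperparameter values are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def simkgc_style_loss(hr_emb, tail_emb, pre_batch_emb=None, head_emb=None,
                      temperature=0.05, margin=0.02):
    """InfoNCE in the spirit of SimKGC: score a (head, relation) embedding
    against candidate tails. Negatives come from three pools:
      in-batch  - other tails in the same batch (off-diagonal columns),
      pre-batch - tail embeddings cached from recent batches,
      self      - the head entity itself as a hard negative."""
    hr = F.normalize(hr_emb, dim=-1)
    t = F.normalize(tail_emb, dim=-1)
    logits = hr @ t.t()                                   # in-batch scores: (B, B)
    logits = logits - margin * torch.eye(len(hr), device=hr.device)  # margin on positives
    if pre_batch_emb is not None:                         # pre-batch negatives: (B, P)
        logits = torch.cat([logits, hr @ F.normalize(pre_batch_emb, dim=-1).t()], dim=1)
    if head_emb is not None:                              # self-negative column: (B, 1)
        self_neg = (hr * F.normalize(head_emb, dim=-1)).sum(-1, keepdim=True)
        logits = torch.cat([logits, self_neg], dim=1)
    targets = torch.arange(len(hr), device=hr.device)     # positives sit on the diagonal
    return F.cross_entropy(logits / temperature, targets)

# Toy usage: batch of 4 triples plus 6 cached pre-batch tails.
B, D = 4, 768
print(simkgc_style_loss(torch.randn(B, D), torch.randn(B, D),
                        pre_batch_emb=torch.randn(6, D),
                        head_emb=torch.randn(B, D)))  # scalar loss tensor
```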
A Survey on the Theory and Mechanism of Large Language Models arxiv.org Mar 12, 2026 1 fact
reference: The paper 'ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond' (arXiv:2303.06562) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' in the context of contrastive learning.