Concept

Knowledge Graph Completion

Also known as: Knowledge graph completion models, knowledge graph completion methods

Facts (53)

Sources
Practices, opportunities and challenges in the fusion of knowledge ... (Frontiers, frontiersin.org), 21 facts

reference: The paper 'Making large language models perform better in knowledge graph completion' was published as an arXiv preprint in 2023.
reference: Wang et al. (2022) developed 'SimKGC', a simple contrastive knowledge graph completion method utilizing pre-trained language models.
reference: Liu et al. (2024) proposed fine-tuning generative large language models with discrimination instructions for knowledge graph completion, in a paper published at the International Semantic Web Conference.
claim: Large Language Models (LLMs) intrinsically blend memorized knowledge with inferred predictions during knowledge graph completion, making it difficult to distinguish between the two.
claim: Prompt-based methods for knowledge graph completion, such as ProLINK and TAGREAL, cannot fully resolve the fundamental ambiguity between factual recall and genuine inference, a significant limitation in healthcare applications where provenance is critical.
claim: Embedding-based metrics for knowledge graph completion can assign high confidence scores to factually incorrect triples, such as (Einstein, won, Nobel Prize in Chemistry).
claim: Prompt engineering methods for knowledge graph completion, such as ProLINK and TAGREAL, suffer from information loss because they must split complex entity names into subword fragments.
reference: B. Kim, T. Hong, Y. Ko, and J. Seo published 'Multi-task learning for knowledge graph completion with pre-trained language models' in the Proceedings of the 28th International Conference on Computational Linguistics in 2020.
reference: Xie et al. (2022) presented a generative transformer-based approach for knowledge graph completion titled 'From discrimination to generation: Knowledge graph completion with generative transformer'.
reference: The GenKGC model (Xie et al., 2022) leverages pre-trained language models to convert the knowledge graph completion task into a sequence-to-sequence generation task.
reference: Rule-based systems like AMIE (Galárraga et al., 2013) can identify errors in knowledge graph completion through predefined constraints but struggle with open-domain scenarios where rules are incomplete.
reference: Mou et al. (2024) proposed a self-reflective model in which GPT-4 reflects on errors made in a given example and generates linguistic feedback to guide the model in avoiding similar mistakes during knowledge graph completion.
reference: Wang et al. (2021) proposed 'Structure-augmented text representation learning' for efficient knowledge graph completion.
reference: The MEM-KGC model (Choi et al., 2021) performs knowledge graph completion by masking the tail entity and using the head entity and relation as context to predict the missing tail entity, a process similar to Masked Language Modeling (MLM).
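The MEM-KGC-style masked query described above can be sketched as follows. The exact input template, separator tokens, and example triple are illustrative assumptions, not the paper's verbatim format:

```python
def masked_tail_query(head: str, relation: str, mask_token: str = "[MASK]") -> str:
    """Build an MLM-style input for tail prediction: the head entity and
    relation are kept intact as context, and the missing tail entity is
    replaced by a single mask token. In MEM-KGC the mask is predicted over
    an entity vocabulary, which avoids splitting entity names into subwords."""
    return f"[CLS] {head} [SEP] {relation} [SEP] {mask_token}"

# Hypothetical incomplete triple: (Marie Curie, award received, ?)
query = masked_tail_query("Marie Curie", "award received")
print(query)  # [CLS] Marie Curie [SEP] award received [SEP] [MASK]
```

An encoder would then score every entity in the vocabulary as a candidate filler for the mask position, keeping whole entity names intact as single prediction targets.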
claim: Multi-task learning approaches for knowledge graph completion, such as MT-DNN and LP-BERT, fail to resolve the fundamental scalability gap in large-scale knowledge graphs, where latency grows polynomially with graph density.
reference: KC-GenRe, proposed by Wang Y. et al. in 2024, transforms the knowledge graph completion re-ranking task into a candidate ranking problem solved by a generative LLM and addresses missing issues using a knowledge-enhanced constraint reasoning method.
reference: Wang et al. (2024) introduced 'KC-GenRe', a knowledge-constrained generative re-ranking method based on large language models for knowledge graph completion.
claim: Existing evaluation metrics for knowledge graph completion often prioritize surface-level correctness over logical consistency.
reference: The paper 'MEM-KGC: Masked entity model for knowledge graph completion with pre-trained language model' (IEEE Access 9, 132025–132032) introduces a masked entity model approach for knowledge graph completion using pre-trained language models.
perspective: Hierarchical evaluation frameworks, such as applying strict symbolic verification only to high-risk predictions, are a potential direction for improving knowledge graph completion evaluation.
claim: Masking techniques for knowledge graph completion, such as MEM-KGC, preserve full entity integrity by using [MASK] tokens, avoiding the information loss associated with subword splitting.
Knowledge Graphs: Opportunities and Challenges (Springer Nature, link.springer.com, Apr 3, 2023), 14 facts

reference: Lin Y, Liu Z, Sun M et al. published 'Learning entity and relation embeddings for knowledge graph completion' in the Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence in 2015.
procedure: Knowledge graph completion trains machine learning models on an existing graph to assess the plausibility of new candidate triplets, adding those with high plausibility to the graph.
claim: The challenges in developing knowledge graphs are categorized into the limitations of five topical technologies: knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning.
claim: Knowledge graph completion models have been utilized in domains including digital libraries (Yao et al. 2017), biomedical research (Harnoune et al. 2021), social media (Abu-Salih 2021), and scientific research (Nayyeri et al. 2021).
reference: Ren et al. (2022) published 'SMORE: Knowledge graph completion and multi-hop reasoning in massive knowledge graphs' in the Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, which addresses the challenges of knowledge graph completion and reasoning at scale.
claim: Standard knowledge graph completion methods assume knowledge graphs are static and fail to capture their dynamic evolution.
claim: Knowledge graph completion aims to improve the quality of knowledge graphs by predicting additional relationships and entities, as most knowledge graphs currently lack comprehensive representations of all knowledge in a field.
procedure: Knowledge graph completion aims to expand existing knowledge graphs by adding new triplets, using techniques for link prediction (Wang et al. 2020b; Akrami et al. 2020) and entity prediction (Ji et al. 2021).
procedure: Knowledge graph completion typically utilizes link prediction techniques to generate candidate triplets and subsequently assigns plausibility scores to those triplets (Ji et al. 2021).
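The predict-then-score procedure above can be sketched with a toy translation-based (TransE-style) scorer. The entities, two-dimensional embeddings, and acceptance threshold below are illustrative assumptions for a minimal runnable example, not a real trained model:

```python
import numpy as np

# Toy pre-trained embeddings for a tiny graph (hypothetical values).
emb = {
    "Paris":      np.array([1.0, 0.0]),
    "France":     np.array([0.0, 1.0]),
    "Berlin":     np.array([1.1, 0.2]),
    "Germany":    np.array([0.1, 1.2]),
    "capital_of": np.array([-1.0, 1.0]),
}

def plausibility(head: str, relation: str, tail: str) -> float:
    """TransE-style score: negative distance between (head + relation)
    and tail, so values closer to 0 mean more plausible triplets."""
    return -float(np.linalg.norm(emb[head] + emb[relation] - emb[tail]))

# Link prediction: rank candidate tails for the query (Berlin, capital_of, ?).
candidates = ["France", "Germany", "Paris"]
scores = {t: plausibility("Berlin", "capital_of", t) for t in candidates}
best = max(scores, key=scores.get)

# Completion step: add the triplet only if it clears a plausibility threshold.
THRESHOLD = -0.5
new_triples = [("Berlin", "capital_of", best)] if scores[best] > THRESHOLD else []
print(new_triples)  # [('Berlin', 'capital_of', 'Germany')]
```

In a real system the embeddings come from training on the existing graph, candidates number in the millions, and the threshold (or top-k cutoff) is tuned on held-out triples.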
claim: Significant technical challenges in knowledge graph development involve limitations in five representative technologies: knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning.
claim: Most current knowledge graph completion methods are limited to closed-world data sources, meaning they require entities or relations to already exist in the knowledge graph in order to generate new triplets.
reference: Yao L, Mao C, Luo Y published the paper 'KG-BERT: BERT for knowledge graph completion' as an arXiv preprint in 2019.
claim: Open-world techniques for knowledge graph completion are emerging to extract potential objects from outside existing knowledge bases.
claim: Human supervision is currently considered the gold standard for evaluating knowledge graph completion, according to Ballandies and Pournaras (2021).
Construction of Knowledge Graphs: State and Challenges (arXiv, arxiv.org), 8 facts

claim: Knowledge Graph completion is the task of adding new entries, such as nodes, relations, and properties, to a knowledge graph using existing relations.
claim: Current approaches for Knowledge Graph completion typically limit themselves to a single task, such as determining missing type information, missing relations (link prediction), or missing attribute values, and lack holistic solutions that simultaneously improve the quality of knowledge graphs in several areas.
reference: A. Saeedi, E. Peukert, and E. Rahm authored 'Incremental Multi-source Entity Resolution for Knowledge Graph Completion', presented at the European Semantic Web Conference in 2020.
reference: CoDEx is a comprehensive benchmark designed for knowledge graph completion, introduced by T. Safavi and D. Koutra in 2020.
claim: Quality assurance and knowledge graph completion steps are not required for every knowledge graph update and may be executed asynchronously within separate pipelines.
reference: The FAMER system, which won the DI2KG Challenge, is used for knowledge graph completion, as described by D. Obraczka, A. Saeedi, and E. Rahm in their 2019 paper presented at the 1st International Workshop on Challenges and Experiences from Data Integration to Knowledge Graphs.
claim: Paulheim's survey distinguishes between internal and external methods for Knowledge Graph completion: internal approaches rely solely on the knowledge graph as input, while external methods incorporate additional data such as text corpora and human knowledge sources like crowdsourcing.
claim: Existing benchmarks for knowledge graph construction are currently limited to individual tasks such as knowledge extraction, ontology matching, entity resolution, and knowledge graph completion.
A survey on augmenting knowledge graphs (KGs) with large ... (Springer, link.springer.com, Nov 4, 2024), 3 facts

claim: Benchmarks like SimpleQuestions and FreebaseQA provide standardized datasets and evaluation metrics for consistent, comparative assessment of LLMs integrated with knowledge graphs, covering tasks such as natural language understanding, question answering, commonsense reasoning, and knowledge graph completion.
reference: Moon, Jones, and Samatova authored 'Learning entity type embeddings for knowledge graph completion', published in the Proceedings of the 2017 ACM on Conference on Information and Knowledge Management.
reference: WikiKG90M is a large-scale benchmark used to evaluate knowledge graph completion tasks, specifically link prediction and entity classification.
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... (arXiv, arxiv.org, Mar 18, 2025), 2 facts

reference: Conventional methods for Knowledge Graph completion, such as TransE, compute embeddings for entities and relationships to enhance the comprehensiveness of Knowledge Graphs for tasks like information retrieval and logical question answering.
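TransE's core idea, mentioned above, is that a plausible triple should satisfy h + r ≈ t in embedding space; training typically uses a margin ranking loss against corrupted triples. A minimal sketch (the vectors and margin below are illustrative, not a trained model):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE models a relation as a translation: a plausible triple
    (h, r, t) should have a small distance ||h + r - t||."""
    return float(np.linalg.norm(h + r - t))

def margin_loss(h, r, t, t_corrupt, margin=1.0):
    """Margin ranking loss: push the true triple's distance below the
    corrupted triple's distance by at least `margin`."""
    return max(0.0, margin + transe_score(h, r, t) - transe_score(h, r, t_corrupt))

h = np.array([0.0, 0.0])
r = np.array([1.0, 0.0])
t_true = np.array([1.0, 0.0])   # h + r == t_true, so distance 0
t_bad = np.array([3.0, 0.0])    # distance 2 from h + r

print(margin_loss(h, r, t_true, t_bad))  # 0.0: already separated by more than the margin
```

During training, corrupted triples are generated by randomly replacing the head or tail, and gradients of this loss update the embeddings.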
claim: Large Language Models (LLMs) contribute to knowledge graph completion, specifically aiding in downstream tasks such as node classification and link prediction.
Knowledge Graphs: Opportunities and Challenges (ACM Digital Library, dl.acm.org), 1 fact

claim: The authors of the paper 'Knowledge Graphs: Opportunities and Challenges' identify knowledge graph completion as a severe technical challenge in the field of knowledge graphs.
Combining Knowledge Graphs and Large Language Models (arXiv, arxiv.org, Jul 9, 2024), 1 fact

reference: Yanbin Wei, Qiushi Huang, Yu Zhang, and James Kwok authored the 2023 paper 'KICGPT: Large language model with knowledge in context for knowledge graph completion', published in Findings of the Association for Computational Linguistics: EMNLP 2023.
Knowledge Graphs: Opportunities and Challenges (arXiv, arxiv.org, Mar 24, 2023), 1 fact

claim: The technical challenges in the field of knowledge graphs include knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning.
The construction and refined extraction techniques of knowledge ... (Nature, nature.com, Feb 10, 2026), 1 fact

reference: The multi-level KBGC model is a knowledge graph completion method applied to high-speed railway turnout maintenance, published in Actuators (MDPI) in 2024.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... (Springer, link.springer.com, Dec 9, 2025), 1 fact

claim: The reasoning-for-learning paradigm enhances interpretability, sample efficiency, and safety in learning, particularly in domains where logical consistency is critical, such as knowledge graph completion, autonomous systems, and medical diagnostics.