concept

graph neural networks

Also known as: GNNs, GNN, Graph Neural Network

Facts (59)

Sources
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends - arXiv, Nov 7, 2024 (9 facts)
reference: Finzel et al. (2022) proposed a method using Graph Neural Networks (GNN) to classify relational data, verifying outputs by generating explanations combined with Inductive Logic Programming (ILP).
procedure: Graph Neural Networks (GNNs) iteratively update vector representations of entities and relationships via a message-passing mechanism in which entities (represented as nodes) exchange information along relationships (represented as edges) to refine their representations.
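The message-passing update described above can be sketched in a few lines. This is a minimal illustrative implementation (mean aggregation with a tanh update), not the specific architecture of any paper cited here; all names and shapes are assumptions:

```python
import numpy as np

def message_passing_step(node_feats, edges, w_self, w_neigh):
    """One round of message passing: each node averages its incoming
    neighbor vectors, then combines them with its own state."""
    num_nodes, _ = node_feats.shape
    agg = np.zeros_like(node_feats)
    counts = np.zeros(num_nodes)
    for src, dst in edges:                  # messages flow src -> dst
        agg[dst] += node_feats[src]
        counts[dst] += 1
    agg /= np.maximum(counts, 1)[:, None]   # mean over incoming messages
    return np.tanh(node_feats @ w_self + agg @ w_neigh)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                 # 4 nodes, 8-dim features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # a directed cycle
w1 = 0.1 * rng.normal(size=(8, 8))
w2 = 0.1 * rng.normal(size=(8, 8))
h = message_passing_step(x, edges, w1, w2)  # updated (4, 8) representations
```

Stacking several such steps lets information propagate across multi-hop neighborhoods, which is what the iterative update in the fact above refers to.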
reference: Bettina Finzel, Anna Saranti, Alessa Angerschmid, David Tafler, Bastian Pfeifer, and Andreas Holzinger investigated the use of symbolic predicates learned on relevance-ranked sub-graphs to generate explanations for the conceptual validation of graph neural networks in 2022.
reference: Kislay Raj proposed a neuro-symbolic approach to enhance the interpretability of graph neural networks by integrating external knowledge, presented at the 32nd ACM International Conference on Information and Knowledge Management.
procedure: In the method proposed by Finzel et al. (2022), GNNs aggregate neighbor node information, update feature vectors, and use explainers such as GNN-Explainer to identify the graph structures impacting classification results.
claim: Inductive Logic Programming (ILP) uses transformed symbolic data as background knowledge to learn rules that describe the logic of Graph Neural Network (GNN) classification decisions.
procedure: The intermediate representation method for Graph Neural Networks (GNNs) converts neural network output into Prolog facts and rules suitable for Inductive Logic Programming (ILP) processing, effectively bridging neural network features and symbolic logic.
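Such a neural-to-symbolic bridge can be sketched as a serializer from a labeled graph to Prolog facts. The predicate names and attribute scheme below are invented for illustration and are not taken from Finzel et al.:

```python
def graph_to_prolog(nodes, edges):
    """Serialize a labeled graph into Prolog facts usable as ILP
    background knowledge.

    nodes: dict mapping node id -> {attribute: value}
    edges: list of (source, relation, target) triples
    """
    facts = [f"{attr}({nid},{val})."
             for nid, attrs in nodes.items()
             for attr, val in attrs.items()]
    facts += [f"{rel}({src},{dst})." for src, rel, dst in edges]
    return facts

# A toy Kandinsky-style scene: two geometric objects and one relation.
nodes = {"o1": {"shape": "circle", "color": "red"},
         "o2": {"shape": "triangle", "color": "blue"}}
edges = [("o1", "left_of", "o2")]
prolog_facts = graph_to_prolog(nodes, edges)
# prolog_facts[0] == "shape(o1,circle)."
```

An ILP system can then learn rules such as `positive(X) :- shape(X,circle), left_of(X,Y)` over facts of this form.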
claim: Rules generated by Inductive Logic Programming (ILP) for Graph Neural Network (GNN) classification decisions consider structural features, such as spatial relationships between nodes and node attributes like color and shape, alongside feature importance scores.
reference: The method proposed by Finzel et al. (2022) uses GNNs to extract features from graph-structured data, such as the Kandinsky pattern data set, which contains geometric objects (circles, triangles) with attributes like shape, color, size, and location.
Large Language Models Meet Knowledge Graphs for Question ... - arXiv, Sep 22, 2025 (7 facts)
reference: GraphLLM leverages large language models to decompose multi-hop questions into sub-questions and retrieves sub-graphs via graph neural networks and large language models to generate answers based on graph reasoning.
reference: The EtD method (Liu et al., 2024a) uses a Graph Neural Network (GNN) to identify promising candidates and extract fine-grained knowledge, then creates a knowledge-enhanced multiple-choice prompt to guide the Large Language Model in generating the final answer.
claim: Graph neural networks (GNNs) have been investigated as a technique to enhance retrieval coverage from passages in question-answering systems, as noted by Li et al. in 2025.
claim: GREASELM (Zhang et al., 2021) integrates encoded representations from Large Language Models and Graph Neural Networks by introducing modality interaction layers to blend structured knowledge with language contexts.
procedure: GRAG (Hu et al., 2024b) retrieves top-relevant subgraphs from a textual graph and integrates them with a query by aggregating and aligning graph embeddings with text embeddings using Graph Neural Networks (GNNs).
reference: EXPLAIGNN (Christmann et al., 2023) constructs a heterogeneous graph from retrieved knowledge and user explanations to trace provenance and improve answer explainability, generating explanatory evidence using a graph neural network (GNN) with question-level attention.
reference: GoR, as described by Zhang et al. (2024b), optimizes node embeddings during graph indexing by leveraging GNN- and BERTScore-based objectives to address the complexity of creating vector indexes from long-range facts.
Track: Poster Session 3 - AISTATS 2026, Samuel Tesfazgi, Leonhard Sprandl, Sandra Hirche (7 facts)
claim: Accurate quantification of both aleatoric and epistemic uncertainties is essential when deploying Graph Neural Networks in high-stakes applications such as drug discovery and financial fraud detection.
claim: Graph Neural Networks (GNNs) are susceptible to distribution shifts, which creates vulnerability and security issues in critical domains.
measurement: Learnable Laplacian Positional Encodings (LLPE) improve accuracy across a variety of Graph Neural Networks (GNNs), including graph transformers, by up to 35% on synthetic graphs and 14% on real-world graphs, based on an evaluation of 12 benchmarks.
claim: The EPN-reg technique introduces evidence-based regularization to enhance the estimation of epistemic uncertainty in Graph Neural Networks.
claim: DeCaf is a causal decoupling framework that independently learns unbiased feature-label and structure-label mappings to mitigate the impact of distribution shifts in Graph Neural Networks.
reference: The source code for the paper 'Evidential Uncertainty Probes for Graph Neural Networks' is available at https://github.com/kthrn22/OOD-Linker.
claim: The Evidential Probing Network (EPN) is a plug-and-play framework for uncertainty quantification in Graph Neural Networks that uses a lightweight Multi-Layer-Perceptron (MLP) head to extract evidence from learned representations, allowing integration with pre-trained models without retraining.
Practices, opportunities and challenges in the fusion of knowledge ... - Frontiers (6 facts)
reference: ReLMKG, proposed by Cao and Liu in 2023, uses a language model to encode complex questions and guides a graph neural network in message propagation and aggregation through outputs from different layers.
claim: Dynamic reasoning systems for knowledge graph question answering include DRLK (Zhang M. et al., 2022), which extracts hierarchical QA context features, and QA-GNN (Yasunaga et al., 2021), which performs joint reasoning by scoring knowledge graph relevance and updating representations through graph neural networks.
reference: Tian et al. (2024) proposed 'Graph neural prompting', a method for using large language models with graph neural networks.
reference: GreaseLM (Zhang X. et al., 2022) employs a layer-wise modality interaction mechanism that tightly integrates a language model with a Graph Neural Network, enabling bidirectional reasoning between textual and structured knowledge.
reference: QA-GNN (Yasunaga et al., 2021) utilizes Graph Neural Networks (GNNs) to reason over knowledge graphs while incorporating LLM-based semantic reasoning. The model uses relevance scoring to estimate the importance of knowledge graph nodes concerning a given question and applies GNN reasoning to integrate those nodes into the LLM's answer generation.
claim: ReLMKG (Cao and Liu, 2023) employs graph neural networks (GNNs) for explicit knowledge propagation in knowledge graph question answering.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... - arXiv, Feb 16, 2025 (6 facts)
claim: Graph Neural Networks (GNNs) are used for tasks including link prediction, node classification, recommendation systems, and knowledge graph reasoning.
claim: Graph neural networks (GNNs) excel in handling structured data, while generative adversarial networks (GANs) excel in generating realistic data samples.
reference: Graph Neural Networks (GNNs) extend neural architectures to graph-structured data, enabling advanced reasoning over interconnected entities.
claim: Graph Neural Networks (GNNs) are effective in named entity recognition (NER) by leveraging graph representations to capture contextual dependencies and relationships between entities in text.
reference: Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun published 'Graph neural networks: A review of methods and applications' in AI Open in 2020.
claim: Graph Neural Networks (GNNs) are used for relation extraction, where they identify and classify semantic relationships between entities to build and enhance knowledge graphs.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... - Springer, Dec 9, 2025 (3 facts)
claim: Knowledge graph embeddings and graph neural networks exemplify the unified approach in neuro-symbolic AI by geometrizing logical relations and enabling end-to-end trainability via gradient-based optimization.
claim: Recent advances in neuro-symbolic AI aim to mitigate scalability and performance issues through modular and hierarchical designs, approximate symbolic inference, and scalable neural backends like graph neural networks (GNNs) that support multi-hop reasoning.
claim: Graph Neural Networks (GNNs) enrich neuro-symbolic integration by embedding visual objects and their relations within ontologies and knowledge graphs, allowing models to infer complex relationships in cluttered or ambiguous images.
Knowledge Graphs: Opportunities and Challenges - Springer Nature, Apr 3, 2023 (2 facts)
claim: R-GCN, introduced by Schlichtkrull et al. in 2018, extends graph neural networks (GNNs) to knowledge graphs by applying relation-specific transformations.
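The relation-specific idea can be sketched as follows. This is a deliberately simplified R-GCN-style layer for illustration only, omitting details of the original paper such as basis decomposition and its normalization constants:

```python
import numpy as np

def rgcn_layer(node_feats, triples, rel_weights, w_self):
    """R-GCN-style update: each message is transformed by a weight
    matrix chosen according to the edge's relation type.

    triples: list of (src, rel_id, dst); rel_id indexes rel_weights.
    """
    agg = np.zeros_like(node_feats)
    counts = np.zeros(len(node_feats))
    for src, rel, dst in triples:
        agg[dst] += node_feats[src] @ rel_weights[rel]  # relation-specific
        counts[dst] += 1
    agg /= np.maximum(counts, 1)[:, None]
    return np.maximum(node_feats @ w_self + agg, 0.0)   # ReLU

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 4))               # 3 entities, 4-dim features
triples = [(0, 0, 1), (1, 1, 2)]          # two different relation types
rels = 0.1 * rng.normal(size=(2, 4, 4))   # one weight matrix per relation
w = 0.1 * rng.normal(size=(4, 4))
h = rgcn_layer(x, triples, rels, w)       # (3, 4) updated features
```

The key difference from a plain GNN layer is the indexed `rel_weights[rel]`: an `author_of` edge and a `cites` edge transform their messages differently, which is what lets the model represent typed knowledge-graph relations.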
reference: Fan et al. (2019) proposed GraphRec, a graph neural network framework for social recommendations that utilizes user-user and user-item knowledge graphs to provide accurate recommendations by aggregating social relationships and user-item interactions.
The Synergy of Symbolic and Connectionist AI in LLM ... - arXiv (2 facts)
claim: The integration of graph neural networks with rule-based reasoning positioned knowledge graphs at the core of the neuro-symbolic AI approach prior to the surge of Large Language Models (LLMs).
claim: Graph neural networks (GNNs) leverage graph structures to perform advanced pattern recognition and complex predictions within knowledge graphs.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... - arXiv, Jul 11, 2024 (2 facts)
claim: The ability of Graph Neural Networks (GNNs) to embed nodes and entire graphs numerically has significantly enhanced the computational handling of knowledge graphs.
claim: Graph Neural Networks (GNNs) excel in tasks such as node classification, link prediction, and the extraction of hidden patterns from graph-structured data.
Neuro-symbolic AI - Wikipedia (2 facts)
reference: Avelar and M.Y. Vardi published 'Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective' in 2020, which explores the intersection of graph neural networks and neural-symbolic computing.
claim: Sepp Hochreiter argued that Graph Neural Networks are the predominant models of neural-symbolic computing because they describe the properties of molecules, simulate social networks, or predict future states in physical and engineering applications with particle-particle interactions.
Empowering GraphRAG with Knowledge Filtering and Integration - arXiv, Mar 18, 2025 (2 facts)
reference: GNN-RAG (Mavromatis and Karypis, 2024) leverages Graph Neural Networks (Kipf and Welling, 2016) to process knowledge graph structures for effective retrieval.
reference: GNN-based retrieval leverages a Graph Neural Network to learn and retrieve informative paths from a knowledge graph.
A survey on augmenting knowledge graphs (KGs) with large ... - Springer, Nov 4, 2024 (1 fact)
procedure: After extracting entities and relationships from KGs, the data is embedded into continuous vector spaces using methods like node2vec or Graph Neural Networks (GNNs), allowing the LLM to incorporate structured knowledge during training and inference.
bureado/awesome-software-supply-chain-security - GitHub (1 fact)
claim: Cloudflare's client-side security system detects malicious JavaScript in npm packages by utilizing machine learning-based Abstract Syntax Tree analysis and graph neural networks.
Grounding LLM Reasoning with Knowledge Graphs - arXiv, Dec 4, 2025 (1 fact)
procedure: There are four primary methods for integrating Knowledge Graphs with Large Language Models: (1) learning graph representations, (2) using Graph Neural Network (GNN) retrievers to extract entities as text input, (3) generating code like SPARQL queries to retrieve information, and (4) using step-by-step interaction methods for iterative reasoning.
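Method (3) can be illustrated with a simple query template. The sketch below targets the Wikidata SPARQL endpoint purely as an example (the `wd:` prefix and the `prop/direct/` namespace are standard Wikidata conventions, predeclared on that endpoint); real systems typically have an LLM generate such queries rather than a fixed template:

```python
def neighbors_query(entity_qid, limit=10):
    """Build a SPARQL query retrieving facts directly attached to a
    Wikidata item, whose results could be fed back to an LLM as text."""
    return (
        "SELECT ?p ?o WHERE {\n"
        f"  wd:{entity_qid} ?p ?o .\n"
        "  FILTER(STRSTARTS(STR(?p), "
        "\"http://www.wikidata.org/prop/direct/\"))\n"
        f"}} LIMIT {limit}"
    )

q = neighbors_query("Q937")   # Q937 used here only as an example item id
```

The `FILTER` keeps only direct ("truthy") statements, which is usually what a retrieval step wants before serializing triples into the prompt.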
Empowering RAG Using Knowledge Graphs: KG+RAG = G-RAG - Neurons Lab (1 fact)
reference: Graph Neural Networks (GNNs) are specialized for graph-structured data and enhance Knowledge Graphs by capturing direct and indirect relationships, propagating information across graph layers to learn rich representations, and generalizing to various graph types for tasks like node classification and link prediction.
Knowledge Graph Combined with Retrieval-Augmented Generation ... - Academic Journal of Science and Technology, Dec 2, 2025 (1 fact)
reference: The paper 'Explore then Determine: A GNN-LLM Synergy Framework for Reasoning over Knowledge Graph' by Liu G, Zhang Y, Li Y, et al. was published as an arXiv preprint (arXiv:2406.01145) in 2024.
Efficient Knowledge Graph Construction and Retrieval from ... - arXiv, Aug 7, 2025 (1 fact)
claim: Graph Neural Networks (GNNs) encode graph structure and generate node embeddings for retrieval, but their inference speed is a bottleneck in large-scale systems because the computational cost of message passing across millions of nodes and edges hinders real-time applicability in low-latency enterprise settings (Chiang et al., 2019).
Neuro-Symbolic AI: Explainability, Challenges & Future Trends - Ali Rouhanifar, LinkedIn, Dec 15, 2025 (1 fact)
claim: Generative adversarial networks (GANs), transformers, and graph neural networks (GNNs) demonstrate strong capabilities in modeling complex spatial-temporal dependencies and achieving accurate motion reconstruction within the AI domain.
Construction of intelligent decision support systems through ... - Nature, Oct 10, 2025 (1 fact)
claim: The Integrated Knowledge-Enhanced Decision Support system uses a hybrid embedding generator that merges structural information from graph neural networks and semantic information from domain-adapted language models.
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... - arXiv, Mar 18, 2025 (1 fact)
claim: Combining Large Language Models with Graph Neural Networks (GNNs) significantly improves the modeling of graph-structured data.
Combining Knowledge Graphs and Large Language Models - arXiv, Jul 9, 2024 (1 fact)
claim: Graph neural networks can be used to calculate weights between graph nodes to provide a path of reasoning through a Knowledge Graph, which improves model interpretability.
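One way learned edge weights can yield a reasoning path is to treat them as confidences and extract the maximum-confidence route. The sketch below assumes weights lie in (0, 1] (the entity names are invented for illustration); maximizing the product of edge weights is equivalent to running Dijkstra over costs of -log(weight):

```python
import heapq, math

def best_path(weighted_edges, start, goal):
    """Return the maximum-confidence path from start to goal, given
    adjacency lists of (neighbor, weight) with weights in (0, 1]."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue                      # stale queue entry
        for nxt, w in weighted_edges.get(node, []):
            nd = d - math.log(w)          # product of weights -> sum of -logs
            if nd < dist.get(nxt, math.inf):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path = [goal]
    while path[-1] != start:              # walk predecessors back to start
        path.append(prev[path[-1]])
    return path[::-1]

edges = {"drug": [("protein", 0.9), ("pathway", 0.5)],
         "protein": [("disease", 0.8)],
         "pathway": [("disease", 0.9)]}
route = best_path(edges, "drug", "disease")   # ['drug', 'protein', 'disease']
```

The returned node sequence is itself the human-readable reasoning chain, which is the interpretability benefit the fact above describes.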
Integrating Knowledge Graphs and Vector RAG, Enhancing ... - RecSys (Substack), Aug 16, 2024 (1 fact)
reference: Amazon developed a novel Graph Neural Network (GNN) framework designed for multimodal and multilingual product recommendations in e-commerce.