Concept: Deep Learning

Synthesized from dimensions

Deep learning is a dominant paradigm within artificial intelligence that utilizes multi-layered neural network architectures to learn complex patterns from vast amounts of data. As articulated by Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, it represents a core approach to modern machine learning. Its historical foundations trace back to early connectionist efforts, such as Frank Rosenblatt’s 1958 Perceptron source(/facts/6f377809-7f79-4080-8fea-aefa63df1ab7), and were significantly propelled in the 1980s by the development of the backpropagation algorithm by David Rumelhart, Geoffrey Hinton, and Ronald J. Williams.
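Rosenblatt's learning rule is simple enough to sketch directly. The following is a minimal illustrative implementation (a modern restatement, not Rosenblatt's original formulation), trained on the linearly separable AND function; all names and constants here are chosen for the example:

```python
def predict(w, b, x):
    """Threshold unit: fire (1) if the weighted sum exceeds zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(samples, epochs=10, lr=1.0):
    """Perceptron learning rule: for each misclassified example,
    nudge the weights and bias toward the target label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the rule converges:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

A single such unit can only separate classes with a line, which is precisely the limitation that multi-layered networks trained with backpropagation later overcame.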

The operational strength of deep learning lies in its ability to process unstructured data—such as text, images, and mass spectrometry data—to perform tasks ranging from information extraction and named entity recognition to complex ad retrieval and compound structure prediction. Modern architectures are frequently shaped by paradigms like next-token prediction source(/facts/b7820f4b-6f41-43b0-b919-8e24758ab34a), and the use of multi-task structures has been shown to enable stronger generalization in overparameterized regimes.
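The next-token-prediction paradigm mentioned above can be sketched in a few lines: a token sequence is turned into (context, target) pairs, and the model is penalized by the negative log-probability it assigns to each true next token. This is an illustrative sketch only; the fixed 0.25 probability stands in for a hypothetical model's output:

```python
import math

def next_token_pairs(tokens):
    """Shift a sequence to get (context, target) training pairs:
    the model sees tokens[:i] and must predict tokens[i]."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def token_loss(p_target):
    """Cross-entropy at one position: negative log-probability
    assigned to the correct next token."""
    return -math.log(p_target)

tokens = ["deep", "learning", "learns", "from", "data"]
pairs = next_token_pairs(tokens)  # 4 (context, target) pairs
# A hypothetical model assigning probability 0.25 to each target:
avg_loss = sum(token_loss(0.25) for _ in pairs) / len(pairs)
print(round(avg_loss, 3))  # 1.386
```

Minimizing this average loss over a large corpus is, in essence, the entire training signal for next-token-prediction models.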

Despite its efficacy, deep learning is often characterized as a "black box," which complicates explainability source(/facts/1fab2e35-ab06-4a98-90b7-eef509a3e89e). Experts frequently note that its reasoning is largely unverifiable, and that the probabilistic nature of these models can lead to hallucinations source(/facts/1a8d3d19-f4a3-4613-803d-048417df1a08) and to performance degradation when models rely on spurious correlations source(/facts/7a1317a9-5a7f-49b0-bc4c-3281024ab14f). Furthermore, unlike symbolic AI, deep learning typically requires vast amounts of labeled data source(/facts/6d80c346-7166-4619-87a6-695dcd4037d8).

To address these limitations, researchers are actively pursuing methods for uncertainty quantification, such as dropout as Bayesian approximation or other Bayesian methods that provide full predictive distributions source(/facts/990e346d-0bd3-48e6-ace0-59f65e233c4a). There is also a significant push toward neuro-symbolic AI, which seeks to combine the pattern recognition capabilities of deep learning (System 1) with the structured reasoning of symbolic AI (System 2) source(/facts/6440d68c-a62b-45bf-ba32-2cfda463543d).
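Gal and Ghahramani's "dropout as a Bayesian approximation" idea can be made concrete: keep dropout active at inference time and treat the spread of many stochastic forward passes as an uncertainty estimate. The single linear unit below is a toy stand-in for a real network, assumed here purely for illustration:

```python
import random
import statistics

def mc_dropout_predict(x, weights, p=0.5, n_samples=200, seed=0):
    """Monte Carlo dropout: run many forward passes with dropout
    still enabled; the mean of the passes approximates the
    prediction, and their standard deviation serves as an
    uncertainty estimate."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        # Inverted dropout: zero each weight with probability p,
        # rescale the survivors by 1 / (1 - p).
        preds.append(sum(
            w * xi / (1 - p) if rng.random() >= p else 0.0
            for w, xi in zip(weights, x)
        ))
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = mc_dropout_predict([1.0, 2.0, 3.0], [0.5, -0.2, 0.1])
# mean lands near the deterministic output 0.4; std would be 0
# for a standard (dropout-disabled) forward pass.
```

The practical appeal is that this recycles a standard regularizer already present in most networks, rather than requiring a separate Bayesian training procedure.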

Other active areas of research include integrating knowledge-guided neural networks, physics-informed frameworks, and relational inductive biases via graph networks. These efforts aim to reconcile deep learning with causal modeling source(/facts/aa0a054e-1bab-4cf7-a858-7fca2442124f) and improve our understanding of phenomena like double descent source(/facts/65b95bfb-c130-4b14-aa46-2a39869663ad) and model memorization source(/facts/0776e2f5-5b6d-4e51-a6cd-ad747a9304b6).
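The relational inductive bias behind graph networks amounts to computing node updates from graph neighborhoods rather than from a fixed input layout. A minimal sketch of one aggregation round over scalar node features (real graph networks use learned transforms; the identity is used here so the mechanism stays visible):

```python
def message_passing_step(node_feats, edges):
    """One round of neighborhood aggregation: each node's new
    feature is the mean of its own and its neighbors' features.
    The edge list is treated as undirected."""
    neighbors = {i: [i] for i in range(len(node_feats))}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    return [
        sum(node_feats[j] for j in neighbors[i]) / len(neighbors[i])
        for i in range(len(node_feats))
    ]

# Path graph 0 - 1 - 2: information diffuses along edges only,
# so the middle node blends all three features while the end
# nodes see only their single neighbor.
smoothed = message_passing_step([1.0, 0.0, 1.0], [(0, 1), (1, 2)])
```

Because the same update is applied at every node, the computation generalizes to graphs of any size or shape, which is exactly the inductive bias the Battaglia et al. paper advocates.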

Model Perspectives (2)
openrouter/x-ai/grok-4.1-fast (definitive, 87% confidence)
Deep learning represents a dominant paradigm in artificial intelligence applications (Heriot-Watt University), originating from foundational work like Frank Rosenblatt's 1958 Perceptron (arXiv). It has revolutionized knowledge graph construction and reasoning processes (IEEE). Deep learning powers large language models that process unstructured data such as text and images to learn patterns and make predictions (Cutter Consortium), with Amazon's Sponsored Products team applying it alongside generative AI for ad retrieval (Amazon Science). However, models act as black boxes, complicating explainability (Cogent Infotech); exhibit hallucinations arising from their probabilistic nature (The Journal of Nuclear Medicine); rely on vast labeled data, unlike symbolic AI (LinkedIn; Karthik Barma); and degrade when relying on spurious correlations (Samuel Tesfazgi et al.). Uncertainty quantification is critical, addressed via dropout as a Bayesian approximation (Yarin Gal and Zoubin Ghahramani) or Bayesian methods providing full predictive distributions (Springer), with uncertainty arising partly from stochastic training (Springer). In neuro-symbolic AI, deep learning handles System 1 pattern recognition, complemented by symbolic System 2 reasoning (Wikipedia). Phenomena like double descent extend the bias-variance trade-off (Mikhail Belkin), and memorization has been surveyed (ACM Computing Surveys).
openrouter/x-ai/grok-4.1-fast (85% confidence)
Deep learning represents a core approach in artificial intelligence, as articulated in a 2021 article by Yoshua Bengio, Yann LeCun, and Geoffrey Hinton. Its foundations stem from the backpropagation algorithm developed by David Rumelhart, Geoffrey Hinton, and Ronald J. Williams in the 1980s, which propelled connectionist AI toward modern implementations. Applications span information extraction, named entity recognition, and compound structure prediction from mass spectrometry. Model architectures are shaped by training data characteristics and paradigms like next-token prediction, with multi-task structures enabling stronger generalization in overparameterized regimes. However, many experts view its reasoning as largely unverifiable, prompting integrations like knowledge-guided neural networks, physics-informed frameworks, and efforts reconciling it with symbolic AI. Ongoing research addresses limitations, such as underexplored causal model integration and relational inductive biases via graph networks.

Facts (99)

Sources
A Comprehensive Review of Neuro-symbolic AI for Robustness ... link.springer.com Springer Dec 9, 2025 16 facts
reference: Yarin Gal and Zoubin Ghahramani demonstrated in their 2016 paper 'Dropout as a Bayesian approximation: representing model uncertainty in deep learning' that dropout can be used to represent model uncertainty in deep learning.
claim: Neuro-symbolic AI offers a promising alternative to conventional deep learning frameworks for addressing challenges related to model robustness, uncertainty quantification, and human intervenability.
reference: L.V. Jospin, H. Laga, F. Boussaid, W. Buntine, and M. Bennamoun published 'Hands-on Bayesian neural networks—a tutorial for deep learning users' in the IEEE Computational Intelligence Magazine in 2022.
reference: In deep learning, uncertainty arises significantly from the training procedure, which often involves stochastic optimization, random initializations, and data shuffling, potentially injecting noise and biases into the final model, as cited in reference [58].
reference: Bologna and Hayashi (2017) characterize symbolic rules embedded in deep DIMLP networks to address the challenge of transparency in deep learning.
reference: Fakour, Mosleh, and Ramezani published a structured review of literature concerning uncertainty in machine learning and deep learning in 2024.
claim: Neuro-symbolic approaches, such as the Neuro-Symbolic Program Synthesis (NSPS) model and Prolog-based reasoning systems, integrate symbolic reasoning frameworks with deep learning architectures to enable accurate query resolution across datasets like WikiTableQuestions and Spider.
claim: A core theme in neuro-symbolic AI research is the integration of formal logic, probabilistic reasoning, and deep learning into unified architectures.
claim: The integration of probabilistic programming languages with deep learning faces challenges, specifically regarding the efficiency of inference when neural likelihoods are involved and the difficulty of allowing gradients to flow through discrete sampling operations.
claim: Bayesian methods in deep learning offer a principled approach for modeling and quantifying uncertainty in predictions by generating a full predictive distribution rather than just a single point estimate.
reference: Garnelo, M. and Shanahan, M. explored the reconciliation of deep learning with symbolic artificial intelligence, specifically focusing on the representation of objects and relations, in Current Opinion in Behavioral Sciences.
claim: Knowledge-guided neural networks inject logical knowledge into the training and architecture of deep learning models to serve as an inductive bias or constraint, ensuring the network respects specific rules or reasons about its outputs.
reference: The paper 'Relational inductive biases, deep learning, and graph networks' was authored by Battaglia, P.W., Hamrick, J.B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., and Faulkner, R., and published as an arXiv preprint (arXiv:1806.01261) in 2018.
reference: Jiao et al. authored a comprehensive survey on the intersection of causal inference and deep learning.
perspective: The authors of the review article suggest that future neuro-symbolic systems will likely involve hybrid architectures that combine formal logic, probabilistic reasoning, and deep learning.
reference: Gojić, Vincan, Kundačina, Mišković, and Dragan (2023) examine non-adversarial robustness in deep learning methods applied to computer vision.
Construction of Knowledge Graphs: State and Challenges - arXiv arxiv.org arXiv 7 facts
reference: Al-Aswadi, Chan, and Gan published 'Automatic ontology construction from text: a review from shallow to deep learning trend' in Artificial Intelligence Review in 2020.
perspective: Al-Aswadi et al. argue that the field of ontology learning needs to transition from shallow learning to deep learning approaches to achieve deeper sentence analysis and improved learning of concepts and relations.
reference: Fan et al. (2020) utilized deep learning-based named entity recognition for knowledge graph construction specifically applied to geological hazards.
reference: S. Mudgal, H. Li, T. Rekatsinas, A. Doan, Y. Park, G. Krishnan, R. Deep, E. Arcaute, and V. Raghavendra published 'Deep Learning for Entity Matching: A Design Space Exploration' in the proceedings of the 2018 International Conference on Management of Data (SIGMOD Conference 2018) in Houston, TX, USA, in June 2018.
claim: Recent approaches to entity resolution for knowledge graphs utilize multi-source big data techniques, Deep Learning, or knowledge graph embeddings.
claim: While entity resolution typically operates on semi-structured data, deep learning-based approaches have been developed to address entity resolution in unstructured data sources.
reference: J. Li, A. Sun, J. Han, and C. Li authored 'A Survey on Deep Learning for Named Entity Recognition,' which was published in IEEE Transactions on Knowledge and Data Engineering.
Understanding LLM Understanding skywritingspress.ca Skywritings Press Jun 14, 2024 7 facts
reference: Bengio authored the work titled 'Conscious processing, inductive biases and generalization in deep learning,' which examines the intersection of conscious processing, inductive biases, and generalization in deep learning systems.
claim: Large language models (LLMs) possess the same basic limitations as other deep learning-based systems, specifically struggling to generalize accurately outside of their training distributions and exhibiting a propensity to confabulate.
procedure: Researchers at McGill and MILA used deep learning to interpret clinician thinking by pre-training on hundreds of millions of general sentences and applying large language models to over 4,000 free-form health records to distinguish confirmed from suspected autism cases.
claim: Mikhail Belkin identifies the 'double descent' risk curve as a key statistical phenomenon in deep learning that extends the traditional U-shaped bias-variance trade-off curve beyond the point of interpolation.
claim: Mikhail Belkin, a Professor at the Halicioglu Data Science Institute at the University of California, San Diego, and an Amazon Scholar, researches the theory and applications of machine learning and data analysis, specifically focusing on statistical phenomena in deep learning.
claim: Misha Belkin from UCSD presented on dimensionality and feature learning in Deep Learning and Large Language Models at the 'Understanding LLM Understanding' summer school.
claim: Recursive Feature Machines are non-backpropagation-based algorithms that incorporate lessons from deep learning to learn low-dimensional features.
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends arxiv.org arXiv Nov 7, 2024 6 facts
reference: The paper 'EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case' by Díaz-Rodríguez et al. (2022) describes the X-NeSyL methodology for fusing deep learning with expert knowledge graphs, applied to the MonuMAI cultural heritage use case.
reference: Lample and Charton (2019) applied deep learning techniques to symbolic mathematics.
reference: Samuel Kim, Peter Y Lu, Srijon Mukherjee, Michael Gilbert, Li Jing, Vladimir Čeperić, and Marin Soljačić authored 'Integration of neural network-based symbolic regression in deep learning for scientific discovery', published in 2020.
reference: SATNet, developed by Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter in 2019, is a differentiable satisfiability solver designed to bridge deep learning and logical reasoning.
reference: Miles Cranmer, Alvaro Sanchez Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, and Shirley Ho developed a method for discovering symbolic models from deep learning using inductive biases, published in Advances in Neural Information Processing Systems 33 in 2020.
procedure: The author of the study uses embedding vectors as an intermediate representation to bridge deep learning feature expression and symbolic logic.
Unlocking the Potential of Generative AI through Neuro-Symbolic ... arxiv.org arXiv Feb 16, 2025 6 facts
reference: Guillaume Lample and François Charton authored 'Deep learning for symbolic mathematics,' published as an arXiv preprint (arXiv:1912.01412) in 2019.
claim: Neuro-symbolic artificial intelligence (NSAI) is defined as a hybrid approach that combines deep learning's ability to process large-scale, unstructured data with the structured reasoning capabilities of symbolic methods.
quote: Yoshua Bengio maintained during the 2019 Montreal AI Debate that 'sequential reasoning can be performed while staying in a deep learning framework.'
reference: Mengjia Zhou, Donghong Ji, and Fei Li authored the paper 'Relation extraction in dialogues: A deep learning model based on the generality and specialty of dialogue text', published in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:2015–2026, in 2021.
reference: Miguel Angel Mendez-Lucero, Enrique Bojorquez Gallardo, and Vaishak Belle authored 'Semantic objective functions: A distribution-aware method for adding logical constraints in deep learning', published as an arXiv preprint (arXiv:2405.15789) in 2024.
reference: Maziar Raissi, Paris Perdikaris, and George E. Karniadakis authored 'Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations', published in the Journal of Computational Physics in 2019.
Neuro-symbolic AI - Wikipedia en.wikipedia.org Wikipedia 5 facts
reference: The 'Neural: Symbolic → Neural' approach relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model, such as using a Macsyma-like symbolic mathematics system to create training examples for a neural model.
claim: In the context of neuro-symbolic AI, deep learning is viewed as best handling System 1 cognition (pattern recognition), while symbolic reasoning is viewed as best handling System 2 cognition (planning, deduction, and deliberative thinking).
reference: Luciano Serafini and Artur d'Avila Garcez authored the 2016 paper 'Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge', published on arXiv.
claim: Neuro-symbolic AI is a subfield of artificial intelligence that integrates neural methods, such as neural networks and deep learning, with symbolic methods, such as formal logic, knowledge representation, and automated reasoning.
reference: Luciano Serafini and Artur d'Avila Garcez authored 'Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge', which discusses integrating deep learning with logical reasoning.
Track: Poster Session 3 - aistats 2026 virtual.aistats.org Samuel Tesfazgi, Leonhard Sprandl, Sandra Hirche · AISTATS 4 facts
claim: Leveraging optimization structure in linear models allows for significantly faster convergence rates compared to methods proposed in the context of deep learning.
perspective: James McInerney and Nathan Kallus argue that uncertainty quantification in deep learning is crucial for safe and reliable decision-making in downstream tasks.
claim: Multi-task representation learning is widely used in deep learning applications, including computer vision and natural language processing, due to its generalization performance.
claim: Deep learning models often suffer from performance degradation when relying on spurious correlations between input features and labels, leading to poor prediction accuracy for minority groups, particularly when training data are limited or imbalanced.
Global perspectives on energy technology assessment and ... link.springer.com Springer Oct 30, 2025 4 facts
reference: Nanjar et al. (2024) performed a systematic literature review of machine learning and deep learning approaches used for energy prediction.
claim: Machine learning and deep learning methods, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), are efficient at capturing and utilizing temporal data sequences and time-series patterns in energy systems.
reference: El-Azab et al. (2024) evaluated machine learning and deep learning approaches for forecasting electricity prices and assessing energy loads using real datasets.
claim: AI can analyze renewable energy policy scenarios, generate models to anticipate long-term impacts of renewable energy integration, and assess climate change risks using machine learning and deep learning functions.
A Survey on the Theory and Mechanism of Large Language Models arxiv.org arXiv Mar 12, 2026 3 facts
reference: The paper 'Memorization in deep learning: a survey' was published in ACM Computing Surveys.
claim: The design and selection of deep learning model architectures are influenced by both the latent characteristics of the training data and the training paradigm adopted, such as next-token prediction (NTP) or masked language modeling (MLM).
claim: Multi-task structures allow for shorter encoding, which theoretically proves that models can achieve stronger generalization even in the overparameterized state of deep learning.
Neural-Symbolic AI: The Next Breakthrough in Reliable and ... hu.ac.ae Heriot-Watt University Dec 29, 2025 3 facts
reference: Neural-Symbolic AI, defined as the integration of deep learning and symbolic reasoning, is a leading approach for addressing transparency and explainability issues in artificial intelligence (Zhang & Sheng, 2024).
claim: Deep learning has been a dominant approach for many artificial intelligence applications since the inception of the field.
claim: Many experts consider deep learning to be an approach where the reasoning behind conclusions and predictions is largely unverifiable.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... arxiv.org arXiv Jul 11, 2024 3 facts
reference: Yoshua Bengio, Yann LeCun, and Geoffrey Hinton published an article titled 'Deep learning for AI' in the Communications of the ACM in 2021.
claim: David Rumelhart, Geoffrey Hinton, and Ronald J. Williams developed the backpropagation algorithm in the 1980s, which significantly advanced connectionist AI and set the stage for modern deep learning.
reference: Yoshua Bengio discussed deep learning for System 2 processing in a 2020 AAAI presentation.
Comprehensive framework for smart residential demand side ... nature.com Nature Mar 22, 2025 3 facts
claim: Hafeez et al. investigated the use of electric vehicle charging stations in demand-side management using deep learning methods, showing that artificial intelligence can optimize energy consumption patterns while maintaining grid reliability.
reference: Hafeez et al. investigated the use of deep learning methods for managing electric vehicle charging stations within demand-side management, demonstrating that artificial intelligence can optimize energy consumption patterns while maintaining grid reliability.
reference: Hafeez et al. (2023) utilized a deep learning method to manage electric vehicle charging station utilization within demand-side management in IEEE Access.
Knowledge Graphs: Opportunities and Challenges - Springer Nature link.springer.com Springer Apr 3, 2023 2 facts
reference: Liu Q, Jiang H, Evdokimov A et al. published 'Probabilistic reasoning via deep learning: Neural association models' as an arXiv preprint in 2016.
claim: Neural network-based methods for knowledge graph embeddings employ deep learning to represent triplets, with representative works including SME, ConvKB, and R-GCN (Dai et al. 2020a).
Papers - Dr Vaishak Belle vaishakbelle.github.io 2 facts
reference: The paper 'Logic + Reinforcement Learning + Deep Learning: A Survey' by A. Bueff and V. Belle was published in the ICAART proceedings in 2023.
reference: M. Mendez-Lucero and Vaishak Belle authored 'Boolean Connectives and Deep Learning: Three Interpretations', published in the Compendium of Neurosymbolic Artificial Intelligence by IOS Press in 2023.
Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org arXiv 2 facts
reference: Ngo, Chan, and Mindermann (2022) analyzed the AI alignment problem from the perspective of deep learning.
reference: Sheth et al. (2019) explored 'knowledge-infused learning' as a method for enhancing deep learning capabilities.
Medicinal plants and human health: a comprehensive review of ... link.springer.com Springer Nov 5, 2025 2 facts
claim: Deep learning approaches can predict compound structures from mass spectrometry data with high accuracy, aiding in metabolite identification and annotation.
claim: Artificial intelligence and deep learning technologies accelerate plant research by addressing computational challenges in omics data analysis.
Advancing energy efficiency: innovative technologies and strategic ... oaepublish.com OAE Publishing 1 fact
reference: Ullah, A., Haydarov, K., Ul, H. I., et al. published 'Deep learning assisted buildings energy consumption profiling using smart meter data' in the journal Sensors in 2020 (Volume 20, 873).
Zero-knowledge LLM hallucination detection and mitigation through ... amazon.science Amazon Science 1 fact
claim: The Sponsored Products and Brands (SPB) team at Amazon Ads develops solutions involving generative AI, deep learning, multi-objective optimization, and reinforcement learning to improve ad retrieval, auctions, and whole-page relevance.
A Review of State-of-the-Art Deep Learning Models for Knowledge ... ieeexplore.ieee.org IEEE Feb 11, 2026 1 fact
claim: Deep Learning has revolutionized the construction and reasoning processes of Knowledge Graphs in the recent past.
On Hallucinations in Artificial Intelligence–Generated Content ... jnm.snmjournals.org The Journal of Nuclear Medicine 1 fact
claim: Hallucinations in artificial intelligence–generated content for nuclear medicine imaging may arise from biased or nondeterministic data, the intrinsic probabilistic nature of deep learning, or limited visual feature understanding by models.
Neurosymbolic AI: The Future of AI After LLMs - LinkedIn linkedin.com Charley Miller · LinkedIn Nov 11, 2025 1 fact
claim: Neurosymbolic AI combines statistical deep learning (neural networks) with rules-based symbolic processing (logic, math, and programming languages) to improve deep reasoning and produce artificial general intelligence with common sense.
The Year of Neuro-Symbolic AI: How 2026 Makes Machines Actually ... cogentinfo.com Cogent Infotech Dec 30, 2025 1 fact
claim: Deep learning models operate as black boxes, which creates challenges for organizations that need to explain how specific AI decisions emerge, conflicting with modern governance demands.
Not Minds, but Signs: Reframing LLMs through Semiotics - arXiv arxiv.org arXiv Jul 1, 2025 1 fact
claim: Despite modern Large Language Models (LLMs) not operating through symbolic logic, the metaphors of cognition have persisted and intensified with the rise of deep learning, with traces of the 'mind-as-machine' metaphor surviving in recent neural approaches.
Building Better Agentic Systems with Neuro-Symbolic AI cutter.com Cutter Consortium Dec 10, 2025 1 fact
claim: Deep learning neural network-based large language models, such as GPT-4, Claude, and Gemini, process unstructured data including text, images, video, and streaming sensor data to learn patterns, classify data, and make predictions.
The psychological mechanisms through which digital content ... frontiersin.org Frontiers Nov 12, 2025 1 fact
reference: Saputra and Kumar (2025) utilized deep learning and transformer models to detect emotions in railway complaints as a data mining approach to analyzing public sentiment on Twitter.
Recent breakthroughs in the valorization of lignocellulosic biomass ... pubs.rsc.org Nilanjan Dey, Shakshi Bhardwaj, Pradip K. Maji · RSC Sustainability Jun 7, 2025 1 fact
claim: A system combining deep learning and building information modeling (BIM) was developed to detect underwater cracks in concrete structures.
Enterprise AI Requires the Fusion of LLM and Knowledge Graph stardog.com Stardog Dec 4, 2024 1 fact
claim: Enterprise customers require a GenAI stack that is modular, reusable, reproducible, trustworthy, includes lineage and traceability, and decouples machine learning, deep learning, and GenAI tasks while grounding them in quality data.
A Synergistic Workspace for Human Consciousness Revealed by ... elifesciences.org eLife 1 fact
reference: The study 'Deep learning and the Global Workspace Theory' was published in Trends in Neurosciences.
Neurosymbolic AI: The Future of Artificial Intelligence - LinkedIn linkedin.com Karthik Barma · LinkedIn May 24, 2024 1 fact
claim: Symbolic AI can operate with smaller datasets by leveraging existing knowledge bases and rules, addressing the limitation that deep learning models require vast amounts of labelled data.
Construction and Evaluation of an "AI+Knowledge Graph" Teaching ... researchsquare.com Research Square 1 fact
procedure: The Computer Vision Technology System used in the 'AI+Knowledge Graph' teaching model includes a Comprehension Analysis Module that uses real-time facial video capture and deep learning-based expression recognition algorithms to analyze eyebrow-eye angles and lip state changes to identify student emotional states like confusion, comprehension, and concentration.
The Synergy of Symbolic and Connectionist AI in LLM ... arxiv.org arXiv 1 fact
claim: Frank Rosenblatt developed the Perceptron in 1958, which served as an early foundation for modern deep learning.
Consciousness in Artificial Intelligence? A Framework for Classifying ... arxiv.org arXiv Nov 20, 2025 1 fact
claim: The concept of learning representations is central to deep learning and current AI models, as evidenced by the existence of the International Conference on Learning Representations (ICLR).
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Springer Nov 4, 2024 1 fact
reference: Wang et al. (2023) published 'Financial fraud detection based on deep learning: Towards large-scale pre-training transformer models' in the China Conference on Knowledge Graph and Semantic Computing.
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org Frontiers 1 fact
reference: The paper 'Design of legal judgment prediction on knowledge graph and deep learning' was published in the 2024 IEEE 2nd International Conference on Image Processing and Computer Applications (ICIPCA).
Combining large language models with enterprise knowledge graphs frontiersin.org Frontiers Aug 26, 2024 1 fact
reference: Qian et al. (2020) proposed a method for disambiguating entity names using non-annotated examples, Distant Supervision (DS) to generate pseudo labels, and active learning to address deep learning model data requirements, which involves ranking predictions by model confidence and involving users in labeling top and bottom elements.
Construction of intelligent decision support systems through ... - Nature nature.com Nature Oct 10, 2025 1 fact
claim: The joint integration of causal models and deep learning to provide multi-level, contextually appropriate explanations is largely underexplored, particularly for causal reasoning applications.
LLM-empowered knowledge graph construction: A survey - arXiv arxiv.org arXiv Oct 23, 2025 1 fact
reference: Yang Yang, Zhilei Wu, Yuexiang Yang, Shuangshuang Lian, Fengjie Guo, and Zhiwei Wang authored a survey of information extraction based on deep learning.
Knowledge graphs - Amazon Science amazon.science Amazon Science 1 fact
procedure: The responsibilities of an Applied Scientist on the Sponsored Products and Brands Off-Search team include designing and developing solutions using GenAI, deep learning, multi-objective optimization, and reinforcement learning to improve ad retrieval, auctions, and whole-page relevance.
What Is Open Source Software? - IBM ibm.com IBM 1 fact
claim: IT professionals commonly deploy open source software in categories including programming languages and frameworks, databases and data technologies, operating systems, Git-based public repositories, and frameworks for artificial intelligence, machine learning, and deep learning.
Engineering biology applications for environmental solutions - Nature nature.com Nature Apr 14, 2025 1 fact
reference: Nielsen and Voigt utilized deep learning to predict the laboratory of origin for engineered DNA in a 2018 study published in Nature Communications.