Sources
Neuro-Symbolic AI: Explainability, Challenges, and Future Trends (arxiv.org)
Claim: Verifying and updating knowledge within Large Language Models (LLMs) remains an open research topic.
Awesome-Hallucination-Detection-and-Mitigation (github.com)
Reference: The paper "Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity" by Wang et al. (2023) surveys the state of factuality in large language models, covering knowledge, retrieval, and domain-specificity.
Grounding LLM Reasoning with Knowledge Graphs (arxiv.org)
Claim: Explicitly linking reasoning steps to graph structure offers a more interpretable view of how large language models navigate knowledge.