Relations (1)

related 0.40 — supporting 4 facts

Large Language Models are evaluated on their ability to perform multi-hop reasoning in specialized domains like medicine {fact:1, fact:3}, and are actively being enhanced for these tasks through frameworks like KG-Agent that integrate multi-hop reasoning processes into their training [1].

Facts (4)

Sources
Knowledge Graph Combined with Retrieval-Augmented Generation ... — drpress.org, Academic Journal of Science and Technology (1 fact)
claim: In specialized domains such as law, medicine, and science, text generation by Large Language Models (LLMs) often suffers from a lack of coherence and logical consistency, particularly when tasks require multi-hop reasoning and analysis.
Bridging the Gap Between LLMs and Evolving Medical Knowledge — arxiv.org, arXiv (1 fact)
measurement: In the second phase of MKG evaluation, expert LLMs achieved an 89% accuracy rate when answering complex medical queries requiring multi-hop reasoning, such as managing comorbidities or determining multi-drug treatment protocols.
Practices, opportunities and challenges in the fusion of knowledge ... — frontiersin.org, Frontiers (1 fact)
reference: KG-Agent, proposed by Jiang J. et al. in 2024, utilizes programming languages to design multi-hop reasoning processes on knowledge graphs and synthesizes code-based instruction datasets for fine-tuning base LLMs.
Large Language Models Meet Knowledge Graphs for Question ... — arxiv.org, arXiv (1 fact)
claim: RAG-based question answering systems face three primary technical challenges: (1) knowledge conflicts arising from inconsistent or overlapping data between LLMs and external sources, (2) poor relevance and quality of retrieved context, which directly impacts answer accuracy, and (3) a lack of iterative and multi-hop reasoning capabilities required for questions needing global or summarized contexts.
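The multi-hop reasoning over knowledge graphs that these facts describe (e.g., chaining from a drug to a disease to a comorbidity) can be illustrated with a toy traversal. This is a minimal sketch, not KG-Agent's actual method: the triples, entity names, and `multi_hop` function are invented for illustration.

```python
from collections import deque

# Toy knowledge graph as (subject, relation, object) triples.
# Entities and relations are illustrative, not from any real dataset.
TRIPLES = [
    ("metformin", "treats", "type_2_diabetes"),
    ("type_2_diabetes", "comorbid_with", "hypertension"),
    ("hypertension", "treated_by", "lisinopril"),
    ("lisinopril", "interacts_with", "potassium_supplements"),
]

def neighbors(entity):
    """Yield (relation, object) pairs reachable from entity in one hop."""
    for s, r, o in TRIPLES:
        if s == entity:
            yield r, o

def multi_hop(start, max_hops):
    """Breadth-first traversal collecting reasoning paths of up to max_hops triples."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        entity, path = queue.popleft()
        if len(path) == max_hops:
            paths.append(path)
            continue
        hops = list(neighbors(entity))
        if not hops:
            if path:  # dead end: keep whatever partial chain we found
                paths.append(path)
            continue
        for relation, obj in hops:
            queue.append((obj, path + [(entity, relation, obj)]))
    return paths

# Two-hop question: "Which conditions co-occur with the disease metformin treats?"
for path in multi_hop("metformin", 2):
    print(" -> ".join(f"{s} {r} {o}" for s, r, o in path))
```

Each returned path is an explicit chain of evidence, which is the property that distinguishes multi-hop KG reasoning from single-shot retrieval in the RAG challenges listed above.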