Sources
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org)
KG-CoT (Zhao et al., 2024) performs joint chain-of-thought reasoning between a knowledge graph and an LLM (GPT-4, GPT-3.5-Turbo, Llama-7B, Llama-13B) for KBQA and multi-hop QA, evaluated with Acc and Hit@K on the WQSP, CWQ, SQ, and WQ datasets.
KG-Agent (Jiang et al., 2024) applies agent-based instruction tuning with Davinci-003, GPT-4, and Llama-2-7B for KGQA and ODQA, evaluated with Hits@1 and F1 on the WQSP, CWQ, and GrailQA datasets.
ToG (Sun et al., 2024) combines beam-search retrieval over the knowledge graph with LLM agents (GPT-3.5-Turbo, GPT-4, Llama-2-70B-Chat) for KBQA and open-domain QA, evaluated with Hits@1 on the CWQ, WQSP, GrailQA, QALD10-en, and WQ datasets.
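The beam-search retrieval that ToG describes can be illustrated with a minimal sketch: at each hop, candidate relation paths are expanded from the current tail entities and only the top-scoring paths are kept. This is an assumption-laden toy, not the authors' implementation — the toy `KG` dictionary, the `beam_search` function, and the keyword-overlap `score` stub (which stands in for the LLM-based pruning step in ToG) are all hypothetical names introduced here for illustration.

```python
# Sketch of ToG-style beam-search retrieval over a knowledge graph.
# Hypothetical toy KG: subject -> list of (relation, object) edges.
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("member_of", "EU"), ("capital", "Paris")],
    "EU": [("headquartered_in", "Brussels")],
}

def score(question, path):
    # Stand-in for the LLM scoring/pruning call in ToG:
    # count word overlap between the question and the path.
    qwords = set(question.lower().replace("?", "").split())
    pwords = set()
    for rel, obj in path:
        pwords.update(rel.lower().replace("_", " ").split())
        pwords.update(str(obj).lower().split())
    return len(qwords & pwords)

def beam_search(question, start, width=2, depth=2):
    # Each beam is a path: a list of (relation, entity) steps.
    beams = [[("start", start)]]
    for _ in range(depth):
        candidates = []
        for path in beams:
            tail = path[-1][1]
            for rel, obj in KG.get(tail, []):
                candidates.append(path + [(rel, obj)])
        if not candidates:
            break
        # Keep only the top-`width` paths per hop.
        candidates.sort(key=lambda p: score(question, p), reverse=True)
        beams = candidates[:width]
    return beams

paths = beam_search("Which union is France a member of?", "Paris")
```

In ToG the pruning decision at each hop is delegated to the LLM rather than a lexical heuristic, which is what lets the search follow relations whose names only loosely match the question wording.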