concept

knowledge question answering

Also known as: knowledge question answering tasks

Facts (11)

Sources
The construction and refined extraction techniques of knowledge ... (nature.com, Nature, Feb 10, 2026), 11 facts
measurement: Excluding Retrieval-Augmented Generation (RAG) from the knowledge graph construction framework resulted in a BERTScore drop to 0.89 on knowledge question answering tasks.
measurement: The LoRA fine-tuned model achieved an overall score 11.9% higher than GPT-4 on knowledge question answering tasks.
procedure: The ablation study in the paper evaluates the contribution of individual components of the proposed framework by systematically removing or disabling key modules and measuring performance on knowledge question answering, tactical planning, and threat assessment tasks.
claim: The researchers constructed evaluation datasets for three specific tasks: knowledge question answering, tactical planning, and threat assessment, ensuring that domain complexity aligns with real-world scenarios.
measurement: In knowledge question answering, non-desensitized data achieves a BERTScore of 0.97, while desensitized data achieves 0.96.
procedure: The experimental evaluation of the DeepSeek-R1 70B LoRA model uses BERTScore for knowledge question answering, an overall score for tactical planning, Kendall's Tau for threat assessment, and privacy metrics of k-anonymity ≥ 5 and l-diversity ≥ 2.
measurement: On knowledge question answering tasks, the LoRA fine-tuned model achieved a BERTScore of 0.96, while GPT-4 achieved 0.85.
claim: The LoRA fine-tuned model outperforms the other comparative models on knowledge question answering and tactical planning tasks.
claim: The desensitization strategy, which applies generalization and masking techniques to raw data, preserves semantic integrity across knowledge question answering, tactical planning, and threat assessment tasks with minimal performance loss compared to non-desensitized approaches.
measurement: On knowledge question answering tasks, the LoRA fine-tuned model achieved an overall score of 0.94, while GPT-4 scored 0.84.
claim: The knowledge question answering dataset is derived from regulatory documents, covering areas such as the applicability of tactical rules and equipment usage standards, with answers sourced from standardized documents and verified by experts.
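The metrics cited in the facts above (Kendall's Tau for ranking agreement, and the k-anonymity ≥ 5 / l-diversity ≥ 2 privacy thresholds) can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation; the function and field names are illustrative, and the Tau variant shown is the simple Tau-a (no tie correction).

```python
from collections import Counter, defaultdict
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's Tau-a: (concordant - discordant) / total pairs.
    Measures agreement between a predicted and a reference ranking."""
    assert len(x) == len(y) and len(x) >= 2
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (len(x) * (len(x) - 1) // 2)

def k_anonymity(records, quasi_identifiers):
    """k-anonymity: size of the smallest group of records that share
    identical values on the quasi-identifier fields."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

def l_diversity(records, quasi_identifiers, sensitive):
    """l-diversity: minimum number of distinct sensitive values
    observed within any quasi-identifier group."""
    groups = defaultdict(set)
    for r in records:
        groups[tuple(r[q] for q in quasi_identifiers)].add(r[sensitive])
    return min(len(v) for v in groups.values())
```

For example, a fully reversed ranking gives `kendall_tau([1, 2, 3], [3, 2, 1]) == -1.0`, and a desensitized dataset would pass the paper's thresholds only if `k_anonymity(...) >= 5` and `l_diversity(...) >= 2` on its quasi-identifier fields.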