BERTRL, proposed by Zha et al. in 2022, fine-tunes a pre-trained language model (BERT) using relation instances and their reasoning paths as training samples.
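To make the training-sample construction concrete, the sketch below shows one plausible way a target triple and a supporting reasoning path could be linearized into a single text sequence for a BERT-style model. The function name, the separator format, and the example entities are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: linearize a relation instance plus a reasoning
# path into one BERT-style text input, in the spirit of BERTRL.
# The exact formatting here is an assumption for illustration.

def linearize(head, relation, tail, path):
    """Combine a target triple and a supporting path into one sequence."""
    target = f"{head} {relation} {tail}"
    # path: list of (h, r, t) triples connecting head to tail
    hops = "; ".join(f"{h} {r} {t}" for h, r, t in path)
    return f"[CLS] {target} [SEP] {hops} [SEP]"

example = linearize(
    "Kubrick", "directed", "2001 A Space Odyssey",
    [("Kubrick", "wrote", "2001 screenplay"),
     ("2001 screenplay", "basis_of", "2001 A Space Odyssey")],
)
print(example)
```

A sequence like this, paired with a true/false label for the target triple, would then serve as one fine-tuning sample for the language model.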
