The LP-BERT model (Li et al., 2023) employs a multi-task learning approach that simultaneously learns contextual and semantic information by sharing an input format across three tasks: Masked Language Model (MLM), Masked Entity Model (MEM), and Masked Relation Model (MRM).
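To make the shared-input idea concrete, here is a minimal sketch of how the three masking tasks could operate over one common sequence layout. The token layout (`[CLS] head [SEP] relation [SEP] tail [SEP]`), masking rate, and function names are illustrative assumptions, not the paper's exact specification.

```python
import random

MASK, SEP, CLS = "[MASK]", "[SEP]", "[CLS]"

def build_input(head_tokens, relation_token, tail_tokens):
    """Assumed shared input format: [CLS] head [SEP] relation [SEP] tail [SEP]."""
    return [CLS] + head_tokens + [SEP] + [relation_token] + [SEP] + tail_tokens + [SEP]

def mask_mlm(tokens, rate=0.15, rng=None):
    """Masked Language Model: randomly mask non-special tokens; return labels."""
    rng = rng or random.Random(0)
    out, labels = [], []
    for tok in tokens:
        if tok not in (CLS, SEP, MASK) and rng.random() < rate:
            out.append(MASK)
            labels.append(tok)          # model must recover the original token
        else:
            out.append(tok)
            labels.append(None)
    return out, labels

def mask_mem(head_tokens, relation_token, tail_tokens, mask_head=False):
    """Masked Entity Model: mask an entire entity span (head or tail)."""
    head = [MASK] * len(head_tokens) if mask_head else head_tokens
    tail = tail_tokens if mask_head else [MASK] * len(tail_tokens)
    return build_input(head, relation_token, tail)

def mask_mrm(head_tokens, relation_token, tail_tokens):
    """Masked Relation Model: mask the relation position."""
    return build_input(head_tokens, MASK, tail_tokens)

# Example triple with a (hypothetical) tokenized entity description
head, rel, tail = ["barack", "obama"], "born_in", ["honolulu", "hawaii"]
print(mask_mrm(head, rel, tail))
# → ['[CLS]', 'barack', 'obama', '[SEP]', '[MASK]', '[SEP]', 'honolulu', 'hawaii', '[SEP]']
```

Because all three objectives consume the same sequence layout, a single encoder can be trained on them jointly, which is the multi-task setup the paragraph above describes.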
