Reference
The paper 'Rewire-then-probe: a contrastive recipe for probing biomedical knowledge of pre-trained language models' by Meng, Z., Liu, F., Shareghi, E., Su, Y., Collins, C., and Collier, N. introduces a contrastive learning approach for probing the biomedical knowledge encoded in pre-trained language models.
Authors
- Meng, Z.
- Liu, F.
- Shareghi, E.
- Su, Y.
- Collins, C.
- Collier, N.
Sources
- Practices, opportunities and challenges in the fusion of knowledge ... (www.frontiersin.org)
Referenced by nodes (1)
- pre-trained language models concept