claim
The authors characterize Jaccard-like Index values of 0.272 and 0.347 as moderate agreement in the context of complex medical-text evaluation, consistent with previously reported ranges of 0.25–0.40 for similar clinical annotation tasks.
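For orientation, the standard Jaccard index on which such a metric is typically based is |A ∩ B| / |A ∪ B| over two annotation sets. The sketch below is illustrative only: the label sets are hypothetical, and the authors' "Jaccard-like Index" may differ in detail from the plain set-based formula shown here.

```python
def jaccard(a: set, b: set) -> float:
    """Standard Jaccard index: |A ∩ B| / |A ∪ B| (1.0 if both sets are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical annotation sets from two evaluators of the same clinical text.
annotator_a = {"fever", "cough", "nausea", "rash", "pain", "fatigue", "edema"}
annotator_b = {"fever", "cough", "nausea", "dyspnea", "vertigo", "anemia", "sepsis"}

# 3 shared labels out of 11 distinct labels → 3/11 ≈ 0.273,
# i.e. in the 0.25–0.40 band described as moderate agreement.
print(round(jaccard(annotator_a, annotator_b), 3))  # → 0.273
```

With sparse multi-label clinical annotations, even partial overlap shrinks the ratio quickly, which is why scores near 0.3 can still reflect meaningful agreement on hard tasks.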

Authors

Sources

Referenced by nodes (1)