Handling Variance of Pretrained Language Models in Grading Evidence in the Medical Literature
Published:
Fajri Koto* and Biaoyan Fang* (2021) Handling Variance of Pretrained Language Models in Grading Evidence in the Medical Literature. In Proceedings of the Australasian Language Technology Association Workshop 2021 (ALTA 2021), virtual.
@inproceedings{koto-fang-2021-alta,
title = "Handling Variance of Pretrained Language Models in Grading Evidence in the Medical Literature",
author = "Koto, Fajri* and
Fang, Biaoyan*",
booktitle = "Proceedings of the Australasian Language Technology Association Workshop 2021 (ALTA 2021)",
year = "2021",
address = "Online",
abstract = "In this paper, we investigate the utility of modern pretrained language models for the evidence grading system in the medical literature based on the ALTA 2021 shared task. We benchmark 1) domain-specific models that are optimized for medical literature and 2) domain-generic models with rich latent discourse representation (i.e. ELECTRA, RoBERTa). Our empirical experiments reveal that these modern pretrained language models suffer from high variance, and the ensemble method can improve the model performance. We found that ELECTRA performs best with an accuracy of 53.6% on the test set, outperforming domain-specific models.",
}
Abstract
In this paper, we investigate the utility of modern pretrained language models for the evidence grading system in the medical literature based on the ALTA 2021 shared task. We benchmark 1) domain-specific models that are optimized for medical literature and 2) domain-generic models with rich latent discourse representation (i.e. ELECTRA, RoBERTa). Our empirical experiments reveal that these modern pretrained language models suffer from high variance, and the ensemble method can improve the model performance. We found that ELECTRA performs best with an accuracy of 53.6% on the test set, outperforming domain-specific models.
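The abstract notes that fine-tuned pretrained models suffer from high variance across runs, and that ensembling improves performance. The paper's exact ensembling procedure is not given here; the following is a minimal sketch of one common approach, majority voting over predictions from several fine-tuning runs with different random seeds (the `majority_vote` helper and the example labels are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model predictions (one list of labels per run)
    into a single label per example by majority vote."""
    ensembled = []
    for per_example in zip(*predictions):
        # Counter.most_common(1) returns the most frequent label;
        # ties are broken by first occurrence.
        label, _ = Counter(per_example).most_common(1)[0]
        ensembled.append(label)
    return ensembled

# Hypothetical predictions from three fine-tuning runs
# (different random seeds) over four evidence-grading examples.
run_a = ["A", "B", "A", "C"]
run_b = ["A", "B", "B", "C"]
run_c = ["B", "B", "A", "C"]
print(majority_vote([run_a, run_b, run_c]))  # ['A', 'B', 'A', 'C']
```

Because each run sees the same data but a different initialization and data order, individual runs can disagree on borderline examples; voting across runs smooths out this seed-dependent variance.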