Clinical Concept and Relation Extraction Using Prompt-based Machine Reading Comprehension

Kavli Affiliate: Cheng Peng

| Authors: Cheng Peng, Xi Yang, Zehao Yu, Jiang Bian, William R. Hogan, Yonghui Wu

| Summary:

Objective: To develop a natural language processing system that solves both
clinical concept extraction and relation extraction in a unified prompt-based
machine reading comprehension (MRC) architecture with good generalizability for
cross-institution applications.
Methods: We formulate both clinical concept extraction and relation
extraction using a unified prompt-based MRC architecture and explore
state-of-the-art transformer models. We compare our MRC models with existing
deep learning models for concept extraction and end-to-end relation extraction
using two benchmark datasets developed by the 2018 National NLP Clinical
Challenges (n2c2; medications and adverse drug events) and the 2022
n2c2 challenge (relations of social determinants of health [SDoH]). We also
evaluate the transfer learning ability of the proposed MRC models in a
cross-institution setting. We perform error analyses and examine how different
prompting strategies affect the performance of MRC models.
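To illustrate the formulation, the following is a minimal sketch of how concept and relation extraction can be cast as prompt-based MRC using an extractive question-answering model from the Hugging Face transformers library. The checkpoint name, the prompt wording, and the two-step relation query are illustrative assumptions, not the authors' exact models or prompting strategies.

```python
# Sketch: clinical concept and relation extraction as prompt-based MRC.
# Assumes an extractive QA head; the checkpoint and prompts are placeholders.
from transformers import pipeline

# Hypothetical choice of extractive QA checkpoint; any QA model could be used.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

note = (
    "The patient was started on lisinopril 10 mg daily and later developed "
    "a persistent dry cough attributed to the medication."
)

# Step 1: concept extraction as MRC -- the prompt (question) encodes the
# concept type, and the predicted answer span is the extracted concept.
drug = qa(question="Which drug is mentioned in the note?", context=note)
print("Drug:", drug["answer"], drug["score"])

# Step 2: end-to-end relation extraction as MRC -- the prompt is conditioned
# on the concept found in step 1, so the answer span yields the related
# adverse drug event and the (drug, ADE) pair in one pass.
ade_question = f"What adverse event was caused by {drug['answer']}?"
ade = qa(question=ade_question, context=note)
print("ADE related to", drug["answer"], ":", ade["answer"], ade["score"])
```

Because each concept type and relation type is expressed as a natural-language prompt rather than a fixed label set, the same QA head can return overlapping spans for different prompts and can be pointed at a new institution's notes without changing the output layer, which is the intuition behind the portability results reported below.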
Results and Conclusion: The proposed MRC models achieve state-of-the-art
performance for clinical concept and relation extraction on the two benchmark
datasets, outperforming previous non-MRC transformer models. GatorTron-MRC
achieves the best strict and lenient F1-scores for concept extraction,
outperforming previous deep learning models on the two datasets by 1%~3% and
0.7%~1.3%, respectively. For end-to-end relation extraction, GatorTron-MRC and
BERT-MIMIC-MRC achieve the best F1-scores, outperforming previous deep learning
models by 0.9%~2.4% and 10%~11%, respectively. For cross-institution
evaluation, GatorTron-MRC outperforms traditional GatorTron by 6.4% and 16% for
the two datasets, respectively. The proposed method is better at handling
nested/overlapped concepts and extracting relations, and it shows good
portability for cross-institution applications.
