Kavli Affiliate: Cheng Peng
| First 5 Authors: Cheng Peng, Zehao Yu, Kaleb E Smith, Wei-Hsuan Lo-Ciganic, Jiang Bian
| Summary:
Progress in natural language processing (NLP) driven by large language models
(LLMs) has greatly improved patient information extraction from clinical
narratives. However, most fine-tuning-based methods have limited
transfer-learning ability in cross-domain applications. This study
proposed a novel approach that employs a soft prompt-based learning
architecture, which introduces trainable prompts to guide LLMs toward desired
outputs. We examined two LLM architectures, the encoder-only GatorTron and the
decoder-only GatorTronGPT, and evaluated their performance on
the extraction of social determinants of health (SDoH) using a
cross-institution dataset from the 2022 n2c2 challenge and a cross-disease
dataset from the University of Florida (UF) Health. The results show that
decoder-only LLMs with prompt tuning achieved better performance in
cross-domain applications. GatorTronGPT achieved the best F1 scores for both
datasets, outperforming the traditionally fine-tuned GatorTron by 8.9% and
21.8% in the cross-institution setting and by 5.5% and 14.5% in the
cross-disease setting.
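
The soft prompt-tuning idea described in the summary, prepending trainable continuous prompt vectors to the input while the LLM's own weights stay frozen, can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the paper's implementation: the wrapper class, the prompt length of 20, and the use of a generic Hugging Face causal LM as a stand-in for GatorTronGPT are all hypothetical.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM


class SoftPromptModel(nn.Module):
    """Minimal soft prompt-tuning wrapper: only the prompt embeddings train."""

    def __init__(self, model_name: str, prompt_length: int = 20):
        super().__init__()
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        # Freeze every weight of the base LLM.
        for param in self.model.parameters():
            param.requires_grad = False
        embed_dim = self.model.get_input_embeddings().embedding_dim
        # Trainable soft prompt: prompt_length continuous "virtual tokens".
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, input_ids, attention_mask, labels=None):
        batch_size = input_ids.size(0)
        # Embed the real tokens, then prepend the soft prompt vectors.
        token_embeds = self.model.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        # Extend the attention mask to cover the prepended prompt positions.
        prompt_mask = torch.ones(batch_size, self.soft_prompt.size(0),
                                 device=attention_mask.device,
                                 dtype=attention_mask.dtype)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # Mask the prompt positions out of the loss (-100 is ignored).
            prompt_labels = torch.full(
                (batch_size, self.soft_prompt.size(0)), -100,
                device=labels.device, dtype=labels.dtype)
            labels = torch.cat([prompt_labels, labels], dim=1)
        return self.model(inputs_embeds=inputs_embeds,
                          attention_mask=attention_mask, labels=labels)


# Usage sketch: only the soft prompt is handed to the optimizer.
# "gpt2" is a placeholder; GatorTronGPT itself is not publicly loadable this way.
model = SoftPromptModel("gpt2", prompt_length=20)
optimizer = torch.optim.AdamW([model.soft_prompt], lr=1e-3)
```

Because only `soft_prompt` has `requires_grad=True`, the optimizer updates a few thousand parameters instead of the full model, which is what makes prompt tuning attractive for the cross-institution and cross-disease transfer settings the study evaluates.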
| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3