Automatic Summarization of Doctor-Patient Encounter Dialogues Using Large Language Model through Prompt Tuning

Kavli Affiliate: Cheng Peng

| First 5 Authors: Mengxian Lyu, Cheng Peng, Xiaohan Li, Patrick Balian, Jiang Bian

| Summary:

Automatic text summarization (ATS) is an emerging technology to assist
clinicians in providing continuous and coordinated care. This study presents an
approach to summarizing doctor-patient dialogues using generative large
language models (LLMs). We developed prompt-tuning algorithms to instruct
generative LLMs to summarize clinical text. We examined prompt-tuning
strategies, the size of soft prompts, and the few-shot learning ability of
GatorTronGPT, a generative clinical LLM with up to 20 billion parameters,
developed using 277 billion words of clinical and general English text. We
compared GatorTronGPT with a previous solution based on fine-tuning a widely
used T5 model, using MTS-DIALOG, a clinical benchmark dataset. The
experimental results show that the GatorTronGPT-20B model achieved the best
performance on all evaluation metrics. The proposed solution has a low
computing cost because the LLM parameters are not updated during prompt
tuning. This study demonstrates the efficiency of generative clinical LLMs for
clinical ATS through prompt tuning.
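The core mechanism the summary describes is soft prompt tuning: a small set of trainable virtual-token embeddings is prepended to the input while the LLM's own parameters stay frozen, which is why the computing cost stays low. The sketch below illustrates this setup using Hugging Face's peft library; it is an assumption-laden illustration, not the paper's implementation. GatorTronGPT is not loaded here ("gpt2" stands in as a placeholder), and the prompt text and virtual-token count are illustrative, not the study's settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

# Placeholder model; the paper uses GatorTronGPT (up to 20B parameters).
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

# Soft prompt: trainable virtual tokens prepended to every input.
# The base model's weights stay frozen; only these embeddings train.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Summarize the doctor-patient dialogue:",  # hypothetical prompt
    num_virtual_tokens=20,  # the "size of soft prompts" is a tunable choice
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # reports only the soft-prompt parameters as trainable

# One training-style forward pass: the loss gradient flows only into
# the soft prompt embeddings, not the frozen LLM.
inputs = tokenizer("Dialogue: ... Summary:", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```

Because only the virtual-token embeddings are updated, the same frozen LLM can serve many tasks, each with its own small learned prompt, which matches the low-cost property the abstract highlights.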

| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3