nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance

Kavli Affiliate: Jing Wang

| First 5 Authors: Yunxiang Li, Bowen Jing, Zihan Li, Jing Wang, You Zhang

| Summary:

Automatic segmentation of medical images is crucial in modern clinical
workflows. The Segment Anything Model (SAM) has emerged as a versatile tool for
image segmentation without domain-specific training, but it requires human
prompts and can underperform in specialized domains. Traditional models like
nnUNet perform automatic segmentation during inference and are effective in
specific domains but need extensive domain-specific training. To combine the
strengths of foundational and domain-specific models, we propose nnSAM,
integrating SAM’s robust feature extraction with nnUNet’s automatic
configuration to enhance segmentation accuracy on small datasets. nnSAM
combines two main strategies: fusing SAM's feature extraction with nnUNet's
domain-specific adaptation, and adding a boundary shape supervision loss,
based on level set functions and curvature calculations, that learns
anatomical shape priors from limited data. We evaluated
nnSAM on four segmentation tasks: brain white matter, liver, lung, and heart
segmentation. Our method outperformed the compared baselines, achieving the
highest Dice score of 82.77% and the lowest average surface distance (ASD) of
1.14 mm in brain white matter segmentation with 20 training samples, versus
nnUNet's Dice score of 79.25% and ASD of 1.36 mm. A sample-size study
highlighted nnSAM's advantage with fewer training
samples. Our results demonstrate significant improvements in segmentation
performance with nnSAM, showcasing its potential for small-sample learning in
medical image segmentation.
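
As a rough illustration of the first strategy, the sketch below fuses a frozen SAM image encoder with a trainable CNN (nnUNet-style) encoder by channel-wise concatenation. The class name `FusedEncoder`, the 1x1 fusion convolution, and the bilinear resampling are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedEncoder(nn.Module):
    """Illustrative fusion of a frozen SAM image encoder with a trainable
    CNN (nnUNet-style) encoder via channel-wise concatenation."""

    def __init__(self, sam_image_encoder: nn.Module, cnn_encoder: nn.Module,
                 sam_channels: int, cnn_channels: int, out_channels: int):
        super().__init__()
        self.sam = sam_image_encoder.eval()   # frozen foundation features
        for p in self.sam.parameters():
            p.requires_grad_(False)
        self.cnn = cnn_encoder                # trained on the target domain
        # 1x1 convolution to mix the concatenated feature channels.
        self.fuse = nn.Conv2d(sam_channels + cnn_channels, out_channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            sam_feat = self.sam(x)            # (B, C_sam, h, w)
        cnn_feat = self.cnn(x)                # (B, C_cnn, H, W)
        # Resample SAM features to the CNN feature grid before concatenation.
        sam_feat = F.interpolate(sam_feat, size=cnn_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([sam_feat, cnn_feat], dim=1))
```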
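
The boundary shape supervision loss can likewise be sketched. Assuming the level set is represented as a signed distance map and curvature is approximated with finite differences, a minimal version might look as follows; the `tanh` surrogate for the predicted distance map and the 0.1 curvature weight are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def curvature(phi: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Curvature kappa = div(grad(phi) / |grad(phi)|) of a 2D level set phi,
    approximated with central finite differences (wrap-around boundaries).

    phi: (B, 1, H, W) level set function, e.g. a signed distance map.
    """
    # First-order central differences along x (dim 3) and y (dim 2).
    phi_x = (torch.roll(phi, -1, dims=3) - torch.roll(phi, 1, dims=3)) / 2
    phi_y = (torch.roll(phi, -1, dims=2) - torch.roll(phi, 1, dims=2)) / 2
    # Second-order differences.
    phi_xx = torch.roll(phi, -1, dims=3) - 2 * phi + torch.roll(phi, 1, dims=3)
    phi_yy = torch.roll(phi, -1, dims=2) - 2 * phi + torch.roll(phi, 1, dims=2)
    phi_xy = (torch.roll(phi_x, -1, dims=2) - torch.roll(phi_x, 1, dims=2)) / 2
    # Closed form for the curvature of the implicit curve phi = 0.
    num = phi_xx * phi_y ** 2 - 2 * phi_x * phi_y * phi_xy + phi_yy * phi_x ** 2
    den = (phi_x ** 2 + phi_y ** 2).clamp_min(eps) ** 1.5
    return num / den

def shape_supervision_loss(pred_logits: torch.Tensor,
                           gt_sdf: torch.Tensor) -> torch.Tensor:
    """Hypothetical boundary-shape loss: match the level set (a signed
    distance transform of the label) and its curvature."""
    pred_sdf = torch.tanh(pred_logits)          # soft surrogate for an SDF
    level_term = F.mse_loss(pred_sdf, gt_sdf)
    curv_term = F.mse_loss(curvature(pred_sdf), curvature(gt_sdf))
    return level_term + 0.1 * curv_term         # weighting is an assumption
```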

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
