Kavli Affiliate: Jing Wang
| First 5 Authors: Yunxiang Li, Bowen Jing, Xiang Feng, Zihan Li, Yongbo He
| Summary:
The recent development of foundation models in computer vision, especially
the Segment Anything Model (SAM), enables scalable, domain-agnostic image
segmentation that can serve as a general-purpose segmentation tool. In parallel, the
field of medical image segmentation has benefited significantly from
specialized neural networks such as nnUNet, which is trained on
domain-specific datasets and automatically configures its architecture to suit
specific segmentation challenges. To combine the advantages of foundation
models and domain-specific models, we present nnSAM, which synergistically
integrates the SAM model with the nnUNet model to achieve more accurate and
robust medical image segmentation. The nnSAM model leverages the powerful and
robust feature extraction capabilities of SAM, while harnessing the automatic
configuration capabilities of nnUNet to promote dataset-tailored learning. Our
comprehensive evaluation of the nnSAM model across training sets of different
sizes shows that it enables few-shot learning, which is highly relevant for
medical image segmentation, where high-quality annotated data can be scarce and costly
to obtain. By melding the strengths of both its predecessors, nnSAM positions
itself as a potential new benchmark in medical image segmentation, offering a
tool that combines broad applicability with specialized efficiency. The code is
available at https://github.com/Kent0n-Li/Medical-Image-Segmentation.
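The core idea of pairing a foundation model's embeddings with a task-specific network can be illustrated with a minimal sketch. This is a hypothetical illustration of one common fusion strategy (channel-wise concatenation of feature maps); the function name, shapes, and fusion choice are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def fuse_features(foundation_embedding, task_features):
    """Fuse two feature maps by concatenating along the channel axis.

    Both inputs are assumed to have shape (channels, H, W) with
    matching spatial dimensions. This mirrors, in spirit, combining
    a frozen foundation-model encoder's output with a task-specific
    encoder's output before decoding; the real nnSAM design may differ.
    """
    assert foundation_embedding.shape[1:] == task_features.shape[1:], \
        "spatial dimensions must match before channel-wise fusion"
    return np.concatenate([foundation_embedding, task_features], axis=0)

# Toy example: a 256-channel embedding fused with a 32-channel feature map.
sam_emb = np.zeros((256, 64, 64))   # hypothetical SAM encoder output
unet_feat = np.zeros((32, 64, 64))  # hypothetical nnUNet encoder output
fused = fuse_features(sam_emb, unet_feat)
print(fused.shape)  # (288, 64, 64)
```

A decoder operating on the fused map then sees both the broadly pretrained representation and the dataset-tailored one, which is the intuition behind combining the two predecessors.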
| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3