Kavli Affiliate: Jing Wang
| First 5 Authors: Jiangyun Li, Wenxuan Wang, Chen Chen, Tianxiang Zhang, Sen Zha
| Summary:
The Transformer, which benefits from global (long-range) information modeling via the
self-attention mechanism, has recently been successful in natural language processing
and computer vision. Convolutional Neural Networks, while effective at capturing local
features, have difficulty modeling explicit long-distance dependencies over the global
feature space. However, both local and global features are crucial for dense prediction
tasks, especially for 3D medical image segmentation. In this paper, we make a further
attempt to exploit the Transformer within a 3D CNN for volumetric medical image
segmentation and propose a novel network named TransBTSV2, based on an encoder-decoder
structure. Unlike TransBTS, the proposed TransBTSV2 is not limited to brain tumor
segmentation (BTS) but targets general medical image segmentation, providing a stronger
and more efficient 3D baseline for volumetric segmentation of medical images. As a
hybrid CNN-Transformer architecture, TransBTSV2 achieves accurate segmentation of
medical images without any pre-training, combining the strong inductive bias of CNNs
with the powerful global context modeling ability of the Transformer. By redesigning
the internal structure of the Transformer block and introducing a Deformable Bottleneck
Module to capture shape-aware local details, a highly efficient architecture with
superior performance is obtained. Extensive experimental results on four medical image
datasets (BraTS 2019, BraTS 2020, LiTS 2017 and KiTS 2019) demonstrate that TransBTSV2
achieves comparable or better results than state-of-the-art methods for the
segmentation of brain, liver and kidney tumors. Code will be publicly available at
https://github.com/Wenxuan-1119/TransBTS.
| Search Query: ArXiv Query: search_query=au:”Jing Wang”&id_list=&start=0&max_results=10
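
Below is a minimal, self-contained PyTorch sketch of the generic hybrid design the summary describes: a 3D CNN encoder for local features, a Transformer applied to the flattened bottleneck tokens for global context, and a 3D CNN decoder for voxel-wise prediction. The class name, channel widths, layer counts and demo input size are illustrative assumptions, not the authors' TransBTSV2 implementation; the actual code is at the repository linked above.

```python
# Illustrative sketch of a hybrid 3D CNN-Transformer encoder-decoder for
# volumetric segmentation. All names, widths and layer counts are assumptions,
# NOT the authors' TransBTSV2 code (see the linked repository for that).
import torch
import torch.nn as nn


class HybridCNNTransformer3D(nn.Module):
    def __init__(self, in_ch=4, num_classes=4, base_ch=16, embed_dim=128,
                 num_layers=4, num_heads=8):
        super().__init__()
        # 3D CNN encoder: captures local features and downsamples 8x.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, base_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base_ch * 2, embed_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer over the flattened bottleneck: models global context.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # 3D CNN decoder: upsamples back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(embed_dim, base_ch * 2, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(base_ch * 2, base_ch, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(base_ch, num_classes, 2, stride=2),
        )

    def forward(self, x):
        feat = self.encoder(x)                    # (B, C, D/8, H/8, W/8)
        b, c, d, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, N, C), N = (D/8)*(H/8)*(W/8)
        tokens = self.transformer(tokens)         # global self-attention over tokens
        feat = tokens.transpose(1, 2).reshape(b, c, d, h, w)
        return self.decoder(feat)                 # (B, num_classes, D, H, W)


if __name__ == "__main__":
    # e.g. a 4-modality MRI crop of size 64^3 (BraTS-style multi-modal input)
    x = torch.randn(1, 4, 64, 64, 64)
    print(HybridCNNTransformer3D()(x).shape)  # torch.Size([1, 4, 64, 64, 64])
```

The sketch only shows the overall CNN-encoder / Transformer-bottleneck / CNN-decoder flow; it omits the paper's specific contributions such as the redesigned Transformer block and the Deformable Bottleneck Module.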