Kavli Affiliate: Jing Wang | First 5 Authors: Jiangyun Li, Wenxuan Wang, Chen Chen, Tianxiang Zhang, Sen Zha | Summary: Transformers, which benefit from global (long-range) information modeling via the self-attention mechanism, have recently been successful in natural language processing and computer vision. Convolutional Neural Networks, while capable of capturing local features, have difficulty modeling explicit long-distance […]
Title: TransBTSV2: Towards Better and More Efficient Volumetric Segmentation of Medical Images