Kavli Affiliate: Jing Wang
| First 5 Authors: Zixue Zeng, Xiaoyan Zhao, Matthew Cartier, Tong Yu, Jing Wang
| Summary:
We introduce a novel segmentation-aware joint training framework called
generative reinforcement network (GRN) that integrates segmentation loss
feedback to optimize both image generation and segmentation performance in a
single stage. An image enhancement technique called segmentation-guided
enhancement (SGE) is also developed, where the generator produces images
tailored specifically for the segmentation model. Two variants of GRN were
developed: GRN for sample-efficient learning (GRN-SEL) and GRN for
semi-supervised learning (GRN-SSL). GRN’s performance was evaluated using a
dataset of 69 fully annotated 3D ultrasound scans from 29 subjects. The
annotations included six anatomical structures: dermis, superficial fat,
superficial fascial membrane (SFM), deep fat, deep fascial membrane (DFM), and
muscle. Our results show that GRN-SEL with SGE reduces labeling effort by up
to 70% while achieving a 1.98% improvement in the Dice Similarity Coefficient
(DSC) compared to models trained on fully labeled datasets. GRN-SEL alone
reduces labeling effort by 60%, GRN-SSL with SGE decreases labeling
requirements by 70%, and GRN-SSL alone by 60%, all while maintaining
performance comparable to fully supervised models. These findings suggest that
the GRN framework can optimize segmentation performance with significantly less
labeled data, offering a scalable and efficient solution for ultrasound image
analysis and reducing the burden of data annotation.
| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3
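
The abstract's core idea, a single-stage update in which segmentation loss feeds back into the image generator, can be illustrated with a minimal PyTorch-style sketch. Everything below (the toy Generator and SegNet architectures, the L1 reconstruction term, and the lambda_seg weight) is an illustrative assumption, not the paper's actual implementation:

# Minimal sketch (not the authors' code) of single-stage joint training in
# which segmentation loss backpropagates into the generator, as the abstract
# describes. Architectures and loss terms are assumptions for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy image-to-image generator standing in for GRN's generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class SegNet(nn.Module):
    """Toy segmentation head predicting logits for six tissue classes."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

gen, seg = Generator(), SegNet()
# One optimizer over both parameter sets makes the training single-stage.
opt = torch.optim.Adam(list(gen.parameters()) + list(seg.parameters()), lr=1e-4)
recon_loss = nn.L1Loss()          # keeps generated images close to the input
seg_loss = nn.CrossEntropyLoss()  # segmentation feedback term
lambda_seg = 1.0                  # assumed weighting between the two terms

def train_step(image, mask):
    """One joint update: segmentation loss flows through the segmentation
    model and back into the generator (the SGE-style coupling)."""
    opt.zero_grad()
    enhanced = gen(image)          # image tailored for the segmentation model
    logits = seg(enhanced)         # segment the generated image
    loss = recon_loss(enhanced, image) + lambda_seg * seg_loss(logits, mask)
    loss.backward()                # gradients reach both networks
    opt.step()
    return loss.item()

# Usage with dummy data: two 64x64 ultrasound-like slices and label maps.
img = torch.randn(2, 1, 64, 64)
msk = torch.randint(0, 6, (2, 64, 64))
print(train_step(img, msk))

The single optimizer over both parameter sets is what makes the sketch single-stage: one backward pass jointly updates generator and segmenter, so the generator is pushed to produce images that are easier to segment rather than merely faithful reconstructions.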