Kavli Affiliate: Cheng Peng
| First 5 Authors: Cheng Peng, Haofu Liao, Gina Wong, Jiebo Luo, Shaohua Kevin Zhou
| Summary:
A radiograph visualizes the internal anatomy of a patient through the use of
X-rays, which project 3D information onto a 2D plane. Radiograph analysis
therefore requires physicians to relate prior knowledge of 3D human anatomy to
2D radiographs. Synthesizing novel radiographic views within a small range of
viewing angles can assist physicians in interpreting anatomy more reliably;
however, radiograph view synthesis is heavily ill-posed and lacks both the
paired data and the differentiable operations needed to leverage
learning-based approaches. To address these problems, we use Computed
Tomography (CT) for radiograph simulation and design a differentiable
projection algorithm, which enables us to achieve geometrically consistent
transformations between the radiography and CT domains. Our method, XraySyn,
can synthesize novel views on real radiographs through a combination of
realistic simulation and finetuning on real radiographs. To the best of our
knowledge, this is the first work on radiograph view synthesis. We show that,
by gaining an understanding of radiography in 3D space, our method can be
applied to radiograph bone extraction and suppression without ground-truth
bone labels.
| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=10
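
The summary's key technical ingredient is a differentiable projection from the
CT domain to the radiograph domain. The sketch below is not the XraySyn
algorithm itself; it is a minimal parallel-beam approximation in PyTorch,
assuming a Beer-Lambert attenuation model, meant only to illustrate how such a
projection can stay differentiable so that gradients reach the 3D volume inside
a learning pipeline. The function name simulate_radiograph and all tensor
shapes are illustrative assumptions, not part of the paper.

    # Minimal sketch: differentiable radiograph simulation from a CT volume.
    # Parallel-beam approximation with Beer-Lambert attenuation (illustrative,
    # not the XraySyn projection algorithm).
    import torch

    def simulate_radiograph(ct_volume: torch.Tensor, axis: int = 0) -> torch.Tensor:
        """Project a 3D attenuation volume onto a 2D plane.

        ct_volume: (D, H, W) tensor of non-negative attenuation coefficients.
        axis: volume axis along which rays travel (parallel-beam assumption).
        Returns a 2D tensor of simulated transmitted intensities in (0, 1].
        """
        # Line integral of attenuation along each ray (a simple sum here).
        line_integrals = ct_volume.sum(dim=axis)
        # Beer-Lambert law: transmitted intensity decays exponentially.
        return torch.exp(-line_integrals)

    # Usage: gradients flow back to the volume, so the projection can sit
    # inside a training loop.
    volume = (torch.rand(64, 64, 64) * 0.05).requires_grad_(True)
    radiograph = simulate_radiograph(volume)
    radiograph.mean().backward()
    print(radiograph.shape, volume.grad is not None)  # torch.Size([64, 64]) True

Because every operation here (sum, exp) is differentiable, a simulated
radiograph produced this way can serve as an intermediate layer or supervision
signal when relating CT-derived images to real radiographs.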