Physically-Based Inverse Rendering Framework for PET Image Reconstruction

Kavli Affiliate: David A. Muller

| First 5 Authors: Yixin Li

| Summary:

Differentiable rendering has been widely adopted in computer graphics as a
powerful approach to inverse problems, enabling efficient gradient-based
optimization by differentiating the image formation process with respect to
millions of scene parameters. Inspired by this paradigm, we propose a
physically-based inverse rendering (IR) framework for PET image reconstruction,
the first such platform built on Dr.Jit. Our method integrates Monte Carlo
sampling with an analytical projector in the forward rendering process to
accurately model photon transport and the physical processes in
the PET system. The emission image is iteratively optimized using voxel-wise
gradients obtained via automatic differentiation, eliminating the need for
manually derived update equations (two illustrative sketches follow this
summary). The proposed framework was evaluated using
both phantom studies and clinical brain PET data acquired from a Siemens
Biograph mCT scanner. With the Maximum Likelihood Expectation
Maximization (MLEM) algorithm implemented in both the CASToR toolkit and our IR
framework, the IR reconstruction achieved a higher signal-to-noise ratio (SNR)
and improved image quality compared with the CASToR reconstructions. In a
clinical evaluation against reconstructions from the Siemens Biograph mCT
platform, the IR
reconstruction yielded higher hippocampal standardized uptake value ratios
(SUVR) and gray-to-white matter ratios (GWR), indicating enhanced tissue
contrast and the potential for more accurate tau localization and Braak staging
in Alzheimer’s disease assessment. The proposed IR framework offers a
physically interpretable and extensible platform for high-fidelity PET image
reconstruction, demonstrating strong performance in both phantom and real-world
scenarios.
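
The summary describes the core mechanism only at a high level: a forward rendering step that estimates line-of-response (LOR) integrals by Monte Carlo sampling, and voxel-wise gradients of a Poisson likelihood obtained by automatic differentiation. The first sketch below illustrates that idea. It is written in JAX purely as a stand-in for the paper's Dr.Jit implementation; the function names, the nearest-neighbour image lookup, the toy LOR geometry, and the uniform sampling scheme are assumptions made for illustration, not the authors' code.

```python
# Hypothetical sketch (not the authors' Dr.Jit code): a differentiable 2D PET
# forward model. LOR integrals are estimated by Monte Carlo sampling of points
# along each line of response, and the Poisson negative log-likelihood is
# differentiated with respect to every voxel via automatic differentiation.
import jax
import jax.numpy as jnp

def sample_lor(image, p0, p1, key, n_samples=64):
    """Monte Carlo estimate of the line integral of `image` from p0 to p1.

    Points are drawn uniformly along the segment and looked up with
    nearest-neighbour interpolation; the mean value times the segment length
    approximates the line integral. (The paper's analytical projector, e.g. a
    Siddon/Joseph-style tracer, would be more accurate; this is illustrative.)
    """
    t = jax.random.uniform(key, (n_samples,))
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]            # (n_samples, 2)
    ij = jnp.clip(jnp.round(pts).astype(jnp.int32), 0, jnp.array(image.shape) - 1)
    vals = image[ij[:, 0], ij[:, 1]]
    return jnp.mean(vals) * jnp.linalg.norm(p1 - p0)

def forward_project(image, lor_starts, lor_ends, key):
    """Expected counts for every LOR (small constant avoids log(0))."""
    keys = jax.random.split(key, lor_starts.shape[0])
    proj = jax.vmap(sample_lor, in_axes=(None, 0, 0, 0))(image, lor_starts, lor_ends, keys)
    return proj + 1e-6

def poisson_nll(image, lor_starts, lor_ends, counts, key):
    """Poisson negative log-likelihood of the measured counts."""
    ybar = forward_project(image, lor_starts, lor_ends, key)
    return jnp.sum(ybar - counts * jnp.log(ybar))

# Voxel-wise gradients via automatic differentiation: no hand-derived
# update equation is required.
grad_fn = jax.jit(jax.grad(poisson_nll))

# Tiny synthetic problem: a 32x32 emission image and a handful of LORs.
key = jax.random.PRNGKey(0)
image = jnp.ones((32, 32))
lor_starts = jnp.array([[0.0, 5.0], [0.0, 16.0], [5.0, 0.0]])
lor_ends = jnp.array([[31.0, 25.0], [31.0, 16.0], [25.0, 31.0]])
counts = jnp.array([40.0, 55.0, 38.0])

g = grad_fn(image, lor_starts, lor_ends, counts, key)   # (32, 32) voxel-wise gradient
```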
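
The summary also states that the framework reproduces MLEM without hand-derived update equations. The second sketch (again a hypothetical JAX toy with a small dense system matrix, not the CASToR, Siemens, or Dr.Jit implementation) shows why that is plausible: the classical multiplicative MLEM update equals a gradient step on the Poisson negative log-likelihood, preconditioned by the current image divided by the sensitivity image, so the iteration can be driven entirely by an automatic-differentiation gradient.

```python
# Hypothetical illustration: the classical MLEM update recovered from an
# automatic-differentiation gradient. For the Poisson NLL
#   f(x) = sum(A x - y * log(A x)),
# the gradient is  grad f = A^T 1 - A^T (y / (A x)),  and the textbook step
#   x_{k+1} = (x_k / s) * A^T (y / (A x_k)),  with sensitivity s = A^T 1,
# is exactly the preconditioned gradient step  x_{k+1} = x_k - (x_k / s) * grad f.
import jax
import jax.numpy as jnp

def poisson_nll(x, A, y):
    ybar = A @ x + 1e-9
    return jnp.sum(ybar - y * jnp.log(ybar))

def mlem_step_autodiff(x, A, y):
    """One MLEM iteration written as a preconditioned gradient step."""
    s = A.T @ jnp.ones(A.shape[0])          # sensitivity image
    g = jax.grad(poisson_nll)(x, A, y)      # voxel-wise gradient via AD
    return x - (x / s) * g

def mlem_step_classic(x, A, y):
    """The hand-derived multiplicative MLEM update, for comparison."""
    s = A.T @ jnp.ones(A.shape[0])
    return (x / s) * (A.T @ (y / (A @ x + 1e-9)))

key = jax.random.PRNGKey(1)
A = jax.random.uniform(key, (20, 8)) + 0.1    # toy system matrix (LORs x voxels)
x_true = jnp.arange(1.0, 9.0)
y = A @ x_true                                # noise-free "measured" counts

x = jnp.ones(8)
for _ in range(50):
    x = mlem_step_autodiff(x, A, y)           # x drifts toward x_true

# The two update rules agree to numerical precision.
print(jnp.allclose(mlem_step_autodiff(x, A, y), mlem_step_classic(x, A, y), rtol=1e-4))
```

In the inverse rendering setting, the gradient would instead come from differentiating the Monte Carlo forward model of the previous sketch, while the update rule itself stays the same.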

| Search Query: ArXiv Query: search_query=au:"David A. Muller"&id_list=&start=0&max_results=3
