"Motion Correction Strategies in Dynamic PET" by Xueqi Guo

Date of Award

Spring 2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Biomedical Engineering (ENAS)

First Advisor

Liu, Chi

Abstract

Dynamic positron emission tomography (PET) captures tracer kinetics over time, making it more informative than static PET for oncologic and cardiovascular measurements. Parametric imaging can extract physiologically meaningful parameters from whole-body dynamic PET images using compartmental kinetic modeling, and absolute myocardial blood flow (MBF) estimation from cardiac dynamic PET provides incremental information on heart function that cannot be obtained from conventional relative myocardial perfusion imaging. However, over the relatively long duration of a dynamic PET scan, inevitable subject motion introduces inter-frame mismatch that seriously degrades parametric imaging and MBF quantification. The change in tracer distribution across dynamic frames poses an additional, significant challenge to motion correction. External motion tracking systems have been implemented in recent studies, but the additional hardware setup time they require holds back their clinical application. Data-driven joint motion estimation and correction frameworks that need no additional devices are preferable, but these approaches require list-mode data and have not yet been fully explored in dynamic PET with rapidly changing radiotracer distributions. This dissertation presents advanced conventional and deep learning-based frame registration methods that address the challenge of significant tracer distribution differences, correcting inter-frame mismatch and improving subsequent parametric quantification.

For whole-body 2-deoxy-2-[18F]fluoro-D-glucose (FDG) scans, we first proposed a conventional non-rigid frame registration method with an intensity cutoff, which successfully aligned the dynamic frames and reduced the fitting error in the subsequent parametric images. Second, to further improve registration performance and reduce computation time, we proposed an unsupervised deep learning-based method with spatial-temporal analysis; the trained model significantly improves motion estimation accuracy and runs inference approximately 460 times faster than the conventional method. Third, to directly optimize tracer kinetics and parametric fitting during motion correction, we proposed a novel penalty term that regularizes Patlak fitting in addition to the image similarity loss; with this Patlak loss, we demonstrated improved motion correction results and generalization ability.

For cardiac rubidium-82 (82Rb) scans, to address the challenge of rapid tracer kinetics, we proposed a generative method with both temporal and anatomical guidance that converts early frames into images whose tracer distribution resembles that of the last (reference) frame, so that existing motion correction methods become directly applicable. The network translates early frames into results highly similar to the true last frame, and applying both conventional and deep learning-based registration methods to the converted frames enhances their performance. Lastly, we discuss potential extensions, including deep learning-based intra-frame motion correction and a generative method for standardized uptake value (SUV) uptake-time correction. Overall, this dissertation presents multiple novel approaches to inter-frame motion correction that address the rapid tracer kinetics challenge and improve subsequent parametric quantification.
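For readers unfamiliar with the Patlak fitting referenced above: Patlak graphical analysis models an irreversibly trapped tracer such as FDG, for which the ratio of tissue activity C_T(t) to plasma input C_p(t) becomes linear in "Patlak time" after an equilibration time t*. The standard model (given here in LaTeX notation as textbook background, not as a result of the dissertation) is

\[
\frac{C_T(t)}{C_p(t)} \;=\; K_i \, \frac{\int_0^{t} C_p(\tau)\, d\tau}{C_p(t)} \;+\; V_b ,
\qquad t > t^{*} ,
\]

where the slope K_i is the net influx rate mapped in parametric Patlak imaging and the intercept V_b reflects the effective distribution volume. Inter-frame motion scatters a voxel's points off this line, which is why a Patlak fitting-error penalty can serve as a motion-correction signal.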
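The sketch below illustrates, under stated assumptions, how such a Patlak-regularized registration loss could be structured: frames are taken as a (frames x voxels) array, and all names (patlak_penalty, total_loss, lambda_patlak) are hypothetical, not taken from the dissertation's implementation.

import numpy as np

def patlak_penalty(frames, input_func, frame_times, t_star_idx):
    """Mean squared residual of a per-voxel linear Patlak fit.

    frames      -- (T, N) array: motion-corrected frame activity per voxel
    input_func  -- (T,) plasma input function C_p at frame mid-times (assumed > 0 after t*)
    frame_times -- (T,) frame mid-times in consistent units
    t_star_idx  -- index of the first frame after the equilibration time t*
    """
    # Running integral of C_p via the trapezoidal rule (one value per frame).
    cum_cp = np.concatenate(([0.0], np.cumsum(
        0.5 * (input_func[1:] + input_func[:-1]) * np.diff(frame_times))))
    x = cum_cp[t_star_idx:] / input_func[t_star_idx:]        # Patlak abscissa
    y = frames[t_star_idx:] / input_func[t_star_idx:, None]  # Patlak ordinate
    # Closed-form least-squares line fit per voxel: y ~= Ki * x + Vb.
    A = np.stack([x, np.ones_like(x)], axis=1)               # (T', 2) design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)             # (2, N): Ki and Vb
    residuals = y - A @ coef
    return float(np.mean(residuals ** 2))

def total_loss(warped_frames, reference, input_func, frame_times,
               t_star_idx, lambda_patlak=0.1):
    """Image-similarity term (plain MSE here) plus the Patlak regularizer."""
    similarity = float(np.mean((warped_frames - reference[None]) ** 2))
    return similarity + lambda_patlak * patlak_penalty(
        warped_frames, input_func, frame_times, t_star_idx)

In an actual deep learning registration network, the similarity term would typically be a differentiable metric such as local normalized cross-correlation, and the fit would be expressed in autograd-compatible tensor operations so the penalty can backpropagate into the estimated deformation fields.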
