Diffusion-based multimodal medical image fusion
Abstract
In recent years, with the continuous development of medical imaging technology, image fusion techniques have been widely applied in medical image analysis. Traditional fusion methods rely on manually designed feature extraction, which limits their ability to understand and match image semantic information and prevents them from fully exploiting the information in multimodal images. This paper investigates a diffusion-based multimodal image fusion method. The method progressively learns the joint features of multi-channel images in the latent space with a diffusion model, overcoming the limited learning capability of a single end-to-end network. It tailors the reverse denoising process to the task of multimodal medical image fusion and generates high-quality fused images. Two modality discriminators are incorporated to strengthen the denoising network's understanding of modality-specific features and to fully exploit the complementary information between imaging modalities. Experiments on the AANLIB dataset demonstrate that the proposed method achieves satisfactory fusion results.
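The abstract describes the architecture only at a high level. The following is a minimal, illustrative PyTorch sketch of how a diffusion denoiser over a two-channel (two-modality) input could be trained together with two modality discriminators; it is not the paper's implementation. The network sizes, noise schedule, fusion read-out (channel averaging), and loss weighting are assumptions made purely for illustration.

```python
# Illustrative sketch only: diffusion denoising over a two-modality input plus
# adversarial feedback from two modality discriminators. All hyperparameters
# and network shapes are placeholder assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise):
    """Forward diffusion: noise the concatenated multimodal input at step t."""
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

class Denoiser(nn.Module):
    """Toy denoising network on 2-channel (two-modality) images; predicts the noise."""
    def __init__(self, ch=2, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + 1, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, ch, 3, padding=1),
        )

    def forward(self, x_t, t):
        # Broadcast the normalized timestep as an extra conditioning channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

class ModalityDiscriminator(nn.Module):
    """Patch-style discriminator judging whether a fused image preserves one modality."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width * 2, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def training_step(denoiser, d_mri, d_pet, mri, pet, adv_weight=0.01):
    """One generator-side step: denoising loss + adversarial terms from both discriminators."""
    x0 = torch.cat([mri, pet], dim=1)                  # joint two-channel representation
    t = torch.randint(0, T, (x0.size(0),))
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    pred_noise = denoiser(x_t, t)
    loss_diff = F.mse_loss(pred_noise, noise)

    # Recover an estimate of x0 and average the channels as a crude "fused" image.
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    x0_hat = (x_t - (1.0 - a).sqrt() * pred_noise) / a.sqrt()
    fused = x0_hat.mean(dim=1, keepdim=True)

    # Non-saturating generator losses: each discriminator should recognize its
    # modality's characteristics in the fused result.
    loss_adv = F.softplus(-d_mri(fused)).mean() + F.softplus(-d_pet(fused)).mean()
    return loss_diff + adv_weight * loss_adv

if __name__ == "__main__":
    denoiser, d_mri, d_pet = Denoiser(), ModalityDiscriminator(), ModalityDiscriminator()
    mri, pet = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    print(training_step(denoiser, d_mri, d_pet, mri, pet).item())
```

In a full training loop the two discriminators would also be updated in alternation with the denoiser (standard GAN-style training); the sketch shows only the generator-side objective that couples the denoising loss with modality-specific adversarial feedback.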