Integrative Variational Autoencoders for Generative Modeling of an Image Outcome with Multiple Input Images

📄 Abstract

Understanding relationships across multiple imaging modalities is central to neuroimaging research. We introduce the Integrative Variational Autoencoder (InVA), the first hierarchical VAE framework for image-on-image regression in multimodal neuroimaging. Unlike standard VAEs, which are not designed for predictive integration across modalities, InVA models outcome images as functions of both shared and modality-specific features. This flexible, data-driven approach avoids rigid assumptions of classical tensor regression and outperforms conventional VAEs and nonlinear models such as BART. As a key application, InVA accurately predicts costly PET scans from structural MRI, offering an efficient and powerful tool for multimodal neuroimaging.
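
The sketch below is a minimal, hedged illustration of the idea described in the abstract: each input modality is encoded into a modality-specific latent, a shared latent is inferred from the modalities jointly, and the outcome image is decoded from all latents together. It is not the authors' InVA implementation; the PyTorch framework, all module names, layer sizes, flattened-image inputs, and the single-level latent hierarchy are assumptions made for illustration only.

```python
# Illustrative sketch only: shared + modality-specific latents feeding a decoder
# that predicts an outcome image from multiple input images. All names and sizes
# are hypothetical; this is not the paper's InVA architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianHead(nn.Module):
    """Maps features to the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, h):
        return self.mu(h), self.logvar(h)


def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)


class IntegrativeVAESketch(nn.Module):
    def __init__(self, in_dim=4096, latent_dim=32, out_dim=4096):
        super().__init__()
        # One encoder per input modality (here: two flattened input images)
        self.enc1 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        # Modality-specific latent variables
        self.head1 = GaussianHead(256, latent_dim)
        self.head2 = GaussianHead(256, latent_dim)
        # Shared latent variable inferred from both modalities jointly
        self.head_shared = GaussianHead(512, latent_dim)
        # Decoder predicts the outcome image from shared + specific latents
        self.dec = nn.Sequential(
            nn.Linear(3 * latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim)
        )

    def forward(self, x1, x2):
        h1, h2 = self.enc1(x1), self.enc2(x2)
        mu1, lv1 = self.head1(h1)
        mu2, lv2 = self.head2(h2)
        mu_s, lv_s = self.head_shared(torch.cat([h1, h2], dim=-1))
        z1, z2 = reparameterize(mu1, lv1), reparameterize(mu2, lv2)
        zs = reparameterize(mu_s, lv_s)
        y_hat = self.dec(torch.cat([zs, z1, z2], dim=-1))
        # KL divergence of each latent against a standard normal prior
        kl = sum(
            -0.5 * torch.sum(1 + lv - mu.pow(2) - lv.exp(), dim=-1).mean()
            for mu, lv in [(mu1, lv1), (mu2, lv2), (mu_s, lv_s)]
        )
        return y_hat, kl


# Toy usage: predict a flattened outcome image (e.g. a PET-like target)
# from two flattened input images (e.g. structural MRI features).
model = IntegrativeVAESketch()
x1, x2, y = torch.randn(8, 4096), torch.randn(8, 4096), torch.randn(8, 4096)
y_hat, kl = model(x1, x2)
loss = F.mse_loss(y_hat, y) + 1e-3 * kl  # reconstruction + KL regularization
loss.backward()
```

Splitting the latent space this way lets the decoder draw on information common to all inputs as well as signal unique to each modality, which is the intuition behind using such a model for image-on-image prediction rather than a single-modality VAE.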