“We propose Mind-Video, which progressively learns spatiotemporal information from continuous fMRI data through masked brain modeling, multimodal contrastive learning, spatiotemporal attention, and co-training with an augmented Stable Diffusion model that incorporates network temporal inflation.” https://mind-video.com/
“Reconstructing human vision from brain activities has been an appealing task that helps to understand our cognitive process. Even though recent research has seen great success in reconstructing static images from non-invasive brain recordings, work on recovering continuous visual experiences in the form of videos is limited. In this work, we propose Mind-Video that learns spatiotemporal information from continuous fMRI data of the cerebral cortex progressively through masked brain modeling, multimodal contrastive learning with spatiotemporal attention, and co-training with an augmented Stable Diffusion model that incorporates network temporal inflation. We show that high-quality videos of arbitrary frame rates can be reconstructed with Mind-Video using adversarial guidance. The recovered videos were evaluated with various semantic and pixel-level metrics. We achieved an average accuracy of 85% in semantic classification tasks and 0.19 in structural similarity index (SSIM), outperforming the previous state-of-the-art by 45%. We also show that our model is biologically plausible and interpretable, reflecting established physiological processes.” https://arxiv.org/pdf/2305.11675.pdf
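To unpack the least self-explanatory term in the abstract: “network temporal inflation” refers to extending a pretrained 2D (per-image) Stable Diffusion UNet with temporal layers so that it denoises a clip of frames jointly rather than frame by frame. The paper’s own code is not reproduced here; the following is a minimal PyTorch sketch of one common inflation pattern (per-frame spatial attention plus a new, zero-initialized temporal attention across frames), with illustrative class and variable names that are not taken from the paper.

```python
import torch
import torch.nn as nn

class InflatedAttentionBlock(nn.Module):
    """A 2D spatial-attention block 'inflated' with an added temporal layer.

    Input: video features shaped (batch, frames, tokens, dim). Spatial
    attention runs within each frame; temporal attention runs across frames
    at each spatial location.
    """
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Zero-init the temporal output projection so the inflated network
        # initially behaves exactly like the original 2D (image) model.
        self.temporal_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.temporal_out.weight)
        nn.init.zeros_(self.temporal_out.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, f, t, d = x.shape
        # Spatial attention: fold the frame axis into the batch axis.
        xs = x.reshape(b * f, t, d)
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = x + xs.reshape(b, f, t, d)
        # Temporal attention: fold the token axis into the batch axis.
        xt = x.permute(0, 2, 1, 3).reshape(b * t, f, d)
        xt, _ = self.temporal_attn(xt, xt, xt)
        xt = self.temporal_out(xt).reshape(b, t, f, d).permute(0, 2, 1, 3)
        return x + xt

# Example: 2 clips, 6 frames each, 64 spatial tokens, 320-dim features.
x = torch.randn(2, 6, 64, 320)
print(InflatedAttentionBlock(320)(x).shape)  # torch.Size([2, 6, 64, 320])
```

Zero-initializing the temporal output projection is the usual trick with this kind of inflation: at initialization the video model reproduces the image model exactly, and cross-frame dynamics are learned gradually during co-training.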
See also Nishimoto et al 2011, ‘Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies’, Current Biology; c2012 ‘Movie reconstruction from human brain activity’, Jack Gallant (youtube, 0:28); 2012 ‘Mary Lou Jepsen on imaging the mind’s eye’, X the moonshot factory (youtube, 11:36); and Cognitive, Systems and Computational Neuroscience | GallantLab.org