Description
Fluorescence microscopy is an indispensable tool for visualizing spatio-temporal biological mechanisms at sub-micron resolution across all areas of the life sciences. However, microscopists still face a difficult trade-off between imaging resolution, throughput, light sensitivity, and scale. In practice, balancing these factors often comes down to limiting the scope of investigation: for example, while STED offers sub-diffraction-limit resolution, its photodamage makes volumetric imaging infeasible. Previous deep learning-based methods to overcome these barriers required static modeling of the imaging system or matched image pairs from both source and target modalities. To address these challenges, we present a deep generative modeling approach that achieves high-definition volumetric reconstruction at a resolution matching the target modality, using only two-dimensional projection images. We demonstrate image resolution enhancement not only between different imaging conditions but also between different imaging modalities. We also provide biological validations showing how our method reveals previously unresolved details while effectively bypassing these trade-offs. We expect our method to be a valuable tool for enhancing downstream image analysis tasks in high-throughput imaging.
| Authors | Hyoungjun Park*, Lucia Pigazzini, Niccolò Banterle, Anna Kreshuk |
|---|---|
| Keywords | Fluorescence microscopy, Super-resolution, Deep generative modelling, Cross-modality |