Description
The 3D morphology of the cell nucleus is traditionally studied through high-resolution fluorescence imaging, which can be costly and time-intensive and can have phototoxic effects on cells. These constraints have spurred the development of computational "virtual staining" techniques that predict the fluorescence signal from transmitted-light images, offering a non-invasive and cost-effective alternative.
However, the suitability of virtual staining for 3D nuclear morphological analysis has not been evaluated. Because these models are typically trained with pixel-wise loss functions, they learn to predict the contents of whole fluorescence images, including noise, background, and imaging artifacts, which makes 3D segmentation of virtually stained nuclei challenging.
To address this, we introduce a foreground-aware component into the training of virtual staining models. Specifically, we threshold the fluorescence target images and direct the model to accurately learn foreground pixels. We also soft-threshold the predictions with a tunable sigmoid function and compute the Dice loss between the target and predicted foreground areas, effectively balancing foreground pixel-level accuracy against the locations and morphological properties of nuclei.
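The foreground-aware Dice term described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the threshold value, sigmoid steepness, and function names are assumptions for demonstration, and in practice such a term would be combined with a pixel-wise loss during training.

```python
import numpy as np

def soft_foreground(pred, threshold=0.5, steepness=10.0):
    # Soft-threshold predictions with a tunable sigmoid: intensities above
    # `threshold` approach 1 (foreground), those below approach 0 (background).
    # `steepness` controls how sharp the transition is; both values here are
    # illustrative assumptions, not settings from the paper.
    return 1.0 / (1.0 + np.exp(-steepness * (pred - threshold)))

def foreground_dice_loss(pred, target, threshold=0.5, steepness=10.0, eps=1e-7):
    # Hard-threshold the fluorescence target into a binary foreground mask,
    # soft-threshold the prediction, and return 1 - Dice overlap between them.
    target_fg = (target > threshold).astype(float)
    pred_fg = soft_foreground(pred, threshold, steepness)
    intersection = (pred_fg * target_fg).sum()
    dice = (2.0 * intersection + eps) / (pred_fg.sum() + target_fg.sum() + eps)
    return 1.0 - dice
```

Because the sigmoid is differentiable, this term can be backpropagated through, unlike a hard threshold on the predictions; a well-matched prediction drives the loss toward 0, while a prediction whose foreground is in the wrong place drives it toward 1.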
Our evaluations indicate that our model predicts cleaner foreground images, better represents the morphology of nuclei, and improves segmentation and featurization results.
Authors: Alexandr Kalinin, Paula Llanos*, Barbara Diaz-Rohrer, Theresa Maria Sommer, Giovanni Sestini, Xinhai Hou, Ivo Dinov, Xiang Wan, Brian Athey, Jonathan Sexton, Nicolas Rivron, Anne Carpenter, Beth Cimini, Shantanu Singh, Matthew O'Meara

Keywords: virtual staining, in silico staining, nuclear morphology, 3D shape analysis