I2K 2024 Conference - From Images to Knowledge

Timezone: Europe/Rome
Human Technopole (Milan, Italy)


Description

About the Conference 

The aim of the biennial I2K Conference is to bring together developers, researchers, and students working in the field of computational image analysis from all over the world. As life science studies increasingly rely on microscopy images, learning to perform quantitative analysis on these data to reach sound conclusions is a key skill. Over the years, the I2K Conference has become a flagship event for scientists who want to dive deep into the topic through interactive tutorials covering cutting-edge open-source solutions for biological image reconstruction and analysis. The 2024 edition will be held in person at Human Technopole (Milan, Italy). You can have a look at the previous editions of I2K by visiting this webpage.

The conference will consist of on-site talk sessions, parallel hands-on workshops, and two poster sessions. More information about the programme coming soon. Stay tuned!


Target Audience

The Conference is conceived for beginners, intermediate, and advanced users and/or developers in the field of computational image analysis working in the life sciences.

 

I2K 2024 Conference Sponsor

Abstract submission

Call for Abstracts Deadline: CLOSED
Poster/Oral Notification: 19 August 2024
Call for Workshops: CLOSED

 

Registration (Link to registration form)


Registration Opening: 6 August 2024
Registration Deadline: CLOSED
Payment Deadline: 15 September 2024

 

IMPORTANT

❗ We are processing payments. If you have already paid but your account still shows "awaiting payment", do not worry: it will be processed soon.

If you have not paid yet, please make sure to do so by 20 September 2024.

Registration fees

Registration includes: 

  • Participation in all lectures, workshop sessions, and poster sessions
  • Meals and coffee breaks provided during the conference

PhD Student / Postdoc: 250 €
Academic: 450 €
Industry / Commercial: 750 €


 

Call for Sponsorship

If you are a company and would like to sponsor the I2K Conference, have a look at our dedicated Call for Sponsorship for this event.

Call for Sponsorships Deadline: 31 August 2024


Venue

Fondazione Human Technopole
Palazzo Italia
Viale Rita Levi-Montalcini, 1
Area MIND
20157 Milan, Italy
Phone: +39 02 30 24 70 01


Transport

Trains

Milano Centrale train station is the central node of the national rail network and directly connects Italy to other European cities. You might consider reaching Milan by train if you are travelling from Paris, Zurich, or Frankfurt, as direct connections are available from these cities.

Airports

Milano Linate is the closest airport to the city and offers several connections to the centre of Milan. From Linate airport, you can take bus number 73 (direction Duomo M1/M3) with an ordinary urban ticket Mi1-Mi3 (see GiroMilano ATM, Azienda Trasporti Milanesi) and reach the city centre (Duomo) in roughly 50 minutes (please note that the bus is not convenient if you arrive or depart late in the evening or early in the morning). As an alternative, some buses can take you either to the city centre or to Milan's central station.

Milano Malpensa is a little further from Milan's city centre, but it offers a larger number of international connections while still being well connected to Milan's central station by bus and a dedicated train (the Malpensa Express).

Bergamo Orio al Serio also has bus services connecting the airport to Milan's central station, but it is the farthest from the city.

How to reach HT

HT is well connected with the city of Milan by:

Regional trains (S5, S6, S11), available every 15-20 minutes
Metro line M1 (red)

The regional train/metro stop is Rho-Fiera Milano: from there, electric shuttles and scooters bring you to Human Technopole in 5 minutes.

 

 

    • 09:00
      ☕ Welcome Coffee & Registration HT Auditorium

    • 09:30
      🎤 Opening Remarks HT Auditorium

    • 👨‍🔬 Selected Talks: Fiji Progress and Priorities 2024 (Curtis Rueden, University of Wisconsin-Madison) HT Auditorium

      • 1
        Fiji Progress and Priorities 2024 HT Auditorium (Human Technopole)

        The Fiji platform, a cornerstone in scientific imaging, has undergone several major developments to meet evolving community needs, focusing on cross-language integration and modernization to enhance its utility and accessibility.

        Recent key initiatives include: upgrading Fiji to OpenJDK 21 with the new Jaunch launcher, which supports both JVM and Python runtimes; developing SciJava Ops for standardizing scientific algorithms; introducing the Appose library to facilitate deep learning integration via cross-language interprocess communication; and releasing the napari-imagej plugin to expose Fiji functionality within napari.

        The OpenJDK 21 upgrade enables many new JVM-based technologies, such as improved 3D visualization through the sciview plugin. The 1.0.0 release of SciJava Ops marks a milestone in fostering FAIR and extensible algorithms. The Appose library has enabled new plugins like SAMJ, leveraging deep learning methods such as Meta's Segment Anything Model. The napari-imagej plugin allows seamless integration between ImgLib2 and NumPy data structures, further extending Fiji's utility within the Python ecosystem.

        These advancements significantly enhance Fiji's interoperability, Python integration, and overall modernization, positioning it to better serve the evolving needs of the scientific imaging community.

        Speaker: Curtis Rueden (University of Wisconsin-Madison)
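
        As a taste of the cross-language direction described above, here is a minimal PyImageJ session. PyImageJ is a related SciJava project, not necessarily the speaker's demo material, and the endpoint string is just one common choice.

        ```python
        # Start Fiji from Python and run a headless Gaussian blur.
        import imagej
        import numpy as np

        ij = imagej.init("sc.fiji:fiji")   # resolves and launches a Fiji endpoint
        print(ij.getVersion())

        img = np.random.random((64, 64))
        jimg = ij.py.to_java(img)          # NumPy -> ImgLib2, as napari-imagej does
        blurred = ij.op().filter().gauss(jimg, 2.0)
        print(ij.py.from_java(blurred).shape)
        ```
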
    • ✨ Invited Speaker: Title to be announced (Ricardo Henriques, Instituto Gulbenkian de Ciência, IGC - University College London, UCL) HT Auditorium

    • 10:30
      🚶‍♀️ Shuffle time Human Technopole

      Time to move from the Auditorium to the meeting rooms for the Workshop Session 💻

    • 💻 Workshop Session #1 Human Technopole - Meeting Rooms

      • 2
        Automating ImageJ without coding: A hands-on introduction to JIPipe Upper Egg Room - Small (Human Technopole)

        This workshop will give an introduction to JIPipe, the ImageJ-based visual programming language. Attendees will be guided step by step through the process of building batch-processing workflows from ImageJ functions and JIPipe-exclusive features. All demonstrations will be based on already published projects.

        Speaker: Ruman Gerst (Hans Knöll Institute)
      • 3
        clEsperanto: GPU-accelerated image processing library PIT.P01.026 (Human Technopole)

        The clEsperanto project equips users and developers with GPU-accelerated image-processing components. It ships a library with which you can build fast bioimage analysis scripts by leveraging modern GPUs.

        Positioned as a classical image-processing library, it is built on a framework that makes it available in many languages (Python, Java, C++) and in many software platforms (Fiji, Napari, Icy), and it is compatible with any GPU brand and model. Additionally, all implementations adopt a standardized naming convention and integrate seamlessly with graphical user interfaces for the Fiji and Napari platforms, further reducing the entry barrier. Collectively, these features enhance reusability, reproducibility, and maintainability, significantly lowering the learning curve of GPU-accelerated bioimage analysis for both users and developers.

        The workshop will offer an overview of the library, thoroughly covering its features, and will also provide an introduction to GPU hardware and its use. Participants will then be guided through the usage of the library in Python, with practical examples and real-world use cases. The workshop will conclude with a demonstration of the library's application in Fiji and Napari.

        You should attend if your job involves building image analysis pipelines. Participants should have a basic understanding of Python and have conda installed on their computer to fully engage with the workshop.

        Speaker: Stéphane Rigaud (Institut Pasteur)
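
        For a flavour of what the hands-on Python part might look like, a minimal sketch using the pyclesperanto-prototype API (the exact functions and parameters covered in the workshop are an assumption):

        ```python
        # Push an image to the GPU, filter and segment it there, pull the result back.
        import numpy as np
        import pyclesperanto_prototype as cle

        image = np.random.random((256, 256)).astype(np.float32)  # stand-in data

        gpu_image = cle.push(image)                        # CPU -> GPU
        blurred = cle.gaussian_blur(gpu_image, sigma_x=2, sigma_y=2)
        labels = cle.voronoi_otsu_labeling(blurred, spot_sigma=4, outline_sigma=1)
        result = cle.pull(labels)                          # GPU -> CPU
        print(int(result.max()), "objects found")
        ```
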
      • 4
        Intermediate napari: from exploratory workflow to widgets Upper Egg Room - Big (Human Technopole)

        The workshop will involve live coding, showing how one can use a Jupyter notebook for exploratory data analysis in napari and then make simple widgets based on that. The target audience is someone who has some familiarity with the napari application and bioimage analysis in Python and wants to take the next steps to customize or extend the napari GUI. For this, magicgui will be introduced as a way to generate widgets quickly and easily.

        Speaker: Peter Sobolewski (The Jackson Laboratory)
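
        To illustrate the notebook-to-widget step the workshop targets, a minimal magicgui sketch (the function and its parameters are illustrative, not workshop material):

        ```python
        # Turn an exploratory Otsu-thresholding step into a dockable napari widget.
        import napari
        from magicgui import magicgui
        from napari.types import ImageData, LabelsData
        from skimage.filters import threshold_otsu
        from skimage.measure import label

        @magicgui(call_button="Segment")
        def otsu_segment(image: ImageData, scale: float = 1.0) -> LabelsData:
            """Threshold with Otsu's method and return connected components."""
            return label(image > threshold_otsu(image) * scale)

        viewer = napari.Viewer()
        viewer.window.add_dock_widget(otsu_segment)  # widget appears in the GUI
        napari.run()
        ```
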
      • 5
        Leveraging the ImgLib2/BigDataViewer ecosystem for efficient batch inspection of large bioimage analysis datasets. Lower Egg Room (Human Technopole)

        Deriving scientifically sound conclusions from microscopy experiments typically requires batch analysis of large image data sets. Once the analysis has been conducted, it is critical to visually inspect the results to identify errors and to make scientific discoveries. Leveraging the ImgLib2/BigDataViewer ecosystem, we developed a platform in which large image data sets can be conveniently inspected in a grid view mode. Each grid position contains one (multi-channel volumetric time-lapse) image and the corresponding segmentation results. All data are lazy-loaded, thereby supporting very large datasets. This image grid view is linked with a table view and a scatterplot view, in which measurements of segmented objects can be inspected. Objects can be highlighted, coloured, annotated and selected in all views. All functionality is easily accessible via the MoBIE update site of Fiji.
        In this workshop, we will demonstrate how this generic framework can be used for batch inspection of data produced with Fiji, CellProfiler, ilastik, and Nextflow-managed Python scripts.

        Speaker: Christian Tischer (EMBL)
      • 6
        Object Tracking and track analysis using TrackMate and CellTracksColab PIT.P01.011 (Human Technopole)

        In life sciences, tracking objects within movies is crucial for quantifying the behavior of particles, organelles, bacteria, cells, and entire organisms. However, tracking multiple objects across numerous movies and analyzing the objects’ movements can be challenging. This workshop aims to demonstrate the effective utilization of TrackMate for object tracking across multiple movies through hands-on exercises. Additionally, participants will learn how to compile, analyze, and explore the acquired tracking data using the CellTracksColab platform. Both tools offer user-friendly interfaces tailored to life scientists without coding experience.

        Speaker: Joanna Pylvänäinen (Åbo Akademi University)
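
        As a hedged illustration of the "compile and analyze tracking data" step, here is a pandas sketch over a TrackMate-style spot table (the file name and column names follow TrackMate's typical CSV export and should be checked against your own files; CellTracksColab provides this kind of analysis in notebook form):

        ```python
        # Compute per-track mean step displacement from a TrackMate spot export.
        import numpy as np
        import pandas as pd

        spots = pd.read_csv("spots.csv")  # assumed export path

        def mean_step(track: pd.DataFrame) -> float:
            track = track.sort_values("FRAME")
            return np.hypot(track["POSITION_X"].diff(),
                            track["POSITION_Y"].diff()).mean()

        speeds = spots.groupby("TRACK_ID").apply(mean_step)
        print(speeds.describe())
        ```
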
      • 7
        Omega – Harnessing the Power of Large Language Models for Bioimage Analysis Mezzanine Room (Human Technopole)

        Explore Omega, an advanced conversational agent designed to enhance bioimage analysis through large language models. Participants will learn how to converse with Omega to perform basic image processing and analysis tasks, from basic filtering to more advanced tasks such as segmenting cell nuclei, creating custom image processing widgets, and applying denoising algorithms. Additionally, the workshop will highlight Omega’s AI-augmented code editor, which supports automatic code commenting, error correction, and the creation of reusable code snippets. This session aims to show how Omega can streamline image analysis workflows, making complex tasks more accessible and efficient for biomedical researchers.

        Speaker: Loic A. Royer (CZ Biohub)
      • 8
        QuPath for Fiji Fans HT Auditorium (Human Technopole)

        QuPath is a popular open-source platform for visualizing, annotating and analyzing complex images - including whole slide and highly multiplexed datasets. QuPath is inspired and influenced by ImageJ and Fiji, but also quite different.

        This workshop will show how experienced ImageJ and Fiji users can benefit from QuPath, and vice versa. It will show how to use ImageJ and QuPath together interactively, as well as through macros and scripts. We'll also look at some key similarities and differences between the software, to help decide when and where to use each application.

        Speaker: Peter Bankhead (University of Edinburgh)
      • 9
        Width Profile Tools PIT.P02.029 (Human Technopole)

        The question "How thick is my biological object?" is more difficult to answer than it appears on first sight. In this workshop we will use different methods to calculate width profiles of 2D and 3D objects and learn about their differences. We will use FIJI/ImageJ in the workshop.

        Speaker: Volker Baecker (French Institute of Health and Medical Research)
    • 11:45
      🚶‍♀️ Shuffle time Human Technopole

      Time to move from the meeting rooms to the Auditorium.

    • 👩‍💻 Flash Talks HT Auditorium

      • 10
        Automated mapping of 3D smFISH gene expression data to electron microscopy Human Technopole

        Joint analysis of different data modalities is a promising approach in developmental biology, as it allows studying the connection between cell-type-specific gene expression and cell phenotype. Normally, using analysis methods in a correlative manner poses many limitations; however, studying a stereotypic model organism gives a unique opportunity to jointly analyze data obtained from different individual animals. In this work we show how 3D smFISH spatial transcriptomics data can be used to link a high-resolution electron microscopy volume and a scRNAseq atlas of the 6-day post-fertilization young Platynereis worm, consisting of several thousand cells organised into distinct cell types. To enable systematic mapping of non-spatial scRNAseq clusters to the EM volume, we developed a deep learning-based, fully automated pipeline for registration of smFISH volumes to EM volumes. We demonstrate visualisation of multimodal data in MoBIE and explore signal quantification in 3D smFISH data. Deep learning-based registration enables large-scale integration of different modalities, aiding in the interpretation of both data types and validation of biological hypotheses.

        Speaker: Elena Buglakova (EMBL)
      • 11
        Machine learning based Evaluation and Enhancement (EVEN) for optical microscopy Human Technopole

        The translation of raw data into quantitative information, which is the ultimate goal of microscopy-based research studies, requires the implementation of standardized data pipelines to process and analyze the measured images. Image quality assessment (IQA) is an essential ingredient for the validation of each intermediate result, but it frequently relies on ground-truth images, visual perception, and complex quality metrics, highlighting the need for interpretable, automatic and standardized methods for image evaluation. We present a workflow for the integration of quality metrics into machine learning models to obtain automatic IQA and artifact identification. In our study, a classification model is trained with no-reference quality metrics to discern high-quality images from measurements affected by experimental artifacts, and it is utilized to predict the presence of artifacts and the image quality in unseen datasets. We present an application of our method to the Evaluation and Enhancement (EVEN) of uneven illumination corrections for multimodal microscopy. We show that our method is easily interpretable and that it can be used for the generation of quality rankings and the optimization of processed images.

        Speaker: Elena Corbetta (Friedrich-Schiller-Universität Jena)
      • 12
        Exploring gene function and morphology using JUMP Cell Painting Consortium data Human Technopole

        With the Cell Painting assay we quantify cell morphology using six dyes to stain eight cellular components: nucleus, mitochondria, endoplasmic reticulum, nucleoli, cytoplasmic RNA, actin, Golgi apparatus, and plasma membrane. After high-throughput fluorescence microscopy, image analysis algorithms extract thousands of morphological features from each single cell's image. By comparing these "profiles" we can uncover new relationships among genetic and chemical perturbations.

        The JUMP-CP Consortium (Joint Undertaking for Morphological Profiling-Cell Painting) released the first public high-throughput dataset with over 140,000 genetic and chemical perturbations.

        Speaker: Alán Fernando Muñoz González (Broad Institute of Harvard and MIT)
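
        A toy sketch of the profile comparison described above, using cosine similarity between two aggregated feature vectors (the feature count and random data are placeholders for real JUMP-CP profiles):

        ```python
        # Compare two morphological "profiles" with cosine similarity.
        import numpy as np

        rng = np.random.default_rng(0)
        profile_a = rng.standard_normal(1500)  # e.g. a gene knockout profile
        profile_b = rng.standard_normal(1500)  # e.g. a compound treatment profile

        cosine = profile_a @ profile_b / (
            np.linalg.norm(profile_a) * np.linalg.norm(profile_b))
        print(f"cosine similarity: {cosine:.3f}")
        ```
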
      • 13
        A Bayesian solution to count the number of molecules within a diffraction limited spot Human Technopole

        We propose a new method of molecular counting, or inferring the number of fluorescent emitters when only their combined intensity contributions can be observed. This problem occurs regularly in quantitative microscopy of biological complexes at the nano-scale, below the resolution limit of super-resolution microscopy, where individual objects can no longer be visually separated. Our proposed solution directly models the photo-physics of the system, as well as the blinking kinetics of the fluorescent emitters as a fully differentiable hidden Markov model. Given an ROI of a time series image containing an unknown number of emitters, our model jointly estimates the parameters of the intensity distribution, their blinking rates, as well as a posterior distribution of the total number of fluorescent emitters. We show that our model is consistently more accurate and increases the range of countable emitters by a factor of two compared to current state-of-the-art methods, which count based on autocorrelation and blinking frequency. Furthermore, we demonstrate that our model can be used to investigate the effect of blinking kinetics on counting ability, and therefore can inform experimental conditions that will maximize counting accuracy.

        Speaker: Alexander Hillsley (Howard Hughes Medical Institute)
      • 14
        pyCyto: A Pythonic Cytotoxicity Analysis Pipeline Human Technopole

        In this work, we present a comprehensive Python tool designed for automated large-scale cytotoxicity analysis, focusing on immune-target cell interactions. With the capacity to handle microscopic imaging with a 24-hour imaging duration and up to 100,000 interacting cells per frame, pyCyto offers a robust and scalable solution for high-throughput image analysis. Its architecture comprises four integrated layers: Pythonic classes, a command-line interface (CLI), YAML pipeline control, and a SLURM-distributed version. The Pythonic classes provide a flexible and object-oriented foundation that wraps extensive task-specific libraries for image preprocessing, deep learning-based segmentation, cell tracking, and GPU-accelerated contact and killing analysis. The CLI enables interaction with individual steps via the terminal, enabling scalable and repeatable batch bioimage analysis. To enable a streamlined end-to-end analysis workflow, pyCyto includes a YAML pipeline for high-level configuration that facilitates analysis configuration and the management of complex steps in a human-readable file format. Bundled with the SLURM-distributed version, the YAML pipeline control file is compatible with computation devices at different scales, offering maximal scalability from edge computing to high-performance computing (HPC) clusters and significantly enhancing data throughput and analysis efficiency. The package was originally designed to support multi-channel fluorescent microscopy images, with extensible adaptability to various live imaging modalities, including brightfield, confocal, light-sheet and spinning disk microscopes. The seamless integration of the various stages of bioimage analysis ensures detailed assessment of cellular activities, making it an invaluable tool for researchers in immunology and cell biology.

        Speaker: Jacky Ka Long Ko (University of Oxford)
      • 15
        Fractal: An open-source framework for reproducible bioimage analysis at scale using OME-Zarrs Human Technopole

        Analyzing large amounts of microscopy images in a FAIR manner is an ongoing challenge, turbocharged by the large diversity of image file formats and processing approaches. Recent community work on an OME next-generation file format offers the chance to create more shareable bioimage analysis workflows. Building on this, and to address issues related to the scalability and accessibility of bioimage analysis pipelines, the BioVisionCenter is developing Fractal, an open-source framework for processing images in the OME-Zarr format. The Fractal framework consists of a server backend and web frontend that handle modular image processing tasks. It facilitates the design and execution of reproducible workflows to convert images into OME-Zarrs and apply advanced processing operations to them at scale, without the need for expertise in programming or large image file handling. Fractal comes with pre-built tasks to perform instance segmentation with state-of-the-art machine learning tools, to apply registration, and to extract high-dimensional measurements from multiplexed, 3D image data at the TB scale. By relying on OME-Zarr-compatible viewers like napari, MoBIE and Vizarr, Fractal enables researchers to interactively visualize terabytes of image data stored on their institution's remote server, as well as the results of their image processing workflows.

        Speaker: Joel Lüthi (University of Zurich)
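
        Fractal itself is a server/web platform, but the images it produces are plain OME-Zarrs; a hedged sketch of opening one with the ome-zarr-py library (the URL is a placeholder):

        ```python
        # Read an OME-Zarr image pyramid as lazy dask arrays.
        from ome_zarr.io import parse_url
        from ome_zarr.reader import Reader

        reader = Reader(parse_url("https://example.org/image.ome.zarr"))
        nodes = list(reader())          # first node holds the multiscale image
        pyramid = nodes[0].data         # list of dask arrays, one per level
        print([level.shape for level in pyramid])
        ```
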
      • 16
        Enhancing Synaptic-Resolution Connectomics with an Open-Source AI Ecosystem Human Technopole

        While recent advancements in computer vision have greatly benefited the analysis of natural images, significant progress has also been made in volume electron microscopy (vEM). However, challenges persist in creating comprehensive frameworks that seamlessly integrate various machine learning (ML) algorithms for the automatic segmentation, detection, and classification of vEM across varied resolutions (4 to 100 nanometers) and staining protocols. We aim to bridge this gap with Catena (GitHub: https://github.com/Mohinta2892/catena/tree/dev): a unified, reproducible and cohesive ecosystem designed for large-scale vEM connectomics. Catena combines conventional deep learning methods with generative AI techniques to minimise model bias and reduce labour-intensive ground-truth requirements. This framework integrates existing state-of-the-art algorithms for: a) neuron/organelle segmentation, b) synaptic partner detection, c) microtubule tracking, d) neurotransmitter classification and e) domain adaptation models for EM-to-EM translation, while identifying and addressing limitations in these methods. As an open-source software framework, Catena equips both large and small labs with powerful and robust tools to advance scientific discoveries with their own vEM datasets at scale.

        Speaker: Samia Mohinta (University of Cambridge, UK)
      • 17
        Continuous, interpretable, and transformation-invariant Morphometric for dynamic shape quantification Human Technopole

        Biological systems undergo dynamic developmental processes involving shape growth and deformation. Understanding these shape changes is key to exploring developmental mechanisms and factors influencing morphological change. One such phenomenon is the formation of the anterior-posterior (A-P) body axis of an embryo through symmetry breaking, elongation, and polarized Brachyury gene expression. This process can be modeled using stem-cell-derived mouse gastruloids (Veenvliet et al.; Science, 2020), which may form one or several A-P axes, modeling both development and disease.

        We propose a way of quantifying and comparing continuous shape development in space and time. We emphasize the necessity of a structure-preserving metric that captures shape dynamics and accounts for observational invariances, such as rotation and translation.

        The proposed metric compares the time-dependent probability distributions of different geometric features, such as curvature and elongation, in a rotationally invariant manner using the signed distance function of the shape over time. This enables the integration of time-dependent probability distributions of gene expression, thus coupling geometric and genetic features.

        Importantly, the metric is differentiable by design, rendering it suitable for use in machine-learning models, particularly autoencoders. This allows us to impose the structure of the shape dynamics in a latent-space representation.

        We benchmark the metric's effectiveness on synthetic data of shape classification, validating its correctness. We then apply the new metric to quantifying A-P body axis development in mouse gastruloids and predict the most likely resulting shapes. This approach can potentially leverage predictive control, enabling the application of perturbations to guide development towards desired outcomes.

        Speaker: Roua Rouatbi (TU Dresden)
      • 18
        An Image Analysis Pipeline for Quantifying the Spatial Distribution of Fluorescently Labeled Cell Markers in Stroma-Rich Tumors Human Technopole

        Dense, stroma-rich tumors with high extracellular matrix (ECM) content are highly resistant to chemotherapy, the standard treatment. Determining the spatial distribution of cell markers is crucial for characterizing the mechanisms of potential targets. However, an end-to-end computational pipeline has been lacking. Therefore, we developed a robust image analysis pipeline for quantifying the spatial distribution of fluorescently labeled cell markers relative to a modeled stromal border. This pipeline stitches together common models and software: StarDist for nuclei detection and boundary inference, QuPath’s Random Forest model for cell classification, pixel classifiers for stromal region annotation, and signed distance calculation between cells and their nearest stromal border. We also extended QuPath to support sensitivity analysis to ensure result consistency across the parameter space. Additionally, it supports comparing classification results to statistically propagated expert prior knowledge across images. Notably, our pipeline revealed that the signal intensity of Ki67 and pNDRG1 in cancer cells peaks around the stromal border, supporting the hypothesis that NDRG1, a novel DNA repair protein, directly links tumor stroma to chemoresistance in pancreatic ductal adenocarcinoma. The codebase, image datasets, and results are available at https://github.com/HMS-IAC/stroma-spatial-analysis.

        Speaker: Antoine Ruzette (Harvard Medical School)
      • 19
        Unsupervised Denoising for Signal-Dependent and Row-Correlated Imaging Noise Human Technopole

        Accurate analysis of microscopy images is hindered by the presence of noise. This noise is usually signal-dependent and often additionally correlated along rows or columns of pixels. Current self- and unsupervised denoisers can address signal-dependent noise, but none can reliably remove noise that is also row- or column-correlated. Here, we present the first fully unsupervised deep learning-based denoiser capable of handling imaging noise that is row-correlated as well as signal-dependent. Our approach uses a Variational Autoencoder (VAE) with a specially designed autoregressive decoder. This decoder is capable of modeling row-correlated and signal-dependent noise but is incapable of independently modeling the underlying clean signal. The VAE therefore produces latent variables containing only clean signal information, and these are mapped back into image space using a proposed second decoder network. Our method does not require a pre-trained noise model and can be trained from scratch using unpaired noisy data. We benchmark our approach on microscopy datasets from a range of imaging modalities and sensor types, each with row- or column-correlated, signal-dependent noise, and show that it outperforms existing self- and unsupervised denoisers.

        Speaker: Benjamin Salmon (University of Birmingham)
      • 20
        How To Train Your Image Analyst: Perspectives from Upskilled Biologists Human Technopole

        As the field of biological imaging matures from pure phenotypic observation to machine-assisted quantitative analysis, the importance of multidisciplinary collaboration has never been higher. From software engineers to network architects to deep learning experts to optics/imaging specialists, the list of professionals required to generate, store, and analyze imaging data sets of exponentially increasing size and complexity is likewise growing. Unfortunately, the initial training of these experts in disparate fields (computer science, physics, biology) promotes the development of information silos that lack a common parlance to facilitate collaboration. Here, we present the perspective of a two-person light microscopy core facility associated with the US National Institutes of Health (NIH), the Twinbrook Imaging Facility. The multidisciplinary education of our team members (biology, microscopy, and image analysis), along with our unique funding structure (a fixed budget rather than a fee-for-service model), allows us to develop long-term and productive collaborations with subject matter experts while promoting the exchange of important ideas. We highlight recent and ongoing projects at the facility that demonstrate the importance of skills diversity in core facility staffing.

        Speaker: Maria Traver (NIH / NIAID)
      • 21
        TopoStats: taking AFM analysis to new heights Human Technopole

        High-resolution atomic force microscopy (AFM) provides unparalleled visualisation of molecular structures and interactions in liquid, achieving sub-molecular resolution without the need for labelling or averaging. This capability enables detailed imaging of dynamic and flexible molecules like DNA and proteins, revealing their own conformational changes as well as interactions with one another. Despite its powerful potential, AFM’s application in the biosciences is hindered by challenges in image analysis, including inherent imaging artefacts and complexities in data extraction. To address this, we developed TopoStats, an open-source Python package for high-throughput processing and analysis of raw AFM images. TopoStats provides an automated pipeline for file loading, image filtering, cleaning, segmentation, and feature extraction, producing clean, flattened images and detailed statistical information on single molecules. We showcase TopoStats’ capabilities by demonstrating its use for automated quantification across a range of samples - from DNA and DNA-protein interactions to larger-scale materials science applications. Our aim is for TopoStats to significantly enhance the quality of AFM data analysis, support the development of robust and open analytical tools, and contribute to the advancement of AFM research worldwide.

        Speaker: Laura Wiggins (University of Sheffield)
    • 12:30
      🚶‍♀️ Walking time to Triulza Academy

      Time to walk to Triulza Academy (venue for lunch and poster session)

    • 13:00
      🍝 Lunch Triulza Academy

    • 👩‍🏫 Poster Session #1 Triulza Academy

      • 22
        A Cloud-Native Virtual Bioimage Analysis Research Desktop (BARD) for Deployment of Containerised Bioimage Tools on Kubernetes

        Bioimage analysis research workflows often require the use of various software tools and demand significant computational power and high interactivity. These workflows can yield inconsistent results due to dependencies on particular software and operating systems, an issue that becomes especially evident as computationally intensive methods like deep learning become more common.
        To tackle this challenge, we have developed BARD, a Kubernetes-based virtual bioimage analysis research desktop service, leveraging the abcdesktop.io project (https://www.abcdesktop.io/). It enables bioimage analysts to quickly access a personal cloud-based desktop capable of handling complex computations and image processing tasks, eliminating local hardware limitations. Each software package in the BARD desktop is containerised along with its specific version and required dependencies. This ensures a consistent environment across different computing platforms, enabling researchers to reproduce identical experimental results from any computer, anywhere. By leveraging Kubernetes, BARD offers a resource-efficient alternative to virtual machines, reducing the consumption of computational resources such as CPU and memory usage, as well as streamlining software deployment.

        Speaker: Arif Khan (EMBL Heidelberg)
      • 23
        A Comprehensive Image Analysis Pipeline for Investigating Autism Spectrum Disorder-like Behaviours in Drosophila melanogaster

        Understanding the underlying mechanisms of Autism Spectrum Disorder (ASD) requires detailed analysis of behaviour in model organisms. Here, we present a complete image analysis pipeline designed to analyze ASD-like behaviours in Drosophila, providing insights into the associated mechanisms.

        Our pipeline begins with the acquisition of high-resolution, large-format videos capturing Drosophila behaviour. We employ advanced tracking algorithms to accurately trace the movement of individual flies, followed by rigorous post-processing steps to clean and refine the tracking data. From this, we extract precise trajectories and positional information for each fly.

        Subsequently, we calculate key behavioural metrics such as locomotion patterns and social distances. Further analysis explores specific behaviours pertinent to ASD research, such as repetitive behaviour (grooming) and comorbidities, including aggression. By quantifying these behavioural outputs, our pipeline offers robust tools for recording and analyzing in depth a spectrum of different ASD-like behaviours.

        This comprehensive approach not only facilitates high-throughput analysis but also enhances the reliability and depth of behavioural studies in Drosophila, enabling researchers to draw more nuanced conclusions about the underlying mechanisms in health and disease.

        Speaker: Arianna Ravera (University of Lausanne)
      • 24
        A contour-based alignment tool enabling to merge data from live acquisition & immunofluorescence Triulza Academy

        Immunofluorescence is a powerful technique for the detection of multiple specific markers in cells; however, the fixation process prevents the study of the cells' motility. We therefore propose a tool that helps to find tracked cells again after immunofluorescence (IF). A slice of tissue is imaged for 48 h at different positions with a 20x objective, creating movies on which cells are manually tracked. For each position, an additional image is captured using transmitted light microscopy and a 4x objective, for a larger field of view. Based on the stage displacements, a reconstruction is made from these images and the outlines of the whole biological structures are drawn by the user. Just after the movie, slices are immunostained to determine the cells' fate and imaged under the microscope. An automatic segmentation of the structures is performed on these images. The tool, developed in Fiji/MATLAB, aligns the contours obtained on two paired images and automatically determines the (similarity) transformation between the images. The 20x acquisition areas are displayed on the IF image, aiding in identifying the fate of each tracked cell.

        Speaker: Anne-Sophie MACE (CNRS)
      • 25
        Automated mapping of 3D smFISH gene expression data to electron microscopy HT & Triulza Academy

        HT & Triulza Academy

        Flash Talk - HT Auditorium Poster Session - Triulza Academy

        Abstract: see Flash Talk 10 above (the same work, presented here as a poster).

        Speaker: Elena Buglakova (EMBL)
      • 26
        Automated Tracking and Analysis of Plasma Membrane Dynamics in TIRF-SIM

        Plasma membrane processes like clathrin-mediated endocytosis and constitutive exocytosis are often studied using diffraction-limited imaging methods. Structured Illumination Microscopy (SIM) with Total Internal Reflection Fluorescence (TIRF) offers a super-resolution technique for these studies in living cells. However, no automated, publicly available tool has existed for processing TIRF-SIM time-lapse images. We present an open-source image processing pipeline for tracking and analysing plasma membrane processes in high frame-rate (10 FPS), high-resolution TIRF-SIM movies. It significantly improves tracking accuracy, nearing human-level performance, with over 10% improvement in MOTA and HOTA metrics compared to previous methods for diffraction-limited data, as validated on a new publicly available expert-annotated benchmark. A key feature is its ability to leverage the high resolution to evaluate the productivity of tracked events, accurately identifying vesicles that successfully transport cargo across the plasma membrane. Our tool enhances accuracy and reduces manual intervention, facilitating high-throughput analysis of dynamic cellular events. This advancement is crucial for deepening our understanding of plasma membrane processes and their regulation in maintaining cell homeostasis.

        Speaker: Adam Harmanec (Masaryk University, Brno & Charles University, Prague)
      • 27
        Automating the Neuronal Differentiation of Ntera-2 Cells

        Stem cells are unique in their self-renewal and differentiation capacity into many cell types. They are an attractive platform for modelling human tissue in vitro, but producing homogeneous cultures is challenging due to limited control of the microenvironment. Automating differentiation protocols can reduce human error and increase efficiency and yield.

        Here we present an open-source robotic platform for cell culture maintenance and differentiation that boosts the reproducibility of protocols and promotes the accessibility of automated experiments. The robot allows for multi-well solution application and includes a microscope with a fluorescent blue light filter for simultaneous imaging.

        We aim to automate the differentiation of Ntera-2 cells (NT2Cs), a pluripotent embryonic carcinoma cell line, into functional neurons. Differentiation of NT2Cs can be induced by retinoic acid (RA) application; however, cell populations exhibit significant phenotype heterogeneity. Optimised differentiation will be achieved by automated testing of thousands of conditions with variations of exposure time, concentration, and application dynamics of RA. Cell differentiation is monitored by cell morphology and automatic immunostaining for surface antigens and intracellular proteins for pluripotency (TRA-1-60) and neuronal differentiation (A2B5).

        Speaker: Heather McCourty (University of Sheffield)
      • 28
        Beyond spot detection with spotMAX

        Sub-cellular structures that are visualized as spots with fluorescence microscopy are ubiquitous in microscopy data. However, automated and accurate detection of such spots is often a challenging task. Additionally, many microscopy datasets contain multiple channels where, in addition to the spots, a second structure is visualized in the cells, such as the nucleus in single-molecule FISH experiments. However, this information is often not exploited in the context of spot detection.
        Here we present spotMAX, a fully automated image analysis pipeline that takes advantage of multi-dimensional information such as time-lapse and multi-colour imaging. Fully integrated into our previously published software Cell-ACDC, spotMAX combines state-of-the-art segmentation, tracking, and cell-pedigree analysis of single cells with detection and quantification of fluorescent globular structures over time. SpotMAX can also automatically segment a reference structure, enabling further filtering and quantification of valid spots.
        We extensively benchmarked spotMAX and show that it consistently outperforms current SOTA models. Beyond spot detection, spotMAX provides a feature-rich space that can be used for downstream machine-learning tasks.
        To make spotMAX as generalist as possible, we applied it to a variety of settings with different imaging modalities, different microscopes, and multiple model organisms. For example, we used it to quantify the number of synaptonemal complexes in C. elegans, and we analysed the dynamics of mitochondria homeostasis in yeast during nutrient change. Finally, we performed mRNA quantification on single-molecule FISH datasets and telomere/centromere analysis in stem cells with DNA FISH.

        Speaker: Francesco Padovani (Helmholtz Munich)
      • 29
        Bilayers - an easy way to make your favorite deep learning tool more user-friendly

        The past 10 years have seen an explosion of algorithms, especially deep learning algorithms, which can make bioimage analysis for specific problems vastly more straightforward. Unfortunately, the median computational comfort of biologists is not always high enough to enable them to install deep learning tools successfully, and the vast majority are not comfortable working at the terminal or with command-line tools. While user-friendly tools such as DeepImageJ, ZeroCostDL4Mic, BiaPy and CellProfiler can assist with making these algorithms more accessible, there is, to our knowledge, no standard for describing the input and output structures of a given image analysis algorithm, and many algorithms require extremely complex post-processing beyond the model stored in places like the BioImage Model Zoo. Algorithm developers may not have time to design a complex user interface to make their tool user-friendly, meaning many helpful tools never reach the end users who could benefit from them most. We have therefore launched Bilayers, a specification designed to characterize software containers in terms of expected inputs, expected outputs, and tunable parameters. Any container described in a Bilayers configuration file can then use the Bilayers CI/CD platform to programmatically generate versions of the container with user-friendly, no-terminal-required interfaces, starting with Gradio and Jupyter, with plans to expand to other user interfaces, including CellProfiler plugins. Since end users are using a container which can also be deployed to an HPC or to the cloud, there is no danger of version drift between the prototype used by a biologist and the workflow run at scale, and we believe such a specification will also aid sysadmins in creating and validating workflows for end users in tools such as Nextflow or Snakemake. We will demonstrate existing use cases, as well as our configuration specification, with hopes for community feedback and contributions.

        Speaker: Beth Cimini (Broad Institute)
      • 30
        BioImage.IO Chatbot: A Community-Driven AI Assistant for Integrative Computational Bioimaging

        The landscape of computational bioimaging is rapidly evolving, featuring a vast array of tools, workflows, and documentation. This growth underscores the need for more accessible analytical tools to deliver comprehensive capabilities to the scientific community. The BioImage.IO Chatbot, developed in response to this need, leverages Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) techniques, among others, to offer an interactive and intuitive platform for engaging with advanced bioimage analysis resources. This innovative tool not only retrieves information and executes pre-existing models but also enables users to create custom extensions tailored to their specific research needs, such as automating microscopy operations or crafting specialized analysis workflows. A primary objective of the BioImage.IO Chatbot is to lower the barriers to entry for researchers, fostering a more collaborative and enriched scientific community. Designed to be community-driven, the chatbot continually evolves, integrating various tools, workflows, and knowledge contributed by users. We actively encourage community contributions to expand the chatbot's capabilities, ensuring it remains at the cutting edge of bioimage analysis technology. All in all, the BioImage.IO Chatbot is not just changing the landscape of bioimage analysis but revolutionizing it.

        Speaker: Caterina Fuster Barceló (Universidad Carlos III de Madrid)
      • 31
        BrainGlobe: Accessible software for neuroanatomy of emerging model organisms

        BrainGlobe is a community-driven initiative built around a set of interoperable, user-friendly, open-source software tools for neuroanatomy. It provides atlases for human, mouse, rat and zebrafish brains through a consistent interface. Each atlas includes a “standard” reference image for sample registration and corresponding brain region segmentations. This allows data from several samples to be analysed in the same coordinate space, within and across studies.

        Neuroscience increasingly recognises the need to study brain function in more diverse species. BrainGlobe is species- and imaging-modality agnostic and is therefore a natural fit to enable the necessary data analysis. However, this requires novel atlases.

        Working with expert neuro-anatomists studying emerging model organisms, we have created a novel digital brain atlas for the Eurasian blackcap Sylvia atricapilla. We have additionally incorporated existing atlases for the Mexican blind cavefish Astyanax mexicanus and axolotl Ambystoma mexicanum into BrainGlobe.

        This work shows how BrainGlobe facilitates collaboration between anatomists, microscopists and software developers with large benefits to cutting-edge research. Going forward, we hope atlas-based image analysis is expanded to even more species, and other organs.

        Speaker: Alessandro Felder (University College London)
      • 32
        Cell lineage reconstruction and comparison from light-sheet microscopy image datasets

        A longstanding quest in biology is to understand how the fertilized egg gives rise to multiple cell types organized in tissues and organs of specialized shape, size and function to suit the lifestyle of multicellular organisms. The integration of molecular, microscopy and image analysis techniques increasingly enables scientists to quantify developmental processes in live developing embryos with single-cell resolution. Here, crustacean Parhyale hawaiensis embryos with fluorescently labeled nuclei were imaged for 3.5 days using a SiMView light-sheet fluorescence microscope. A combination of MATLAB and Fiji/ImageJ pipelines was used to produce spatiotemporally registered and fused 4D (3D+time) volumes of each developing embryo. Complete cell lineages, represented as tree graphs, were reconstructed with the Mastodon plugin of Fiji/ImageJ. A new Napari plugin has been developed for pairwise comparisons of cell lineages through the efficient calculation of the (dis)similarity between tree graphs, referred to as the tree edit distance. Visualization of cell lineage comparisons with heat maps and clones on digital embryos paves the way for systems-wide studies of the levels of stereotypy and variability of (sub)lineages within embryos, across embryos and conditions.

        Speaker: Ioannis Liaskas (Institute of Molecular Biology and Biotechnology of the Foundation for Research and Technology Hellas (IMBB-FORTH))
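
        A generic illustration of the tree edit distance mentioned above, using the zss library (not necessarily the plugin's own implementation; the lineage labels are invented):

        ```python
        # Tree edit distance between two toy lineage trees.
        from zss import Node, simple_distance

        lineage_a = Node("AB", [Node("ABa"), Node("ABp")])
        lineage_b = Node("AB", [Node("ABa", [Node("ABaa"), Node("ABap")]),
                                Node("ABp")])

        # Number of insertions/deletions/relabelings needed to turn one
        # tree into the other; identical trees give 0.
        print(simple_distance(lineage_a, lineage_b))
        ```
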
      • 33
        Computational modeling of nanoscale synapse morphology and organization using correlative super-resolution microscopy

        Recent advances in super-resolution fluorescence microscopy have enabled the study of nanoscale sub-cellular structures. However, the two main techniques used, Single Molecule Localization Microscopy (SMLM) and STimulated Emission Depletion (STED), are fundamentally different and difficult to reconcile. We recently published a protocol that enables both techniques to be performed sequentially on the same neuron, allowing the superimposition of post-synaptic protein distributions onto dendritic protrusion shapes. Nevertheless, automatic segmentation and correlation of these data pose significant challenges.

        We propose to use this protocol to create digital models of representative synapse morphologies, in which density maps highlight the preferred locations of post-synaptic proteins. First, we use a pipeline employing generative adversarial networks and style transfer to attenuate intensity variations in the STED image, facilitating the automatic segmentation of labeled neurons. Then, we segment, classify and analyze spine morphologies using a Delaunay triangulation computed from the segmented neurons. Finally, we non-linearly project the protein localizations of a given SMLM acquisition onto their corresponding spines.

        This highly versatile pipeline allows us to aggregate dozens of spine morphologies, creating class-wise digital models of synapses featuring protein density maps.

        Speaker: Antoine J.-F. Salomon (IINS)
      • 34
        conv-paint: an easy to train interactive pixel classifier for napari

        We will learn how to use conv-paint, a fast and interactive pixel classification tool for multidimensional images. A graphical user interface is integrated into the image viewer napari, but we will also learn how to script the software from the Python ecosystem. As a napari plugin, conv-paint can easily be integrated with other plugins into complex image processing pipelines, even by users unfamiliar with coding. Advanced developers will get pointers on how to integrate their own feature extractors into the conv-paint architecture.

        Speaker: Lucien Hinderling (Institute of Cell Biology - Universität Bern)
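
        A minimal sketch of launching the plugin from a script (the plugin name "napari-convpaint" is an assumption; check the name of the installed package):

        ```python
        # Open napari with a sample image and dock the conv-paint widget.
        import napari
        from skimage import data

        viewer = napari.Viewer()
        viewer.add_image(data.camera(), name="sample")
        viewer.window.add_plugin_dock_widget("napari-convpaint")  # assumed name
        napari.run()
        ```
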
      • 35
        Deep learning enables cross-modality super-resolution for volumetric reconstruction

        Fluorescence microscopy is an indispensable tool to visualize spatio-temporal biological mechanisms at sub-micron resolution across all areas of life sciences. However, microscopists still face a difficult trade-off between imaging resolution, throughput, light sensitivity, and scale. In practice, balancing these factors often comes down to limiting the scope of investigation: for example, while STED offers sub-diffraction limit resolution, its photodamage makes volumetric imaging infeasible. Previous deep learning-based methods to overcome these barriers required static modeling of the imaging system or matched pairs from both source and target modalities. To address these challenges, we present a deep generative modeling approach that achieves high-definition volumetric reconstruction with resolution matching the target modality, using only two-dimensional projection images. We demonstrate image resolution enhancement between not only different imaging conditions but also different imaging modalities. We also provide biological validations showing how our method reveals previously unresolved details while effectively bypassing the trade-offs. We expect our method to be a valuable tool for enhancing various downstream tasks in image analysis for high-throughput imaging.

        Speaker: Hyoungjun Park (EMBL & ETH Zürich)
      • 36
        Deep learning-based classification for label-free microscopy

        Microscopic imaging enables us to investigate cells and how they change, but since subtle changes are hard to see by eye, we need tools such as deep learning to help us see.

        Here, we are combining label-free microscopy with deep learning to predict stem cell differentiation outcomes. This is highly relevant, as the differentiation process is labor intensive, costly and subject to high variability.

        We have differentiated induced pluripotent stem cells towards beta-cells for the first four days of a standard protocol. On day four, we measured Cxcr4 expression as a marker for successful entrance of the Definitive Endoderm stage. We acquired phase-contrast images every hour and combined these images with Cxcr4 expression levels to re-train different pre-trained deep neural networks. The retrained models were then used to classify unseen images according to their Cxcr4 expression.

        With our retrained ResNet18, we can classify experiments into high and low amounts of Definitive Endoderm cells already before day four, with an accuracy greater than 0.9. This enables selection of the most successful differentiations early on, saving time and money in the lab.

        Speaker: Franziska Schöb (University of Oslo)
      • 37
        Detecting immunological synapses in patient samples through analysis of Imaging Flow Cytometry data

        Immunological synapses, important in cancer immunotherapy, are transient structures formed at the interface between lymphocytes and antigen-presenting cells. To assess the efficacy of immune therapy responses, we present a bioimage analysis workflow for quantifying cell interactions and identifying immune synapses in cancer patient samples using Imaging Flow Cytometry (IFC).
        IFC acquires brightfield and fluorescence images of cells as they pass through the analyzer/sorter, and provides an opportunity to capture cellular interactions in large samples. Overcoming the limitations of the existing proprietary analysis software, we preprocess and merge exported TIFF files into multichannel image stacks and employ Fiji scripting for in-depth image analysis.
        By combining several open-source components, this integrated approach allows for automated cell segmentation based on fluorescence markers and brightfield images, classification of various immune and tumor cell types, detection and quantification of cell-cell interfaces, and assessment of marker enrichment at immune synapses.

        Speaker: Bram van den Broek (The Netherlands Cancer Institute)
      • 38
        Diffusion Models in Microscopy: Bleedthrough Removal, Image Splitting, and Dehazing

        Image restoration methods often suffer from the "Regression to the Mean" (R2M) effect, leading to blurry results due to their inability to restore high-frequency details. This is problematic in microscopy, especially in widefield microscopy, where the loss of such fine details can hinder subsequent analysis and downstream processing. In this work, we propose to tackle this challenge through a data-driven approach, combining a Hierarchical Variational Autoencoder (HVAE) with iterative restoration using denoising diffusion models. Diffusion models, with their forward process and iterative restoration, have a higher likelihood of recovering a plausible mode for the degraded image. However, as we do not have a controllable forward operator in this case, we first model the step-wise degradation process using an HVAE and then employ a diffusion model to iteratively recover the image. Additionally, being generative models, our techniques allow for sampling multiple plausible restored solutions, in contrast to traditional methods that predict only the MMSE estimate. In this work, we explore the challenge of quantifying the posterior distribution of our approach and present preliminary results of our method.

        Speaker: Anirban Ray (Human Technopole)
      • 39
        Discovering explanatory factors of spatial localization with point process models

        Advances in imaging and computation have enabled the detailed recording and quantification of spatial biological processes. However, there remains a gap in the statistical analysis and modeling of spatial information. A key challenge is to discover co-observed processes that statistically explain spatial localization patterns. We address this problem in the framework of spatial point processes, a class of statistical models over discrete objects in space, to quantify spatial localization patterns. In data sets with a high number of covariates, however, traditional combinatorial approaches become intractable or point process models lose interpretability. We propose a sparsity regularization for spatial point processes and show in several biological settings how this enables identifying the underlying explanatory factors. Moreover, we show how robust model selection can be used to make reliable predictions under noise, enabling inference directly from microscopy images. Finally, we discuss the interpretability of the resulting models and give an outlook on downstream inference tasks, such as clustering or the estimation of dynamic quantities.
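
        For intuition, sparsity-regularized point process models of this kind are commonly written in the log-linear form below (textbook notation, not necessarily the authors' exact formulation): the intensity is driven by covariates z(x), and an L1 penalty with weight γ shrinks uninformative covariate weights to zero, which is what identifies the explanatory factors:

            \[
              \lambda(x) \;=\; \exp\!\bigl(\beta_0 + \beta^{\top} z(x)\bigr),
              \qquad
              \hat{\beta} \;=\; \arg\max_{\beta}\;
              \sum_{i} \log \lambda(x_i)
              \;-\; \int_{W} \lambda(x)\,dx
              \;-\; \gamma \lVert \beta \rVert_1
            \]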

        Speaker: Dominik Sturm (TU Dresden / MPI-CBG)
      • 40
        Enhancing Image Resolution Through Averaged Autocorrelation Inversion

        Cross-correlation is a versatile mathematical technique for analyzing image data. It provides insights into spatial distributions, temporal dynamics, and geometric colocalization by quantifying relationships between image components. With this project, we explore a specific application of cross-correlation, namely the autocorrelation, to boost image resolution in post-processing. We demonstrate how this simple mathematical tool can be used to combine multiple images of the same object, captured under different imaging conditions, into a single, higher-resolution representation. This approach offers a promising avenue for image reconstruction, particularly in multi-view light-sheet and image scanning microscopy. Building on this foundational research, we identify areas for enhancement and propose strategies to expand the method's capabilities, aiming at generalizing the approach to include more imaging methodologies.
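
        The core quantity can be sketched in a few lines (our illustration using the Wiener-Khinchin relation, not the authors' pipeline; inverting back to an image additionally requires a phase-retrieval style step that is omitted here):

            import numpy as np

            def averaged_autocorrelation(views):
                """Average the autocorrelations of several images of the same
                object: the autocorrelation equals the inverse FFT of the
                power spectrum, so we average |FFT|^2 across views."""
                power = np.mean(
                    [np.abs(np.fft.fftn(v)) ** 2 for v in views], axis=0
                )
                return np.fft.ifftn(power).real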

        Speaker: Daniele Ancora (EMBL)
      • 41
        Example of an FBIAS project: multiple fish tracking

        FBIAS is a network of image analysts providing remote support to users lacking access to such expertise, usually beginning with a 1-hour open-desk session followed by project-based assistance. A feasibility study comparing different multiple-object tracking methods applied to videos of rainbow trout was conducted by FBIAS, within Institut Curie, in collaboration with IERP, an open INRAE infrastructure for in vivo investigation of fish infectious diseases. The aims were to 1) evaluate each method's performance using Single Particle Tracking metrics and 2) automate the detection of endpoints to quantitatively define the progression/aggravation of pathogenesis. This requires long-term tracking, which is difficult due to the experimental setup and fish behavior (presence of blind spots, mirror effects, crossings). Another challenge is posed by the marker-free protocol. Both generic multiple-object tracking approaches (BoT-SORT, Deep OC-SORT, ByteTrack, OC-SORT) and methods designed specifically for tracking multiple animals in laboratory settings (DeepLabCut, idtracker.ai) were assessed. DeepLabCut demonstrated superior performance on low-quality, low-resolution videos, while idtracker.ai performed well on videos of higher quality and time resolution. Challenges remain in applying these solutions to behavioral studies.

        Speaker: Arthur Meslin (FBIAS, Institut Curie)
      • 42
        Exploring gene function and morphology using JUMP Cell Painting Consortium data

        With the Cell Painting assay we quantify cell morphology using six dyes to stain eight cellular components: nucleus, mitochondria, endoplasmic reticulum, nucleoli, cytoplasmic RNA, actin, Golgi apparatus, and plasma membrane. After high-throughput fluorescence microscopy, image analysis algorithms extract thousands of morphological features from each single cell's image. By comparing these "profiles" we can uncover new relationships among genetic and chemical perturbations.

        The JUMP-CP Consortium (Joint Undertaking for Morphological Profiling-Cell Painting) released the first public high-throughput dataset with over 140,000 genetic and chemical perturbations.

        Here, we describe how this data can now be used to answer many biological questions. Researchers can pick any gene of interest and find (a) what morphological phenotypes are induced when it is knocked out or overexpressed, (b) what genes produce a similar morphological profile when altered, uncovering functional relationships, and (c) what chemical compounds produce similar morphological profiles, hinting at their target pathway. Novel software tools developed for this dataset empower both biologists and data analysts to make discoveries of their own, and we show that mining this dataset can yield novel insights into current and relevant biological questions.
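
        The profile comparison at the heart of points (b) and (c) is, at its core, a nearest-neighbour query in feature space. A minimal sketch (ours; `profiles` is a hypothetical DataFrame of aggregated per-perturbation profiles, not part of the released tooling):

            import numpy as np
            import pandas as pd

            def most_similar(profiles: pd.DataFrame, query: str, top: int = 10):
                """Rank perturbations by cosine similarity of their aggregated
                morphological profiles (rows = perturbations, cols = features)."""
                X = profiles.to_numpy()
                X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit vectors
                sims = X @ X[profiles.index.get_loc(query)]       # cosine similarity
                return pd.Series(sims, index=profiles.index).drop(query).nlargest(top)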

        Speaker: Alán Fernando Muñoz González (Broad Institute of Harvard and MIT)
      • 43
        FAIR Image Analysis Workflows

        In our increasingly data-centric world, converting images into knowledge is a challenge across various scientific fields. Acknowledging the immense possibilities of this drive to visualise and understand data, the FAIR Image Analysis Workflow working group is developing FAIR image analysis workflows within the Galaxy platform. Although the Galaxy platform is well established in scientific fields like genomics and proteomics, its potential in the imaging domain remains to be explored by both experts and citizen scientists. Initiated in the summer of 2023, our objective has been to integrate popular image analysis tools into Galaxy and annotate them with EDAM Bioimaging ontology terms to enable the creation of reusable FAIR image analysis workflows. Our mission is to promote FAIR, reproducible image analysis workflows, and we invite all interested parties to join our monthly discussions and participate in our decision-making process. At the 2024 I2K conference, we are eager to share our progress with the broader imaging community and to showcase our vision, accomplishments, and future plans in this journey of transforming images into knowledge.

        Speaker: Beatriz Serrano-Solano (Euro-BioImaging)
      • 44
        Fiji Progress and Priorities 2024

        The Fiji platform, a cornerstone in scientific imaging, has undergone several major developments to meet evolving community needs, focusing on cross-language integration and modernization to enhance its utility and accessibility.

        Recent key initiatives include: upgrading Fiji to OpenJDK 21 with the new Jaunch launcher, which supports both JVM and Python runtimes; developing SciJava Ops for standardizing scientific algorithms; introducing the Appose library to facilitate deep learning integration via cross-language interprocess communication; and releasing the napari-imagej plugin to expose Fiji functionality within napari.

        The OpenJDK 21 upgrade enables many new JVM-based technologies, such as improved 3D visualization through the sciview plugin. The 1.0.0 release of SciJava Ops marks a milestone in fostering FAIR and extensible algorithms. The Appose library has enabled new plugins like SAMJ, leveraging deep learning methods such as Meta's Segment Anything Model. The napari-imagej plugin allows seamless integration between ImgLib2 and NumPy data structures, further extending Fiji's utility within the Python ecosystem.

        These advancements significantly enhance Fiji's interoperability, Python integration, and overall modernization, positioning it to better serve the evolving needs of the scientific imaging community.

        Speaker: Curtis Rueden (University of Wisconsin-Madison)
      • 45
        From pixels to insights: Learning morphological descriptors from cellular ultrastructure

        Quantitatively characterizing cellular morphology is a pivotal step in comprehending cellular structure and, by extension, cellular function. Electron microscopy (EM) and expansion microscopy (ExM) are complementary techniques that grant access to the intricate world of cellular ultrastructure. Nevertheless, the absence of automated, universally applicable frameworks for extracting morphological features poses a significant bottleneck in the analysis of the copious amounts of data generated by these methods. In this project, I propose to establish an unsupervised representation learning approach, where a neural network is trained to automatically extract morphological descriptors directly from 3D image data. Focusing solely on ultrastructure eliminates the need for an additional time-consuming and potentially infeasible segmentation step, avoiding biases towards shape-based characteristics. Applying this approach to an ExM dataset of pan-labelled images from environmental samples aims to create a comprehensive atlas of cellular ultrastructure in eukaryotic microplankton, enabling precise characterization of the ultrastructural makeup of these samples and the identification of species directly from images. This innovative approach holds great promise in advancing our understanding of cellular morphology and streamlining its analysis across diverse imaging modalities.

        Speaker: Jonas Hellgoth (EMBL Heidelberg)
      • 46
        fsdb - an open-source bioimage analysis and (meta) data management framework

        For our high-content screening service at TEFOR Paris-Saclay, we developed a data management and bioimage analysis framework, which we share here with the bioimage analysis community.
        The filesystem-based image database (fsdb) is designed to help labs and core facilities manage, analyze, and archive big data while remaining invisible to the users of the data within.
        Using Linux's authentication and permission system to regulate access and SMB to serve the data, the fsdb is built to run in heterogeneous environments. On connected computers, the fsdb appears as 'just another hard drive' containing data with structured but human-readable filenames. This enables users to employ any kind of hardware and software to query, study, and analyze the data.
        Working with thousands of big datasets, we conceived the fsdb to perform mundane tasks, such as transferring data, consistently. As accessing big data across networks is tiresome, we taught the fsdb to generate 'secondary data': lightweight representations that are quickly accessible across slow networks.
        After two complete rewrites and a couple of functionality extensions, we now present fsdb23 at https://gitlab.com/arnimjenett/fsdb23 and welcome contributions and feedback.

        Speaker: Arnim Jenett (CNRS, TEFOR Paris-Saclay)
      • 47
        High-throughput in-vivo single-molecule imaging of DNA repair in E. coli

        DNA double-strand breaks (DSBs) can seriously threaten the integrity of the affected cell. To maintain its genetic information, the cell needs to repair DSBs as faithfully as possible.
        In Escherichia coli, the RecBCD complex recognises and processes DSBs to generate a single-stranded DNA coated with the RecA protein, which is used to search for a homologous repair template. However, little is known about the in-vivo dynamics of this process: how long does RecBCD stay bound to DNA? What triggers its dissociation? How does the cell react to different levels of DNA damage?

        To address these questions, we set up a high-throughput, automated workflow for multi-colour imaging of RecBCD, RecA, and the bacterial nucleoid in live E. coli cells exposed to different concentrations of ciprofloxacin, a DSB-inducing antibiotic of clinical significance. The large quantity of data collected allowed us to gain insight into the rate of formation of DSBs by ciprofloxacin, as well as the in-vivo dynamics of RecBCD binding to DNA. In-depth understanding of the different steps of the DNA repair process in bacteria could help devise strategies against antibiotic resistance.

        Speaker: Daniel Thedie (University of Edinburgh)
      • 48
        High-throughput microscopy for deciphering the genetics of cell cycle diversity in wild yeasts

        Cell cycle regulation and cell size control in yeast are governed by complex protein interactions and can vary greatly depending on genetic background. A powerful tool to study cell cycle progression is high-throughput live-cell imaging, where dynamic cellular processes can be observed for single cells over time. However, studies based on this technique are limited in scope due to the lack of fully automated analysis pipelines and the need for time-consuming manual annotation work. Hence, many cell-cycle-related studies observe only bulk-level phenotypes and thereby lose valuable information.
        We developed a deep learning-based pipeline for yeast cell segmentation, tracking, and classification of cell cycle stages, which makes use of time-contextual information and outperforms comparable state-of-the-art methods that work on 2D input data. Using this pipeline, we lifted the bottleneck posed by data analysis and generated a unique single-cell phenotypic dataset of more than 23 thousand complete cell cycles, described by features such as phase durations and mother-bud volume ratios at division. This was achieved fully automatically, allowing us to perform further experiments aiming at a dataset of more than 100 thousand cell cycles from a collection of genetically diverse S. cerevisiae strains.
        By analysing phenotypes of cell-cycle progression in the context of genomic features and environmental variables, we hope to 1) gain general insights into patterns of cell-cycle regulation across S. cerevisiae, for instance size sensing at the G1-S transition, 2) analyse domestication traces and their impact on cell-cycle-related phenotypes, and 3) discover candidate genes that might play still-unknown parts in the regulation of cell growth and division.

        Speaker: Benedikt Mairhörmann (Helmholtz Munich)
      • 49
        Investigating the structural complexities of DNA using high resolution atomic force microscopy

        Recent developments in the investigation of DNA topology have led to a wealth of knowledge regarding genome stability and cellular functions. Atomic Force Microscopy (AFM) has had great success in imaging high-resolution DNA topology at the nanoscale, revealing insights into fundamental biological processes that were not previously accessible with traditional structural biology techniques. However, traditional AFM analysis methods are significantly hindered by the expertise required from the user and by slow, labour-intensive work. Thus, there is a demand for automated analysis software.
        We have developed TopoStats, open-source software that processes and analyses raw AFM data to directly determine the topology of individual DNA molecules. Using this software, we are able to characterise structural properties of circular and linear DNA under different aqueous conditions, giving meaningful insights into their molecular biology. Moreover, we have shown that use of this software greatly reduces image processing time and error, setting a benchmark for new AFM image analysis techniques.

        Speaker: Harriet Read (University of Sheffield)
      • 50
        Light-Insight: Spatiotemporal profiling of human early brain organoid development.

        We have established long-term light sheet imaging of human brain organoid development for the spatiotemporal profiling of developmental morphodynamics underlying human brain patterning and morphogenesis. Here, we present and analyze a novel dual-channel, multi-mosaic, multi-protein labeling strategy combined with a demultiplexing approach. To achieve this, we have developed Light-Insight, a novel computational workflow leveraging morphological measurements and machine learning to demultiplex the differently labelled fluorescent cell lines. We simultaneously profile and track Actin, Tubulin, plasma membrane, Histone, and LAMB1 dynamics. We apply Light-Insight to monitor week-long human brain organoid development and unravel the impact of extrinsically provided extracellular matrix. Our workflow enables quantifying cell morphologies, alignment changes, and tissue-scale dynamics during tissue state transitions, including neuroepithelial induction, lumenization, and brain regionalization. We show that an extrinsically provided matrix enhances lumen expansion and cell elongation, whereas the intrinsic matrix, or encapsulation of the neural organoids in agarose, leads to altered cell and tissue morphologies. Overall, Light-Insight and our imaging approach quantify tissue state transitions and can be applied to any healthy or perturbed developmental or organoid system.

        Speaker: Gilles Gut (ETH Zürich)
      • 51
        Machine learning based Evaluation and Enhancement (EVEN) for optical microscopy

        The translation of raw data into quantitative information, the ultimate goal of microscopy-based research studies, requires standardized data pipelines to process and analyze the measured images. Image quality assessment (IQA) is an essential ingredient for the validation of each intermediate result, but it frequently relies on ground-truth images, visual perception, and complex quality metrics, highlighting the need for interpretable, automatic, and standardized methods for image evaluation. We present a workflow for the integration of quality metrics into machine learning models to obtain automatic IQA and artifact identification. In our study, a classification model is trained on no-reference quality metrics to discern high-quality images from measurements affected by experimental artifacts, and it is used to predict the presence of artifacts and the image quality in unseen datasets. We present an application of our method to the Evaluation and Enhancement (EVEN) of uneven illumination corrections for multimodal microscopy. We show that our method is easily interpretable and that it can be used to generate quality rankings and to optimize processed images.
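
        The classification step described above can be sketched as follows (our toy illustration with synthetic data, not the EVEN implementation): each image is summarized by a vector of no-reference quality metrics, and a standard classifier learns to flag artifact-affected measurements:

            import numpy as np
            from sklearn.ensemble import RandomForestClassifier

            rng = np.random.default_rng(0)
            X_train = rng.normal(size=(200, 5))        # per-image metric vectors
            y_train = (X_train[:, 0] > 0).astype(int)  # toy labels: artifact yes/no

            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_train, y_train)

            # Probability that each new image is artifact-affected.
            X_new = rng.normal(size=(10, 5))
            artifact_probability = clf.predict_proba(X_new)[:, 1]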

        Speaker: Elena Corbetta (Friedrich-Schiller-Universität Jena)
      • 52
        Mars, a molecule archive suite for analysis of single molecule properties from bioimages

        The development of single-molecule imaging approaches to study cellular machineries reconstituted from purified components is generating more diverse and complex datasets. These datasets can reveal the dynamic motion of protein assemblies, exchange kinetics of individual components and landscape of pathways supporting essential cellular reactions. However, few solutions exist for processing and reporting these information-rich datasets, posing challenges for reproducibility and wider dissemination. To address this issue, we developed Mars, a collection of Fiji commands written in Java for common single-molecule analysis tasks using a Molecule Archive architecture that is easily adapted to multistep workflows. Mars supports interactive biomolecule feature exploration through a graphical user interface written in JavaFX that provides charts, scriptable dashboards, and interactive image views. To further enhance collaboration, our recent efforts have been focused on a major revision of the Mars data storage model to support cloud storage locations as well as the addition of new commands and workflows to track the shapes of objects. Mars provides a flexible solution for reproducible analysis of image-derived properties, facilitating the discovery and quantitative classification of new biological phenomena.

        Speaker: Karl Duderstadt (Technical University of Munich (TUM))
      • 53
        MicroSplit: Semantic Unmixing of Fluorescent Microscopy Data

        Fluorescence microscopy, a key driver for progress in the life sciences, faces limitations due to the microscope's optics, fluorophore chemistry, and photon exposure limits, necessitating trade-offs in imaging speed, resolution, and depth. Here, we introduce MicroSplit, a deep learning-based computational multiplexing technique that enhances the imaging of multiple cellular structures within a single fluorescent channel, allowing faster imaging and reduced photon exposure. We show that MicroSplit efficiently separates up to four superimposed noisy structures into distinct, denoised channels. Using Variational Splitting Encoder-Decoder (VSE) networks, our approach can sample diverse predictions from a trained posterior over possible solutions. The diversity of these samples scales with the uncertainty in a given input, allowing us to estimate the true prediction errors by computing the variability between samples. We demonstrate the robustness of MicroSplit across various datasets and noise levels and show its utility to image more, to image faster, and to improve various downstream analysis tasks.
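
        The sampling-based error estimate mentioned above reduces to a simple recipe (our sketch; `sample_fn` is a hypothetical callable wrapping the trained network's sampling step):

            import numpy as np

            def pixelwise_uncertainty(sample_fn, x, n_samples: int = 20):
                """Draw several posterior samples for one input image and use
                their per-pixel spread as a proxy for prediction error."""
                samples = np.stack([sample_fn(x) for _ in range(n_samples)])
                return samples.mean(axis=0), samples.std(axis=0)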

        Speaker: Ashesh Ashesh (Human Technopole)
      • 54
        Morphodynamics of human brain organoid development

        Brain organoids enable mechanistic study of human brain development and provide opportunities to explore self-organization in unconstrained developmental systems. We have established long-term (~3 weeks) lightsheet microscopy on multi-mosaic neural organoids (MMOs) generated by sparse mixing of unlabelled iPSCs with fluorescently labeled human iPSCs (Actin-GFP, Histone-GFP, Tubulin-RFP, CAAX-RFP, and Lamin-RFP labels), which enables tracking of tissue morphology, cell behaviors, and subcellular features over weeks of organoid development. We developed an image analysis pipeline to segment and demultiplex multi-mosaic labels and use morphometrics to provide quantitative measurements of tissue and single-cell dynamics in organoid development. We combine single-cell transcriptomics and 2D spatial iterative immunostaining with 4D lightsheet imaging to identify cell morphotypes during tissue patterning and show that the organoids exhibit extracellular matrix (ECM) dependent tissue state transitions through lumenization and regionalization. The presence of an external ECM promotes cell polarization and alignment to form the neuroepithelium. Finally, we track the nuclei and cell morphotypes and show that luminogenesis and telencephalic patterning in neural organoids depend on the ECM microenvironment and mechanosensation via the HIPPO/WNT signalling pathways.

        Speaker: Akanksha Jain (ETH Zürich)
      • 55
        napari-signal-classifier: Leveraging Interactive Temporal Features Annotation to Classify Signals and Events

        Quantitative analysis of biological phenomena is practically a requirement in contemporary research, particularly when dealing with image data. AI-assisted tools simplify complex tasks like image segmentation, even for those without computational expertise. Supervised machine learning excels in classifying data from minimal manual annotations. While several software solutions exist for timelapse data analysis, they are often domain-specific and limited in scope.
        To generalize Python-based image analysis, napari offers a flexible plugin engine. We recently presented napari-signal-selector, a plugin for interactive annotation of temporal features connected to image data. Building on this, we now introduce napari-signal-classifier, a plugin that leverages user annotations to classify signals with a Random Forest Classifier applied to a large set of signal features calculated by tsfresh. For shorter events, it integrates classical template matching for detection, followed by event classification.
        This innovative tool empowers researchers to guide AI with their expertise, enhancing the accuracy and relevance of signal classification in biological data analysis.
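
        The underlying classification idea (outlined here in our own sketch, not the plugin's code) pairs tsfresh feature extraction with a scikit-learn classifier:

            import pandas as pd
            from sklearn.ensemble import RandomForestClassifier
            from tsfresh import extract_features

            # Two toy signals in long format: one row per time point.
            signals = pd.DataFrame({
                "id": [0] * 5 + [1] * 5,
                "t": list(range(5)) * 2,
                "value": [0, 1, 2, 1, 0, 3, 3, 3, 3, 3],
            })
            X = extract_features(signals, column_id="id", column_sort="t")
            X = X.dropna(axis=1)  # drop features that could not be computed

            y = [0, 1]  # user annotations, one label per signal
            clf = RandomForestClassifier().fit(X, y)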

        Speaker: Marcelo Leomil Zoccoler (Physics of Life (PoL) - TU Dresden)
      • 56
        Nellie: Automated organelle segmentation, tracking, and hierarchical feature extraction in 2D/3D live-cell microscopy

        Triulza Academy

        The analysis of dynamic organelles remains a formidable challenge, though key to understanding biological processes. We introduce Nellie, an automated and unbiased pipeline for segmentation, tracking, and feature extraction of diverse intracellular structures. Nellie adapts to image metadata, eliminating the need for user input. Nellie's preprocessing pipeline enhances structural contrast on multiple intracellular scales, allowing for robust hierarchical segmentation of sub-organellar regions. Internal motion capture markers are generated and tracked via a radius-adaptive pattern matching scheme, and used as guides for sub-voxel flow interpolation. Nellie extracts a plethora of features at multiple hierarchical levels for deep and customizable analysis. Nellie features a napari-based GUI that allows for code-free operation and visualization, while its modular open-source codebase invites customization by experienced users. We demonstrate Nellie's wide variety of use cases with two examples: unmixing multiple organelles from a single channel using feature-based classification, and training an unsupervised graph autoencoder on mitochondrial multi-mesh graphs to quantify latent space embedding changes following ionomycin treatment.

        Speaker: Austin E. Y. T. Lefebvre (Calico Life Sciences LLC)
      • 57
        nlScript: A framework for creating natural-language-based user interfaces

        nlScript is a novel toolbox for creating unified scripting interfaces based on natural language in applications where a large number of configuration options renders traditional Graphical User Interfaces unintuitive and intricate. nlScript's concept is based on 3Dscript, where users describe in natural English sentences how 3Dscript's rendering engine should animate volumetric microscope data sets in 3D. nlScript contains everything needed to implement similar interfaces for any application: an intuitive way to define the grammar for such a language, a dedicated editor that automatically infers auto-completion rules from the defined grammar, a parser, and an environment for executing user-defined scripts. In contrast to language-based agents built on Large Language Models (LLMs), which are inherently non-deterministic due to unconstrained user input, the languages created with nlScript are deterministic and thus also suitable for controlling critical processes. We demonstrate nlScript's applicability in two example applications: (1) configuring flexible time-lapse imaging experiments on a commercial microscope and (2) automatically and reproducibly generating screen recordings using our new software 'Screenplay'. nlScript is natively available in Java, Python and JavaScript.

        Speakers: Benjamin Schmid (OICE, Friedrich-Alexander-Universität Erlangen-Nürnberg), Ralph Palmisano (Optical Imaging Competence Centre Erlangen, FAU OICE)
      • 58
        Phenomic data exploration guides drug discovery in a human disease network

        By integrating multi-omics data into logical graph representations, systems biology aims to model biological interactions to be studied in health and disease. In this project, a multi-layered heterogeneous network was built from data made available in public databases, relating diseases and compounds to the human interactome. Morphological profiles of compound-treated cells were also included in the network, making the case for phenomic data contributing to network-based drug discovery. Diseases, genes, and compounds were linked via documented associations or similarity measurements, and this structural and/or functional closeness within the network was exploited by a random walk with restart algorithm to predict new links in the network. A two-step random-walk approach for the discovery of new links was implemented, aiming to minimize the time and computational load spent exploring pathways of lesser interest and/or lower probability of yielding new links. Four ligands predicted against a currently undruggable receptor related to cancer signaling and pathogen retention were tested in silico and resulted in favorable docking scores.
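
        For reference, a random walk with restart iterates the standard update below (textbook form, not specific to this work), where W is the column-normalized adjacency matrix of the heterogeneous network, p(0) the seed distribution over the query nodes, and r the restart probability; the converged p ranks candidate links:

            \[
              p^{(t+1)} \;=\; (1 - r)\, W\, p^{(t)} \;+\; r\, p^{(0)}
            \]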

        Speaker: Alba Guembe Mülberger (Universitat Autònoma de Barcelona)
      • 59
        Pollen to pixels: perception of heat stress by machine learning to predict fertility

        The increasing frequency of heatwaves strongly affects plant fertility and crop yields. While the whole plant suffers during hot periods, pollen development is especially sensitive. Our project aims to understand how some plants cope better than others with elevated temperatures. For this, we screened plants from populations with diverse genetic backgrounds using histochemical staining; however, manual analysis of the light microscopy images was inefficient. To address this, we developed a high-throughput automated image analysis method.
        We used Cellpose for image segmentation and created a custom Fiji pipeline for batch analysis. This pipeline pre-processes images and measures features such as the quantity, size, and staining intensity of pollen grains. From the resulting database, it proved challenging to interpret viability by looking at individual parameters. Therefore, we trained a machine learning model on a manually classified subset of the data, taking all parameters into account at once. This model accurately predicts pollen viability, enhancing our ability to screen for heat-resistant species.
        Our pipeline, using traditional Fiji-based analysis sandwiched between two machine learning methods, greatly improves our screening process.
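
        The segmentation step might look roughly as follows (our sketch assuming the Cellpose 2.x Python API and a hypothetical file name; the actual study drives Cellpose from a Fiji batch pipeline):

            import tifffile
            from cellpose import models

            img = tifffile.imread("pollen_plate_01.tif")  # hypothetical file
            model = models.Cellpose(model_type="cyto")
            masks, flows, styles, diams = model.eval(
                img, diameter=None, channels=[0, 0]  # grayscale, auto diameter
            )
            # `masks` labels each detected pollen grain for downstream
            # measurement of size and staining intensity.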

        Speaker: Daan van den Brink (Radboud University)
      • 60
        pyCyto: A Pythonic Cytotoxicity Analysis Pipeline

        In this work, we present a comprehensive Python tool designed for automated large-scale cytotoxicity analysis, focusing on immune-target cell interactions. With the capacity to handle 24-hour microscopic imaging with up to 100,000 interacting cells per frame, pyCyto offers a robust and scalable solution for high-throughput image analysis. Its architecture comprises four integrated layers: Pythonic classes, a command-line interface (CLI), YAML pipeline control, and a SLURM-distributed version. The Pythonic classes provide a flexible, object-oriented foundation that wraps extensive task-specific libraries for image preprocessing, deep learning-based segmentation, cell tracking, and GPU-accelerated contact and killing analysis. The CLI enables interaction with individual steps from the terminal, enabling scalable and repeatable batch bioimage analysis. For a streamlined end-to-end analysis workflow, pyCyto includes a YAML pipeline for high-level configuration that facilitates the configuration of analyses and the management of complex steps in a human-readable file format. Combined with the SLURM-distributed version, the YAML pipeline control file runs on compute resources at different scales, from edge computing to high-performance computing (HPC) clusters, significantly enhancing data throughput and analysis efficiency. The package was originally designed for multi-channel fluorescence microscopy images, with extensible adaptability to various live imaging modalities, including brightfield, confocal, light-sheet, and spinning-disk microscopes. The seamless integration of the various stages of bioimage analysis ensures detailed assessment of cellular activities, making pyCyto an invaluable tool for researchers in immunology and cell biology.

        Software available on Github: https://github.com/bpi-oxford/Cytotoxicity-Pipeline

        Speaker: Jacky Ka Long Ko (University of Oxford)
      • 61
        pymmcore-plus: a pure python way to control your microscope with Micro-Manager

        We are excited to introduce the community to pymmcore-plus, a new package for controlling microscopes through the open-source software Micro-Manager within a pure Python environment. pymmcore-plus is an extension of pymmcore, the original Python 3.x bindings for the C++ Micro-Manager core and, as such, it operates independently of Java, eliminating the need for Java dependencies. A key feature is its multi-dimensional acquisition engine implemented in pure Python that facilitates "on-the-fly" image processing and image analysis and enables "smart microscopy" capabilities. Since pymmcore-plus does not rely on the Java Graphical User Interface (GUI), we also developed a related package named pymmcore-widgets which provides a collection of Qt-based widgets that can be used in combination with pymmcore-plus to create custom GUIs (e.g. napari-micromanager).
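
        A minimal sketch of what acquisition control looks like with pymmcore-plus (based on its documented API and the Micro-Manager demo configuration; details such as channel names are illustrative):

            import useq
            from pymmcore_plus import CMMCorePlus

            core = CMMCorePlus.instance()
            core.loadSystemConfiguration()  # demo configuration by default

            # Declare a multi-dimensional acquisition: 5 timepoints,
            # two channels, and a 4 µm z-range in 0.5 µm steps.
            seq = useq.MDASequence(
                time_plan={"interval": 2, "loops": 5},
                channels=["DAPI", "FITC"],
                z_plan={"range": 4, "step": 0.5},
            )
            core.run_mda(seq)  # runs in a background thread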

        Speaker: Federico Gasparoli (Harvard University)
      • 62
        Quantification of microtubule-guided peroxisome migration using a hidden Markov chain model

        The migration of molecules and cellular organelles is essential for cellular functions. However, analysing such dynamics is challenging due to the high spatial and temporal resolution required and the need for accurate analysis of the diffusional tracks. Here, we investigate the migration modes of peroxisome organelles in the cytosol of living cells. Peroxisomes predominantly migrate randomly, but occasionally they bind to the cell's microtubular network and perform directed migration. So far, an accurate analysis of the switching between these migration modes has been missing. At a high acquisition rate, we collect temporal diffusion tracks of thousands of individual peroxisomes in the HEK-293 cell line using spinning disc fluorescence microscopy. We use a Hidden Markov Model (HMM) to i) automatically identify directed migration in the tracks and ii) quantify the migration properties to compare different experimental conditions. Comparing different cellular conditions, we show that knockout of the peroxisomal membrane protein PEX14 leads to decreased directed movement due to a lowered binding probability to the microtubule. Structural changes in the microtubular network explain the apparent eradication of directed movement upon nocodazole treatment.
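
        In outline (our toy sketch using hmmlearn, not the study's exact model), a two-state HMM over per-frame track features separates diffusive from directed segments:

            import numpy as np
            from hmmlearn.hmm import GaussianHMM

            rng = np.random.default_rng(1)
            # Toy per-frame step sizes for one track; real features could also
            # include directional persistence between consecutive steps.
            steps = np.abs(rng.normal(0.05, 0.02, size=(500, 1)))

            # State 0 ~ random diffusion, state 1 ~ directed migration.
            model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
            model.fit(steps)               # learn emissions and switching rates
            states = model.predict(steps)  # per-frame migration mode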

        Speaker: Carl-Magnus Svensson (Leibniz-HKI)
      • 63
        Quantifying intra-tumoral molecular subtype heterogeneity in MIBC from histological slides using a deep learning approach

        An intriguing question in cancer biology is the complex relationship between molecular profiles, as provided by transcriptomics, and their phenotypic manifestations at the cellular and the tissue scale. Understanding this relationship will enable us to comprehend the functional impact of transcriptomic deregulations and identify potential biomarkers in the context of precision medicine.
        One way to address this question is to investigate to what extent the transcriptome at the tissue level can be predicted from purely morphological data, such as Whole Slide Images (WSIs): gigapixel images of stained tissue sections. Deep learning models trained to predict an overall patient-level transcriptomic profile from WSIs often lack specificity due to the multitude of tissue types in a sample and to intra-tumoral heterogeneity.
        Here, we present a study where we predict molecularly defined subtypes of Muscle-Invasive Bladder Cancer (MIBC) exhibiting intra-tumoral molecular subtype heterogeneity (VESPER trial; N=417). For this, we selected homogeneous regions in the WSIs and performed RNA-seq for these regions. WSIs are very large and need to be subdivided into smaller images (tiles) that can be processed by deep neural networks. We designed a two-step workflow involving proxy labeling to learn tile-level predictions with region-based RNA-seq as ground truth. The predictions of a first model are used as proxy labels for a second model, following a "clean" tile filtering step. This refined model was then applied to whole slides to generate subtype heterogeneity maps.
        Our model predicted consensus molecular subtypes with a ROC AUC of 0.89, demonstrating that phenotypic manifestations predict underlying transcriptomic deregulation. Subtype maps revealed diverse heterogeneity profiles, quantified as the percentage of tumor tiles assigned to each subtype.

        Speaker: Alice Blondel (Centre for Computational Biology (CBIO), Mines Paris, PSL University)
      • 64
        Run-length based mathematical morphology for efficient processing of large 3D images

        "Biological imaging often generates three-dimensional images with a very large number of elements. In particular, microtomography results in images of several Giga-Bytes. The analysis of such data requires fast and efficient image processing software solutions, especially when user interaction is necessary. In some cases, algorithmic methods have been proposed but are seldom implemented within image processing software. We present here the use of run-length encoding for morphological processing of large binary images, and its application to plant biology.
        Run-length encoding allows to represent binary images with by reducing memory footprint. Moreover, efficient algorithms for morphological dilation and erosion have been proposed that take advantage of the encoding, drastically reducing computation time. We also investigated its application to morphological reconstruction.
        The methods were implemented and integrated within a graphical user interface. This allowed to apply a semi-automated image processing workflow of large 3D images of wheat grains acquired by synchrotron X-ray microtomography. Further work will consider the extension of the method to the processing of large label map images.
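
        To make the idea concrete, here is a small sketch (ours, not the presented implementation) of run-length encoding a binary row and dilating it directly on the runs, without touching individual pixels:

            import numpy as np

            def rle_encode(row: np.ndarray):
                """Encode a 1D binary array as (start, length) foreground runs."""
                padded = np.concatenate(([0], row.astype(np.int8), [0]))
                starts = np.flatnonzero(np.diff(padded) == 1)
                ends = np.flatnonzero(np.diff(padded) == -1)
                return list(zip(starts.tolist(), (ends - starts).tolist()))

            def dilate_runs(runs, radius: int, width: int):
                """Dilate along the row with a line element of half-width
                `radius`: extend each run and merge overlaps, so the cost
                depends on the number of runs, not the number of pixels."""
                merged = []
                for start, length in runs:
                    s = max(0, start - radius)
                    e = min(width, start + length + radius)
                    if merged and s <= merged[-1][1]:
                        merged[-1][1] = max(merged[-1][1], e)
                    else:
                        merged.append([s, e])
                return [(s, e - s) for s, e in merged]

            # rle_encode(np.array([0, 1, 1, 0, 0, 1]))  ->  [(1, 2), (5, 1)]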

        Speaker: David Legland (INRAE)
      • 65
        Segmentation of budding yeast organelles from bright field time-lapses

        Budding yeasts growing in micro-fluidic devices constitute a convenient model to study how eukaryotic cells respond and adapt to sudden environmental change. Many of the intracellular changes needed for this adaptation are spatial and transient: they are observed in the dynamic localization of proteins from one sub-cellular compartment to another. These dynamics can be captured experimentally using fluorescence time-lapse microscopy by tagging both the proteins of interest and the compartments that we expect them to occupy, and measuring co-localization. Experimentally, the number of tags in a single live cell is limited by phototoxicity, available spectral space, and production cost. To observe the relative dynamics of multiple proteins at once, it would therefore be convenient to remove the need for compartment markers without compromising on obtaining quantitative, rather than qualitative, data. Here, we discuss the use of image-to-image translation to replace the compartment tags with deep learning-based predictions obtained from bright field images. We compare the relative difficulty of predicting different compartments from bright field images, and compare the protein dynamics obtained from predictions to those obtained using standard co-localization measurements. Finally, we provide a case study of glucose starvation to demonstrate the usefulness of such a setup.

        Speaker: Diane Adjavon (HHMI Janelia/University of Edinburgh)
      • 66
        Semi-automatic tracing and analysis of neurons in Brainbow images

        Reconstructing the connectome is a complex task due to the diversity of neuronal forms and the density of the environment.
        At LOB, in collaboration with IDV, we have developed a microscope combining multicolor two-photon excitation by wavelength mixing with serial block acquisition. This imaging method (Chroms), applied to a Brainbow retrovirus-labeled mouse brain, provides several millimeters of multichannel, 3D, submicron-resolution brain data in which thousands of independent axons can be tracked in a single image.
        We developed a dedicated semi-automatic tracing and quantification method for these dense, multi-channel data. A napari interface lets the user specify a neuron's starting position; then, by successive local iterations, the complete neuron is automatically traced. Local tracing is performed in two stages: a 3D U-Net artificial neural network trained on manual ground truth filters out everything but the neuron of interest, which is then traced using APP2, a classical tracing algorithm. The napari interface enables tracing errors to be corrected and visualized live.
        These developments in automatic Brainbow image analysis are part of the napari ecosystem, and will enable a quantitative understanding of the development and complexity of the connectome.

        Speaker: Clément Caporal (Ecole Polytechnique/CNRS)
      • 67
        Shaping progress in biomedical image processing with project-based learning

        Methods in biomedical image processing are changing rapidly due to the intense attention on computer vision and machine learning. Keeping up with trends and testing interesting routes in biomedicine is therefore very challenging. With this report, I would like to showcase a project-based teaching scheme that focuses on a single biomedical topic and aims to create progress in this field. Here, we focus on detecting and identifying dendrites and dendritic spines in light microscopy data. We evaluate how foundation models can help with semantic segmentation, how custom distance losses can enforce real anatomical constraints, and whether weakly supervised and human-in-the-loop training strategies can improve adaptation to novel data. By testing the students' ability to annotate dendritic spines before and after the three-month interval, we determine inter- and intra-rater reliabilities of naive, non-expert annotators and compare them to existing expert annotations. This allows us not only to advance the field in terms of biomedical image processing, but also to highlight the importance of gaining knowledge by handling and interacting with the data.

        Speaker: Andreas Kist (Friedrich-Alexander-University Erlangen-Nürnberg)
      • 68
        TopoStats: taking AFM analysis to new heights

        High-resolution atomic force microscopy (AFM) provides unparalleled visualisation of molecular structures and interactions in liquid, achieving sub-molecular resolution without the need for labelling or averaging. This capability enables detailed imaging of dynamic and flexible molecules like DNA and proteins, revealing their own conformational changes as well as interactions with one another. Despite its powerful potential, AFM’s application in the biosciences is hindered by challenges in image analysis, including inherent imaging artefacts and complexities in data extraction. To address this, we developed TopoStats, an open-source Python package for high-throughput processing and analysis of raw AFM images. TopoStats provides an automated pipeline for file loading, image filtering, cleaning, segmentation, and feature extraction, producing clean, flattened images and detailed statistical information on single molecules. We showcase TopoStats’ capabilities by demonstrating its use for automated quantification across a range of samples - from DNA and DNA-protein interactions to larger-scale materials science applications. Our aim is for TopoStats to significantly enhance the quality of AFM data analysis, support the development of robust and open analytical tools, and contribute to the advancement of AFM research worldwide.

        Speaker: Laura Wiggins (University of Sheffield)
      • 69
        Unsupervised Model Selection Through Test Time Perturbation Consistency

        For a biologist with a new dataset, the cost of labelling data and training a new ML model can be prohibitive for downstream analysis. One solution is to utilise pre-trained networks in a transfer learning approach. Community efforts such as the BioImage Model Zoo have increased the availability of pre-trained models. However, how should the best pre-trained model for a particular task and dataset be selected? Qualitative approaches comparing datasets are common practice but unreliable, as imperceptible differences in dataset distributions impact transfer success, and they fail to consider the properties of the model itself. A quantified approach factoring in both datasets and models would enable a more systematic selection process, facilitating more effective reuse of existing models. I will present preliminary work on an unsupervised transferability heuristic that aims to rank pre-trained models with respect to their direct transfer performance on a target dataset. The approach centres on probing the consistency of model predictions under test-time perturbations of the target data, providing a conveniently obtainable unsupervised consistency score that correlates with model performance and allows suitable model selection for the target task.
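
        One possible form of such a score (our hypothetical sketch; the perturbations, prediction function, and IoU-based agreement are illustrative choices, not necessarily the presented method's exact ingredients):

            import numpy as np

            def consistency_score(model, image, perturbations, predict_fn):
                """Mean IoU between the model's prediction on the original
                image and its predictions on perturbed versions; higher
                means more consistent. All arguments are hypothetical:
                `perturbations` is a list of callables, `predict_fn`
                returns a binary mask. For geometric perturbations, the
                prediction would first be mapped back to the original frame."""
                ref = predict_fn(model, image).astype(bool)
                scores = []
                for perturb in perturbations:
                    pred = predict_fn(model, perturb(image)).astype(bool)
                    union = np.logical_or(ref, pred).sum()
                    inter = np.logical_and(ref, pred).sum()
                    scores.append(inter / union if union else 1.0)
                return float(np.mean(scores))

            # Rank candidate pre-trained models by this score on the target
            # data and pick the most consistent one.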

        Speaker: Joshua Talks (EMBL)
      • 70
        Untangling spaghetti: using BigTrace plugin to analyze 3D filaments in time and space

        Recent developments in microscopy image analysis are heavily biased towards blob-like structures (cells, nuclei, vesicles), while tools for the exploration of filaments or curvilinear objects (spines, biopolymers, neurites, vessels, cilia) are often underrepresented. To address this gap, we developed the BigTrace plugin for Fiji, powered by the imglib2 library and using BigVolumeViewer for visualization. It works with arbitrarily large (and small) 3D multi-channel timelapse images and allows lazy loading from proprietary formats and on-the-fly deskewing. BigTrace semi-automatically traces filament structures and performs spline interpolation and extraction of the underlying volumetric intensity data for analysis. Its application will be illustrated by three use cases. The first is the tracing of microtubules in the dense arrays of cardiovascular cells obtained using expansion microscopy. The second is the analysis of cytoplasmic bridges in volumetric timelapse lattice light-sheet recordings of 3D dividing cells. The last is the extraction of basal bodies and cilia from in situ organoid cultures of airway epithelia for protein mapping using averaging of 3D volumes.

        Speaker: Eugene A. Katrukha (Utrecht University)
      • 71
        User-oriented tools to characterize epithelia dynamics

        Epithelial tissue dynamics is tightly controlled in space and time in numerous developmental processes, and this precision is essential for the correct formation and homeostasis of organs. Quantifying epithelial dynamics is an important step in understanding this regulation, but it usually involves huge movies spanning thousands of cells and several hours. Thus, efficient and user-friendly tools are required to assist biologists in extracting quantitative information from such big data. In this context, I will present three pipelines: DeXtrusion, an open-source pipeline based on recurrent neural networks to automatically detect rare cellular events; LocalZBackProj, a Python utility to recover 3D cell positions from 2D projections; and EpiCure, a napari-based plugin to ease manual correction of segmentation and tracking of epithelia. The main focus of our tools, developed in back-and-forth interaction with users in our institute, is to reduce manual annotation time and to offer easy, interactive visualization and quantification.

        Speaker: Gaëlle Letort (CNRS/Institut Pasteur)
      • 72
        Using deep learning on single-cell images to unlock novel disease signatures and candidate therapeutics

        The lack of screenable phenotypes in scalable cell models has limited progress in drug discovery and early diagnostics for complex diseases. Here we present a novel unbiased phenotypic profiling platform that combines high-throughput cell culture automation, Cell Painting, and deep learning. We built various models to extract meaningful features at the single-cell level, including deep learning embeddings and fixed measurements extracted using our in-house tool ScaleFExSM. Using these features, we leveraged different aggregation levels to highlight phenotypes hidden by cell and donor variation as well as other known confounders. Cells were then characterized and phenotyped to deliver interpretable outputs. We applied our platform to primary fibroblasts and iPSC-derived neurons from large cohorts of disease-affected donors and carefully matched controls. The pipeline was also used to characterize drug-induced shifts and their effects on diseases of interest, to see whether they had a beneficial effect on the affected cells. Combined with the large cell line repository available at NYSCF, the presented platform holds great potential to uncover morphological signatures of different diseases and conditions and to advance precision drug discovery.

        Speaker: Bianca Migliori (The New York Stem Cell Foundation)
      • 73
        Using Nextflow for scalable and reproducible batch image analysis

        Deriving scientifically sound conclusions from microscopy experiments typically requires batch analysis of large image data sets. It is critical that these analysis workflows are reproducible. In addition, it is advantageous if the workflows are scalable and can be readily deployed on high-performance compute infrastructures.
        Here, we will present how the established Nextflow workflow management system can be used in the context of bioimage analysis. We will detail several advantages, such as combining analysis tools developed in different programming languages, using conda and containers for reproducible deployment of the analysis steps, using the Nextflow reporting tools for workflow optimisation, leveraging built-in error-handling strategies, and convenient deployment either locally or on a Slurm-managed compute cluster. Finally, we will discuss how the nf-core specifications could allow our community to develop a modular analysis ecosystem with shareable tools and workflows.

        Speaker: Christian Tischer (EMBL)
      • 74
        Interpreting Microscopy Images with Machine Learning

        Machine learning (ML) is becoming pivotal in life science research, offering powerful tools for interpreting complex biological data. In particular, explainable ML provides insights into the reasoning behind model predictions, highlighting the data features that drove the model outcome. Our work focuses on building explainable ML models for microscopy images. These models not only classify cell fates but also reveal the underlying data patterns and features influencing these classifications. Specifically, we have developed models to classify individual lung cancer cell fates, such as proliferation and death, from live-cell microscopy data. By leveraging explainable ML techniques, we gained insights into the decision-making process of these models, revealing the key cellular features that determine whether a cell would proliferate or die. The combination of ML and specialised image acquisition will enable us to address specific biological questions and uncover novel insights about underlying cellular mechanisms. This work demonstrates the potential of explainable ML in enhancing our understanding of complex biological processes, and how we can gain novel knowledge from images.

        Speaker: Inês Martins Cunha (SciLifeLab, University of Stockholm)
      • 75
        A Bayesian solution to count the number of molecules within a diffraction limited spot

        HT & Triulza Academy

        Flash Talk - HT Auditorium; Poster Session - Triulza Academy

        We propose a new method of molecular counting, or inferring the number of fluorescent emitters when only their combined intensity contributions can be observed. This problem occurs regularly in quantitative microscopy of biological complexes at the nano-scale, below the resolution limit of super-resolution microscopy, where individual objects can no longer be visually separated. Our proposed solution directly models the photo-physics of the system, as well as the blinking kinetics of the fluorescent emitters as a fully differentiable hidden Markov model. Given an ROI of a time series image containing an unknown number of emitters, our model jointly estimates the parameters of the intensity distribution, their blinking rates, as well as a posterior distribution of the total number of fluorescent emitters. We show that our model is consistently more accurate and increases the range of countable emitters by a factor of two compared to current state-of-the-art methods, which count based on autocorrelation and blinking frequency. Furthermore, we demonstrate that our model can be used to investigate the effect of blinking kinetics on counting ability, and therefore can inform experimental conditions that will maximize counting accuracy.

        Speaker: Alexander Hillsley (Howard Hughes Medical Institute)
      • 76
        An Image Analysis Pipeline for Quantifying the Spatial Distribution of Fluorescently Labeled Cell Markers in Stroma-Rich Tumors

        Dense, stroma-rich tumors with high extracellular matrix (ECM) content are highly resistant to chemotherapy, the standard treatment. Determining the spatial distribution of cell markers is crucial for characterizing the mechanisms of potential targets. However, an end-to-end computational pipeline has been lacking. Therefore, we developed a robust image analysis pipeline for quantifying the spatial distribution of fluorescently labeled cell markers relative to a modeled stromal border. This pipeline stitches together common models and software: StarDist for nuclei detection and boundary inference, QuPath’s Random Forest model for cell classification, pixel classifiers for stromal region annotation, and signed distance calculation between cells and their nearest stromal border. We also extended QuPath to support sensitivity analysis to ensure result consistency across the parameter space. Additionally, it supports comparing classification results to statistically propagated expert prior knowledge across images. Notably, our pipeline revealed that the signal intensity of Ki67 and pNDRG1 in cancer cells peaks around the stromal border, supporting the hypothesis that NDRG1, a novel DNA repair protein, directly links tumor stroma to chemoresistance in pancreatic ductal adenocarcinoma. The codebase, image datasets, and results are available at https://github.com/HMS-IAC/stroma-spatial-analysis.
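
        The distance step can be pictured with a short sketch (ours, using SciPy rather than the QuPath implementation): the signed distance map is negative inside the stroma and positive outside, and is sampled at cell centroids:

            import numpy as np
            from scipy import ndimage

            def signed_distance_to_border(stroma_mask: np.ndarray) -> np.ndarray:
                """Signed Euclidean distance (in pixels) to the border of a
                boolean stromal mask: negative inside, positive outside."""
                outside = ndimage.distance_transform_edt(~stroma_mask)
                inside = ndimage.distance_transform_edt(stroma_mask)
                return outside - inside

            # Cell-level readout: sample the map at each detected cell centroid
            # (`centroids` is a hypothetical (N, 2) array of row/col indices).
            # signed_d = signed_distance_to_border(mask)[tuple(centroids.T)]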

        Speaker: Antoine Ruzette (Harvard Medical School)
    • 16:30
      🚶‍♀️ Time to walk back from Triulza Academy to HT

    • 💻 Workshop Session #2 Human Technopole - Meeting Rooms

      • 77
        Advancements in Cell Tracking: The Mastodon Solution for Large-Scale Datasets Lower Egg Room (Human Technopole)

        Modern microscopy technologies, such as light sheet microscopy, enable in toto 3D imaging of live samples with high spatial and temporal resolution. These images will be in 3D over time and may include multiple channels and views. The computational analysis of these images promises new insights into cellular, developmental, and stem cell biology. However, a single image can amount to several terabytes. Consequently, the automated or semi-automated analysis of these large images can generate a vast amount of annotations. The challenge lies in dealing with very large images and managing the large volume of annotations derived from them, which makes interacting with and analysing the data particularly difficult. In this workshop, participants will learn how to use Mastodon, an open-access and user-friendly large-scale tracking and track-editing framework designed to tackle the challenges of tracking in Big Data.

        Speaker: Johannes Girstmair (MPI-CBG)
      • 78
        Building Your Own Chatbot for BioImage Analysis Mezzanine Room (Human Technopole)

        "The BioImage.IO Chatbot (https://github.com/bioimage-io/bioimageio-chatbot) is a chat assistant we created to empower the bioimaging community with state-of-the-art Large Language Models. Through the help of a series of extensions, the BioImage.IO Chatbot is able to query documentation and retrieve information from online databases and image.sc forum, as well as generating code and executing bioimage analysis tasks. To make it even more useful for the community, we developed an extension mechanism for the chatbot to interface with the user's own tools. Custom chatbots can be created by developing extensions for your own specific tasks, such as controlling a microscope or running a bioimage analysis workflow.

        During the workshop, participants will engage in the practical aspects of chatbot extension development, learning to tailor the chatbot’s functionalities to meet the specific needs of diverse research projects. This approach democratizes the application of complex computational tools, encouraging a collaborative environment where researchers can both share and enhance bioimage analysis techniques."

        Speaker: Caterina Fuster Barceló (Universidad Carlos III de Madrid)
      • 79
        FAIR IPA - A Project Template Upper Egg Room - Small (Human Technopole)

        We will introduce the IPA Project Template, which offers a structured way to organize image processing and analysis projects. Our template is designed to support users throughout their projects, providing space for experimentation, organizing proven methods into reproducible steps, tracking processing runs, and facilitating documentation.
        During the workshop, participants will create their own IPA project and learn the basics of the FAIR principles with respect to image processing and analysis.

        Speakers: Jan Eglinger (Friedrich Miescher Institute for Biomedical Research, FMI), Tim-Oliver Buchholz (Friedrich Miescher Institute for Biomedical Research)
      • 80
        Introduction to Piximi Upper Egg Room - Big (Human Technopole)

        In this workshop, we will demonstrate Piximi, an images-to-discovery web application that allows users to perform deep learning without installation. We will demonstrate Piximi's ability to load images, run segmentation, perform classifications, and create measurements, all without the data leaving the user's computer.

        Speaker: Nodar Gogoberidze (Broad Institute)
      • 81
        QuPath for Python Programmers HT Auditorium (Human Technopole)

        QuPath is a popular, open-source platform for visualizing, annotating and analyzing complex images - including whole slide and highly multiplexed datasets. QuPath is written in Java and scriptable with Groovy, which makes it portable, powerful, and... sometimes a pain if you'd rather be working in Python (sometimes we would too).

        This workshop will show how QuPath and Python can work happily together, and how Python programmers can benefit from incorporating some QuPath into their skillset. It is especially relevant for deep learning aficionados who want to use QuPath for annotation and visualization, as well as for existing QuPath users who want to dig deeper into their data with Jupyter notebooks.
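
        For illustration, a hedged sketch of one such bridge using paquo, an open-source Python library for reading QuPath projects (this may not be the exact tooling used in the workshop, and API details vary by version):

            from paquo.projects import QuPathProject

            # open an existing QuPath project read-only and list annotation counts
            with QuPathProject("my_project.qpproj", mode="r") as qp:
                for image in qp.images:
                    print(image.image_name, len(image.hierarchy.annotations))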

        Speaker: Allan O'Callaghan (University of Edinburgh)
      • 82
        Segment Anything for Microscopy PIT.P01.026 (Human Technopole)

        The workshop will show how Segment Anything, a deep learning model for interactive instance segmentation, can be applied to segmentation and annotation tasks in biomedical images. We will first give an overview of different approaches that apply this method in the biomedical domain. Then we will introduce our tool, Segment Anything for Microscopy, which provides specialized models for microscopy and a napari plugin for automatic and interactive data annotation. The main part of the workshop will consist of a hands-on tutorial that teaches participants how to use the tool. We will first work on example data and, at the end of the workshop, offer the opportunity to work on participant data.
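
        For illustration, a minimal sketch using the upstream Segment Anything API for automatic instance mask generation; Segment Anything for Microscopy builds on the same predictor with microscopy-specific models and a napari plugin (checkpoint path and model type below are placeholders):

            import numpy as np
            from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

            sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # downloaded weights
            mask_generator = SamAutomaticMaskGenerator(sam)

            image = np.zeros((512, 512, 3), dtype=np.uint8)  # replace with an RGB image
            masks = mask_generator.generate(image)           # list of per-instance dicts
            print(len(masks), "instances;", masks[0]["area"] if masks else "no masks")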

        Speakers: Anwai Archit (University of Göttingen), Constantin Pape (University of Göttingen), Luca Freckmann (University of Göttingen), Sushmita Nair (University of Göttingen)
      • 83
        The Spatial Transcriptomics as Images Project (STIM) PIT.P01.011 (Human Technopole)

        The Spatial Transcriptomics as Images Project (STIM) is a framework for storing, interactively viewing & 3D-aligning, as well as rendering sequence-based spatial transcriptomics data (e.g. Slide-Seq), which builds on the powerful libraries ImgLib2, N5, BigDataViewer and Fiji (https://github.com/PreibischLab/STIM). In contrast to the "classical" sequence analysis space, STIM relies on scalable, tried-and-tested image processing solutions to work with spatial sequencing data by treating the data as an image consisting of irregularly-spaced, high-dimensional (each gene is treated as a channel) pixels/measurements.

        In the workshop we will teach how to import, view, and align the data using command-line tools and Fiji, as well as interactive tools that build on BigDataViewer. We will start with a small toy dataset and show how to work with larger data.

        Speakers: Michael Innerberger (Janelia Research Campus), Stephan Preibisch (HHMI Janelia)
      • 84
        TopoStats: a tool for automated processing and quantification of AFM images PIT.P02.029 (Human Technopole)

        Our team has created TopoStats, an open-source Python toolkit for automated processing and analysis of Atomic Force Microscopy (AFM) images. We wish to provide a hands-on session to guide users through the TopoStats workflow, from raw AFM file formats through to molecule segmentation, quantification and biological interpretation. We will demonstrate how users can install TopoStats and discuss different parameters that can be configured to suit users' own analyses. Attendees will be provided with comprehensive user guides and additional training materials to aid understanding.

        Speaker: Laura Wiggins (University of Sheffield)
    • ✨ Invited Speaker: Title to be announced (Sven Dorkenwald, Shanahan Foundation Fellow, Allen Institute) HT Auditorium

    • 09:30
      🚶‍♀️ Shuffle time Human Technopole

      Time to move from the Auditorium to the meeting rooms for the Workshop Session

    • 💻 Workshop Session #3 Human Technopole - Meeting Rooms

      • 85
        DaCapo: a modular deep learning framework for scalable 3D image segmentation Mezzanine Room (Human Technopole)

        DaCapo is a deep learning library tailored to expedite the training and application of existing machine learning approaches on large, near-isotropic image data. It addresses the need for scalability and 3D-aware segmentation networks, and is developed with a modular and open-source framework for training and deploying deep learning solutions at scale. DaCapo breaks through terabyte-sized (teravoxels) segmentation ceilings by incorporating established segmentation approaches with blockwise distributed deployment using local, cluster, or cloud deployment infrastructures. The different aspects of DaCapo's functionality have been separated and encapsulated into submodules that can be selected depending on a user's specific needs. These submodules include mechanisms particular to the task type (e.g., semantic vs. instance segmentation), neural network architecture (including pretrained models), compute infrastructure and paradigm (e.g., cloud or cluster, local or distributed), and data loading, among others.

        Speakers: David Ackerman (Janelia Research Campus), Jeff Rhoades (HHMI Janelia)
      • 86
        Deformable 2D and 3D big data image registration and transformation with BigWarp PIT.P01.026 (Human Technopole)

        BigWarp is an intuitive tool for non-linear manual image registration that can scale to terabyte-sized image data. In this workshop, participants will learn to perform image registration with BigWarp, apply transformations to large images, import and export transformations to and from other tools, and fine-tune the results of automatic registration algorithms. BigWarp makes heavy use of the N5-API to store and load large image and transformation data and metadata using the current NGFF formats HDF5, Zarr, and N5. In addition to basic usage, the concepts, tips, and best practices discussed will extend to other registration tools and help users achieve practical success for realistic and challenging data. Users will also get an introduction to an excellent use case for OME-NGFF.
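
        For illustration, a hedged sketch of chunked access with zarr-python, which reads the kinds of Zarr/N5 containers BigWarp works with (the container path and dataset layout are illustrative assumptions):

            import zarr

            root = zarr.open("exported_from_bigwarp.zarr", mode="r")
            dataset = root["setup0/timepoint0/s0"]     # one resolution level (layout varies)
            block = dataset[0, :256, :256]             # only the requested chunks are read
            print(dataset.shape, dataset.chunks, block.mean())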

        Speaker: John Bogovic (HHMI Janelia)
      • 87
        Introduction to CellProfiler PIT.P02.029 (Human Technopole)

        In this workshop, we will introduce users to CellProfiler, software that allows you to create reproducible image analysis workflows without knowing how to code. Users will learn how to input multi-channel data into CellProfiler, how to perform image processing, object finding, and measurement steps, as well as how to create reproducible outputs.

        Speaker: Paula Llanos (Broad Institute)
      • 88
        Multiplexed tissue imaging: tools and approaches Upper Egg Room - Small (Human Technopole)

        In this workshop, we present computational tools, workflows and pipelines for the processing and analysis of multiplexed tissue imaging data. In the first part of the workshop, we discuss commonly used analysis strategies and present a set of existing pipelines for multiplexed image processing, including MCMICRO (Schapiro et al., 2022), Steinbock (Windhager et al., 2023) and PIPEX (https://github.com/CellProfiling/pipex). In the second part, we showcase selected analyses of spatially resolved single-cell data extracted from multiplexed tissue images in a hands-on fashion, demonstrating tools such as SpatialData (Marconato et al., 2024) and TissUUmaps (Pielawski et al., 2023). The workshop is intended to provide researchers with an overview of recent tools in the multiplexed imaging field, enabling them to choose a suitable approach for the analysis of their own multiplexed tissue imaging data.

        Speakers: Agustin Corbat (SciLifeLab), Anna Klemm (SciLifeLab), Frederic Ballllosera Navarro (Stanford University), Jonas Windhager (SciLifeLab), Kristína Lidayová (SciLifeLab)
      • 89
        Open-source bio-image analysis using surface meshes Lower Egg Room (Human Technopole)

        In some cases, meshes have advantages over voxels for image analysis (lightweight representation, surface features, ...).
        In this workshop, participants will learn:
        - How to convert from a voxel representation to a mesh, and vice versa (see the sketch below).
        - How to use Blender and napari, as well as some dedicated Python modules, to filter, process, or extract measurements from meshes.
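
        For illustration, a minimal sketch of the voxel-to-mesh direction with scikit-image's marching cubes and trimesh, plus voxelization back (the toy volume is an assumption):

            import numpy as np
            from skimage import measure
            import trimesh

            volume = np.zeros((64, 64, 64), dtype=float)
            volume[16:48, 16:48, 16:48] = 1.0                 # toy cube "object"

            # voxels -> mesh
            verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
            mesh = trimesh.Trimesh(vertices=verts, faces=faces)
            print(mesh.is_watertight, mesh.area, mesh.volume)  # surface features

            # mesh -> voxels: occupancy grid at unit pitch
            vox = mesh.voxelized(pitch=1.0)
            print(vox.matrix.shape)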

        Speaker: Clément Benedetti (CNRS)
      • 90
        PlantSeg 2.0: powerful and user-friendly tissue segmentation PIT.P01.011 (Human Technopole)

        The workshop will introduce PlantSeg 2.0: a new version of the popular PlantSeg tool for segmentation of cells in tissues based on membrane staining, in 2D and in 3D. Together with the participants, we will step through the new napari-based GUI, learning to choose pre-trained networks, set the parameters, correct the results and launch headless jobs. No prior experience with PlantSeg is needed.

        Speakers: Lorenzo Cerrone (University of Zurich), Qin Yu (EMBL Heidelberg)
      • 91
        Reproducible Scientific Figure Creation with Fiji/ImageJ and Inkscape HT Auditorium (Human Technopole)

        In this joint hands-on workshop, we (Jerome Mutterer and Jan Brocher) will introduce participants to the flexible potential of ImageJ or Fiji alongside Inkscape for crafting scientific figures, slides, or posters. Our aim is not only to effectively incorporate traditional elements such as scale bars or labels, but also to integrate reproducible content whenever possible. The workshop demonstrates simple figure panel creation steps in combination with additional scientific data visualizations, accompanied by the necessary code (from ImageJ, Python, or R) for their reproducible creation. By embedding code as well as metadata directly within the figure document, we ensure traceability of image processing methods and simplify the reproduction of identical visualizations with new datasets. Furthermore, we prioritize data preservation and demonstrate how to uphold best image quality and data integrity.

        Speakers: Jan Brocher (BioVoxxel), Jerome Mutterer (IBMP)
      • 92
        Versatile Deep Learning for 2D and 3D Bioimage Analysis with CytoDL Upper Egg Room - Big (Human Technopole)

        This hands-on workshop will introduce you to CytoDL, a powerful deep learning framework developed by the Allen Institute for Cell Science. CytoDL is designed to streamline the analysis of biological images, including 2D and 3D data represented as images, point clouds, and tabular formats. We will cover (A) getting single-cell and nucleus instance segmentations from image datasets from the Allen Institute for Cell Science [1, 2], and (B) using the single-cell images from (A) to extract unsupervised features to detect morphological perturbations of intracellular structures [2]. References: [1] Viana, Matheus P., et al. "Integrated intracellular organization and its variations in human iPS cells." Nature 613.7943 (2023): 345-354. [2] Donovan-Maiye, Rory M., et al. "A deep generative model of 3D single-cell organization." PLOS Computational Biology 18.1 (2022): e1009155.

        Speaker: Matheus Palhares Viana (Allen Institute for Cell Science)
    • 11:15
      ☕ Coffee break HT Auditorium / Covered Piazza

    • ✨ Invited Speaker: Title to be announced (Tingying Peng, Helmholtz Zentrum München) HT Auditorium

    • 🔬 Sponsor Talk: ZEISS Microscopy Software Ecosystem for Image Acquisition and Analysis HT Auditorium

      • 93
        ZEISS Microscopy Software Ecosystem for Image Acquisition and Analysis HT Auditorium (Human Technopole)

        Speaker: Francesco Biancardi (ZEISS)
    • 12:30
      🚶‍♀️ Walking time to Triulza Academy

      Time to walk to Triulza Academy (venue for lunch and poster session)

    • 13:00
      🍝 Lunch Triulza Academy

    • 👩‍🏫 Poster Session #2 Triulza Academy

      • 94
        multiview-stitcher: a modular and extensible toolbox for scalable image registration and fusion in Python

        Multi-view and multi-tile imaging offer great potential for enhancing resolution, field of view, and penetration depth in microscopy. Here we present multiview-stitcher, a versatile and modular python package for 2/3D image reconstruction that leverages registration and fusion algorithms readily available within the ecosystem for efficient use within an extensible and standardized framework. A key feature consists of scalability to large datasets, which is achieved by means of chunk-aware processing with support for full affine spatial transformations. Multiview-stitcher provides interoperable user interfaces in the form of a modular python API for use as a library and a napari plugin for interactive stitching of image layers. Importantly, it allows users to quickly adopt and compare the results of applying different reconstruction modalities within custom workflows. To demonstrate the usability and versatility of multiview-stitcher we showcase its use for reconstructing multi-view and multi-tile (light sheet) acquisitions in different configurations, 3D high content screening datasets and cryo-EM montages.

        Speaker: Marvin Albert (Institut Pasteur)
      • 95
        3D quantitative image analysis of cell fate acquisition during lateral inhibition

        Lateral inhibition mediates the adoption of alternative cell fates to produce regular cell fate patterns, with fate symmetry breaking (SB) relying on the amplification of small stochastic differences in Notch activity via an intercellular negative feedback loop. Here, we used quantitative live imaging of endogenous Scute (Sc) to study the emergence of Sensory Organ Precursor cells (SOPs) in the pupal abdomen of Drosophila. We developed a 3D image analysis pipeline that denoises the nuclei using Noise2Void, segments them by intensity-based thresholding, and filters undesired objects based on different criteria. We then tracked the SOP nuclei with Mastodon and developed an interactive webpage based on Dash-Plotly for inspecting and correcting the SOPs’ neighbors. We defined a fate difference index (FDI) based on the Sc signals between a SOP and its neighbors to identify the time point of SB and study the cell fate divergence around it. In addition, we used EpySeg and Tissue Analyzer to segment and track the apical areas of all cells; changes in neighboring cells were then inspected over time to score cell-cell intercalation.
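
        For illustration, a minimal sketch of the thresholding and filtering steps with scikit-image, assuming a denoised 3D stack and an illustrative size cutoff:

            import numpy as np
            from skimage.filters import threshold_otsu
            from skimage.measure import label, regionprops

            def segment_nuclei(stack, min_volume=500):
                mask = stack > threshold_otsu(stack)          # intensity-based threshold
                labels = label(mask)                          # connected components
                keep = np.zeros_like(labels)
                for region in regionprops(labels):
                    if region.area >= min_volume:             # drop small debris
                        keep[labels == region.label] = region.label
                return keep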

        Speaker: Minh-Son Phan (Institut Pasteur Paris)
      • 96
        A Nextflow-based end-to-end workflow for decoding of in situ sequencing data

        The analysis of gene expression within spatial contexts has greatly advanced our understanding of cellular interactions. Spatial transcriptomics, developed through both indexing-based sequencing and microscopy-based technologies, offers unique benefits and challenges. Indexing-based methods achieve high resolution but are costly, while microscopy-based methods, such as in situ sequencing (ISS), provide a cost-effective alternative for targeted transcriptome profiling. ISS uses multicolor cyclic imaging to decode barcoded transcripts in single cells, allowing extensive multiplexing with no theoretical limit on imaging cycles. This technique has been successfully applied to profiling transcriptomes in mouse brain and human breast cancer tissues and forms the basis of the Xenium platform for various tissues. We introduce ‘iss-nf,’ a Nextflow-based workflow for processing ISS datasets from raw images to transcript maps. This end-to-end pipeline includes stitching, image registration, spot detection, decoding, filtering, and quality control, featuring a novel automated threshold parameter selection algorithm. The pipeline is fast, easy to install, reproducible, robust, and adaptable via nf-core modules. Users can customize the pipeline by utilizing specific modules as needed, offering flexibility and efficiency in ISS data analysis.
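
        For illustration, a minimal sketch of the decoding idea, matching each spot's per-cycle colour calls against a barcode codebook (the real pipeline adds stitching, registration, spot detection, and automated threshold selection; the codebook is a toy assumption):

            import numpy as np

            codebook = {"GeneA": [0, 1, 2, 3], "GeneB": [3, 2, 1, 0]}  # channel per cycle

            def decode_spot(spot_intensities):
                """spot_intensities: (n_cycles, n_channels) array for one spot."""
                called = np.argmax(spot_intensities, axis=1)  # brightest channel per cycle
                for gene, code in codebook.items():
                    if np.array_equal(called, np.asarray(code)):
                        return gene
                return None                                   # no match: filtered out

            spot = np.array([[9, 1, 0, 0], [0, 8, 1, 0], [0, 0, 7, 1], [1, 0, 0, 9]])
            print(decode_spot(spot))  # -> "GeneA"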

        Speaker: Nima Vakili (EMBL/GSK)
      • 97
        A QuPath extension for Data-driven Microscopy

        This project introduces a novel microscope image acquisition plugin for QuPath designed to enhance data-driven image collections by connecting a user-friendly image analysis tool with PycroManager and MicroManager for microscope control. QuPath provides quick and easy tools to generate user-defined annotations within a whole slide image to guide image collections at higher resolutions or with different modalities. Alternatively, acquisition of regions of interest can be automated through scripts. By leveraging the capabilities of QuPath in conjunction with our plugin, researchers can efficiently target and analyze specific tissue areas within large datasets.
        The workflow begins with either a bounding box of stage coordinates, or an overview image collected using a slide scanner. After identifying either the location of the tissue on the slide or subregions of tissue, these areas of interest can be automatically imaged at various resolutions or with different modalities. Targeted acquisitions benefit researchers by reducing total data storage and acquisition time. Utilizing physical space coordinates for the image positions also enables multiple microscopes to be used sequentially, and the images correlated at the end for analysis.
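
        For illustration, a minimal sketch of the physical-coordinate idea: an affine map from overview-image pixels to stage coordinates so regions can be revisited across microscopes (matrix values are illustrative):

            import numpy as np

            # affine: [x_stage, y_stage, 1] = A @ [x_px, y_px, 1]
            A = np.array([[0.65, 0.0, 1200.0],    # µm per pixel and stage offset
                          [0.0, 0.65, -340.0],
                          [0.0, 0.0, 1.0]])

            def pixel_to_stage(px_xy):
                px_h = np.append(np.asarray(px_xy, dtype=float), 1.0)
                return (A @ px_h)[:2]             # stage position in µm

            print(pixel_to_stage([1024, 768]))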

        Speaker: Michael S. Nelson (University of Wisconsin, Madison)
      • 98
        AI-powered analysis of histopathological tissues to study the tumor immune microenvironment

        "Digital pathology and artificial intelligence (AI) applied to histopathological images are gaining interest in immuno-oncology for streamlining diagnostic and prognostic processes. This study aimed to develop a computational pipeline to analyze H&E-stained cancer tissues and identify clinically relevant tumor microenvironment features.
        Our pipeline employs machine and deep learning algorithms for cell segmentation, classification, tissue segmentation, and spatial analysis exploiting QuPath and RStudio functions. In particular, we trained a random trees classifier to detect tumor cells across whole slides and assess their spatial clustering using Ripley’s K function, classifying patients as “highly clustered,” “poorly clustered,” or “uniformly distributed”. Another classifier was trained to distinguish lymphocytes from other cells, and we calculated their density within and outside the tumor bed, categorizing samples as “immune desert,” “immune excluded,” or “inflamed.” The combination of these AI-based classifiers showed a significant correlation with prognosis.
        AI-powered H&E analysis enabled us to classify samples based on quantitative data, and the integration of tumor and immune predictors yielded clinically relevant results. Once validated, these tools may help identify novel tumor and immune biomarkers."
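
        For illustration, a minimal sketch of the Ripley's K estimator used to score clustering, without edge correction (the study computes it via QuPath/RStudio; points and area below are toy assumptions):

            import numpy as np
            from scipy.spatial.distance import pdist

            def ripley_k(points, radii, area):
                n = len(points)
                d = pdist(points)                              # pairwise distances
                lam = n / area                                 # intensity (points per area)
                return np.array([(d <= r).sum() * 2 / (n * lam) for r in radii])

            rng = np.random.default_rng(1)
            pts = rng.uniform(0, 100, size=(200, 2))
            print(ripley_k(pts, radii=[5, 10, 20], area=100 * 100))
            # under complete spatial randomness K(r) ~ pi * r**2; larger values suggest clustering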

        Speaker: Rebecca Polidori (University of Milan)
      • 99
        An AI approach for timing analysis of cytokinesis from microscopy data

        "Cell division is a fundamental process in cell biology, which comprises two main phases: mitosis (nuclear division) and cytokinesis (cytoplasmic division). Despite extensive research over several decades, our understanding of cell division remains incomplete. One approach to investigate the role of specific genes in cytokinesis consists in inhibiting the genes in a cell line and to observe the resulting phenotype by life cell imaging. High Content Screening (HCS) allows to perform these loss-of-function experiments systematically under controlled conditions for a large number of genes.

        By comparing the division times between experimental and control conditions, we can identify those genes whose down-regulation significantly influences cytokinesis and which are therefore bona fide candidates for follow-up studies. However, such comparisons are tedious and require extensive manual annotation by a trained biologist.

        Here, we introduce Cut Detector, a tool for the automatic analysis of cell division timing from time-lapse microscopy images. Cut Detector employs an AI approach to carry out the tasks required to monitor cell division: cell segmentation, cell tracking, detection of cell division events and localization of microtubule bridges. The main methodological contribution is the detection of the microtubule cuts that defines an important step in cytokinesis and therefore needs to be identified with high accuracy and robustness.

        Cut Detector is an open-source tool developed within the Napari framework, enabling immediate user adoption. By providing detailed summaries of cell division timings, Cut Detector facilitates large-scale analyses that would be impractical without automation."

        Speaker: Thomas Bonte (Université PSL - Institut Curie)
      • 100
        An AI-based pipeline to extract predictive mechano-features in Triple Negative Breast Cancers

        Despite advancements in oncology, triple-negative breast cancer (TNBC) remains the most aggressive subtype, characterized by poor prognosis and limited targeted treatments, with non-specific chemotherapy as the primary option. To uncover new biological mechanisms, we focus on the mechanobiology of TNBC, examining how cells and tissues perceive and integrate mechanical signals, known as mechanotransduction. The mechanical properties of cells and tissues influence their architecture and function, which is essential for malignant transformation.
        From a cohort of TNBC patients categorized into relapsing (Bound-to-be-Bad-BBB) and non-relapsing (Bound-to-be-Good-BBG) groups, we collected cell-intrinsic and cell-extrinsic features by analyzing immunohistochemical (IHC) and label-free second harmonic generation (SHG) images obtained from two-photon microscopy. We particularly focused on developing and refining imaging pipelines to quantitatively extract structural (density and fiber orientation) and texture features of peritumoral collagen-rich stroma. Using a supervised machine-learning model, we selected the most robust discriminating features and combined them with mechano-driven tumor cell intrinsic properties, such as geometric cell shape parameters.
        Through this approach, we trained a mechano-classifier capable of predicting clinical outcomes for TNBC patients, offering potential new avenues for targeted treatment strategies.
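
        For illustration, a minimal sketch of one such structural feature, local fiber orientation estimated from a structure tensor with scikit-image (the smoothing scale is an illustrative assumption):

            import numpy as np
            from skimage.feature import structure_tensor

            def fiber_orientation(image, sigma=2.0):
                # smoothed gradient-covariance (structure) tensor per pixel
                Axx, Axy, Ayy = structure_tensor(image, sigma=sigma, order="xy")
                # dominant local orientation, in radians
                return 0.5 * np.arctan2(2 * Axy, Axx - Ayy)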

        Speakers: Emanuele Martini, Mattia Tonani (IFOM)
      • 101
        BIOP-desktop, a versioned computer for image analysis in life sciences

        "The handling, analysis, and storage of image data present significant challenges in a wide range of scientific disciplines. As the volume and complexity of image data continue to grow, researchers face key challenges like scalability, analysis speed, reproducibility and collaboration with peers. Containerisation technologies like Docker offer solutions to many of these challenges by providing isolated, consistent, and portable environments for software applications.
        We present BIOP-desktop, a Versioned Computer with software pre-installed and pre-configured allowing focussing on image processing and analysis spending minimal amount of time on installation of software packages in particular for Life Sciences. In this Versioned Computer, every software including its dependencies is fixed, making it possible to reproduce the same analysis workflow at any time by any one. Leveraging the multi-stage build speed-up and ease building and versioning, as it is possible to change a single component and keep the rest of the image unchanged, if needed. Such a Versioned Computer would prove useful for Reproducibility and OpenScience, therefore it eases collaboration, teaching and publication. We’ll discuss our attempt to develop and deploy such a tool, the building strategy, strength and weakness of this solution with the long-term goal to ease access to state-of-the-art image analysis and make analysis workflows shareable and reproducible in a multi user environment."

        Speaker: Romain Guiet (EMBL)
      • 102
        Building Foundations for AI-Driven Bioimage Analysis: Infrastructure and Annotation Platforms

        Bioimage analysis is essential for advancing our understanding of cellular processes, yet traditional methods often fall short in scalability and efficiency. To address these challenges, our research focuses on developing a comprehensive infrastructure integrating data streaming and collaborative annotation for training large foundation models. Our streaming dataloader efficiently manages public datasets on AWS S3, enabling decentralized storage and rapid data access through a unified API. This setup supports the development of advanced bioimaging tools, including a collaborative annotation platform that utilizes the Segment Anything Model (SAM) with a human-in-the-loop approach to enhance dataset quality through crowd-sourced annotations. The platform offers flexible deployment options to ensure data privacy. By combining these technologies, we facilitate the development of advanced automated imaging systems and whole-cell modeling, leveraging AI to simulate cellular behaviors and processes. Our goal is to enhance automated microscopy and create a comprehensive whole-cell simulator, driving transformative insights into cellular research and in-silico drug screening.
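
        For illustration, a hedged sketch of the streaming idea: a PyTorch IterableDataset that reads arrays from S3 through fsspec (bucket paths and the .npy layout are illustrative assumptions):

            import fsspec
            import numpy as np
            from torch.utils.data import IterableDataset, DataLoader

            class S3ImageStream(IterableDataset):
                def __init__(self, urls):
                    self.urls = urls                           # e.g. ["s3://bucket/img0.npy", ...]

                def __iter__(self):
                    for url in self.urls:
                        with fsspec.open(url, mode="rb") as f: # needs s3fs for s3:// URLs
                            yield np.load(f)                   # one image per item

            loader = DataLoader(S3ImageStream(["s3://my-bucket/patch_000.npy"]), batch_size=None)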

        Speaker: Nils Mechtel (KTH Royal Institute of Technology)
      • 103
        Client-Server Approach for Bioimage Analysis in the Deep Learning Era: Enhancing Extensibility and Accessibility

        In the deep learning era, the client-server approach for bioimage analysis offers significant advantages, enhancing both extensibility and accessibility. The image analysis server can be implemented either locally or remotely, enabling efficient resource allocation while offering a wide variety of choices for the client application. Researchers can leverage powerful GPU devices suitable for deep learning-based algorithms, ensuring high-performance hardware utilization and reducing the need for expensive equipment in individual labs.

        The client-server approach supports scalable bioimage analysis pipelines, minimizing the need to modify the original implementation. We highlight its practical applications through examples such as the Segment Anything Model (SAM), human-in-the-loop training for cell segmentation (Cellsparse), and cell tracking (ELEPHANT). Additionally, it integrates well with OME-NGFF, a next-generation file format for bioimage analysis, stored in the cloud, providing efficient data management and accessibility. Once the server is set up, users can access advanced analytical tools and algorithms without extensive programming knowledge or computational resources. This democratizes access to cutting-edge bioimage analysis techniques and accelerates scientific discovery. Our work demonstrates the transformative potential of the client-server approach in bioimage analysis, particularly in harnessing deep learning, making it an indispensable tool in contemporary biological research.
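
        For illustration, a minimal sketch of the pattern with a hypothetical segmentation service (FastAPI is an illustrative choice, not necessarily what the tools above use; the thresholding stands in for a deep learning model):

            import io
            import numpy as np
            from fastapi import FastAPI, UploadFile

            app = FastAPI()

            @app.post("/segment")
            async def segment(file: UploadFile):
                img = np.load(io.BytesIO(await file.read()))   # client uploads an .npy image
                mask = (img > img.mean()).astype(np.uint8)     # stand-in for a DL model
                return {"shape": list(mask.shape), "foreground_px": int(mask.sum())}

            # run with: uvicorn server:app  (clients post images; the GPU stays server-side)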

        Speaker: Ko Sugawara (RIKEN BDR)
      • 104
        Combining Incremental Deep Learning, Eye Tracking and Virtual Reality for Human-in-the-Loop Cell Tracking: a Progress Report

        One frequent task performed on high-resolution 3D time-lapse microscopy images is the reconstruction of cell lineage trees. The construction of such lineage trees is computationally expensive, and traditionally involves following individual cells across all time points, annotating their positions, and linking them to create complete trajectories using a 2D interface. Despite advances in automated cell tracking, human intervention remains important, yet tedious, for both correction and guidance of automated algorithms. We propose to combine 3D cell tracking by means of eye tracking in Virtual Reality with an incremental deep learning approach, to accelerate not only the creation of ground truth, but also proofreading. We discuss the current state and structure of the project and the challenges of using natural user interfaces for cell tracking tasks, especially in crowded environments. We detail our planned investigations into the speed and usability compared to conventional tracking methods, and discuss how the inclusion of uncertainty data of the deep learning model along the cell trajectories can guide the user during the training and proofreading process.

        Speaker: Samuel Pantze (CASUS - Center for Advanced Systems Understanding)
      • 105
        Contextual Segmentation of Large, High-dimensional Medical Images

        State-of-the-art serial block face scanning electron microscopy (SBF-SEM) is used in cellular research to capture large 2D images of sliced tissue. These 2D images collectively form a 3D digital representation of the tissue. SBF-SEM was recently used to reconstruct the first 3D ultrastructural analysis of neural, glial, and vascular elements that interconnect to form the neurovascular unit (NVU) in the retina. Identification of relevant cell morphologies enables the examination of heterocellular interactions which aid our understanding of the structure and function of key retinal cells in diseased and healthy states. Disruption of the retinal NVU is thought to underlie the development of several retinal diseases. However, the exact way in which the morphology of the retinal NVU is disrupted at nanoscale has yet to be clarified in 3D due to its structural complexity. Analysis of these images requires the annotation of relevant structures which is currently performed manually and takes several months to complete for a single tissue sample. This work explores a novel approach to automatically annotate these structures to accelerate current investigations and provide opportunities for future studies.

        Speaker: Victoria Porter (Queen's University Belfast)
      • 106
        Continuous, interpretable, and transformation-invariant Morphometric for dynamic shape quantification

        Biological systems undergo dynamic developmental processes involving shape growth and deformation. Understanding these shape changes is key to exploring developmental mechanisms and factors influencing morphological change. One such phenomenon is the formation of the anterior-posterior (A-P) body axis of an embryo through symmetry breaking, elongation, and polarized Brachyury gene expression. This process can be modeled using stem-cell-derived mouse gastruloids (Veenvliet et al.; Science, 2020), which may form one or several A-P axes, modeling both development and disease.

        We propose a way of quantifying and comparing continuous shape development in space and time. We emphasize the necessity of a structure-preserving metric that captures shape dynamics and accounts for observational invariances, such as rotation and translation.

        The proposed metric compares the time-dependent probability distributions of different geometric features, such as curvature and elongation, in a rotationally invariant manner using the signed distance function of the shape over time. This enables the integration of time-dependent probability distributions of gene expression, thus coupling geometric and genetic features.

        Importantly, the metric is differentiable by design, rendering it suitable for use in machine-learning models, particularly autoencoders. This allows us to impose the structure of the shape dynamics in a latent-space representation.

        We benchmark the metric's effectiveness on synthetic data of shape classification, validating its correctness. We then apply the new metric to quantifying A-P body axis development in mouse gastruloids and predict the most likely resulting shapes. This approach can potentially leverage predictive control, enabling the application of perturbations to guide development towards desired outcomes.
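
        For illustration, a minimal sketch of one ingredient, comparing per-timepoint distributions of a geometric feature with a 1D Wasserstein distance (the actual metric is differentiable and built on the signed distance function; data below are toy assumptions):

            import numpy as np
            from scipy.stats import wasserstein_distance

            def shape_trajectory_distance(curv_a, curv_b):
                """curv_a, curv_b: lists of per-timepoint 1D feature samples."""
                return np.mean([wasserstein_distance(a, b) for a, b in zip(curv_a, curv_b)])

            rng = np.random.default_rng(2)
            traj_a = [rng.normal(0.0, 1.0, 200) for _ in range(10)]
            traj_b = [rng.normal(0.5, 1.0, 200) for _ in range(10)]
            print(shape_trajectory_distance(traj_a, traj_b))   # roughly 0.5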

        Speaker: Roua Rouatbi (TU Dresden)
      • 107
        Create web-based OME-Zarr galleries with Zarrcade

        Zarrcade is a web application designed to make it easy to browse collections of OME-Zarr images. OME-Zarr is a modern file format gaining popularity in the bioimage community. Despite its cloud compatibility, it's challenging for users to make collections of these images accessible, searchable, and browsable on the web. Zarrcade addresses this by allowing users to quickly create searchable online galleries backed by cloud storage. It integrates with any web-based OME-Zarr-compatible viewer, including Neuroglancer, Vizarr, and the AICS Volume Viewer.

        Speaker: Konrad Rokicki (HHMI Janelia Research Campus)
      • 108
        DaCapo: a modular deep learning framework for scalable 3D image segmentation

        DaCapo is a specialized deep learning library tailored to expedite the training and application of existing machine learning approaches on large near-isotropic image data. In this correspondence, we introduce DaCapo’s unique features optimized for this specific domain, highlighting its modular structure, efficient experiment management tools, and scalable deployment capabilities. We discuss its potential to enhance the efficiency of isotropic image segmentation and invite the community to explore and contribute to this open-source initiative.

        Speaker: Marwan Zouinkhi (HHMI - Janelia)
      • 109
        Data-driven Unsupervised and Sparsely-Supervised Segmentation

        Recent advances in unsupervised segmentation, particularly with transformer-based models like MAESTER, have shown promise in segmenting Electron Microscopy (EM) data at the pixel level. However, despite their success, these models often struggle with capturing the full hierarchical and complex nature of EM data, where variability in texture and the intricate structure of biological components pose significant challenges. To address these limitations, I employ a hierarchical variational autoencoder (VAE), which I believe is better suited for this task due to its ability to naturally capture and represent the hierarchical structure inherent in EM images. This approach, enhanced with contrastive loss and sparse ground truth annotations, effectively structures the latent space, allowing for the clear separation of subcellular structures and improving segmentation accuracy. Additionally, I implement a masking strategy within an inpainting task, where the network predicts masked pixels, ensuring that the latent space robustly represents diverse EM structures. While still being optimized, this method has demonstrated promising progress, aligning with the I2K conference’s goals to transform images into actionable knowledge and offering substantial potential to advance biological research and understanding.

        Speaker: Sheida Kordasiabi (Human Technopole)
      • 110
        Digital pathology and artificial intelligence-based approaches to characterize the complex interactions between cellular components of the tumor microenvironment and their spatial distribution

        Digital pathology combined with AI is revolutionizing oncoimmunology by enhancing diagnostic workflows and analytical outputs. This study integrates different histopathological methods with high-throughput computational imaging to analyze the tumor microenvironment (TME).
        We began by analyzing tumor tissue and structure using H&E-stained slides and computational methods. A deep learning algorithm was trained to identify tumor cells, and their spatial distribution was examined using Ripley’s K-function. The K-score categorizes each tumor spot as diffuse, poorly clustered, or highly clustered.
        To further investigate TME interactions, we employed the Hyperion™ Imaging System. This system preserves tissue architecture and cell morphology while allowing the simultaneous analysis of 23 markers related to tumor cells, tissue structure, and immune cells. Multiparametric computational analysis of the IMC images enabled us to distinguish between tumor and stromal tissues and evaluate immune cell populations in tumor nests versus fibrotic stroma.
        In poorly and highly clustered samples, we examined tumor heterogeneity, focusing on immune cell interactions and tumor cell distribution. Our goal is to uncover spatial patterns and cell interactions at the single-cell level, leading to clinically relevant tumor patient profiles.

        Speaker: Marika Viatore (Università degli Studi di Milano)
      • 111
        Enabling Access to Bioimage Analysis Tools on University Cluster

        High-performance computing (HPC) clusters are essential for bioimage analysis; however, the barrier to entry can be high. This project aims to simplify access to bioimage analysis tools and deep learning models on local HPC clusters, enabling frictionless access to software and large-scale computation.

        Inspired by the Bioimage ANalysis Desktop (BAND) and ZeroCostDL4Mic, we developed lightweight bash scripts to deploy image analysis tools such as Fiji, QuPath, Ilastik, Cellpose, CellProfiler, and Napari in user-specified directories. Unlike containers, this solution allows users to save software changes, such as installed plugins, across sessions. We also created custom module files for deep learning packages like StarDist, SAM, and micro-SAM for easy environment loading. Finally, we developed Jupyter Notebooks for data preparation, model training, and benchmarking. These have been deployed on Harvard Medical School’s (HMS) HPC cluster, Orchestra 2 (O2), which uses Open OnDemand (OOD) for an interactive interface.

        While specific to HMS, this approach can be easily adapted to most HPC clusters. We aim to share our findings with the broader bioimage analysis community and discuss alternative or parallel approaches.

        Link: https://hms-iac.github.io/Bioimage-Analysis-on-O2/

        Speaker: Ranit Karmakar (Harvard Medical School)
      • 112
        Enhanced Bacterial Cytological Profiling with super-resolution techniques and fluorescent D-amino acids

        Determining mechanism of action (MoA) for antimicrobial compounds is key in antibiotic discovery efforts. Bacterial Cytological Profiling (BCP) is a rapid one-step assay utilising fluorescent microscopy and machine learning to discriminate between antibacterial compounds with different MoAs and help predict the MoA of novel compounds. One barrier to BCP being adopted more widely is a lack of open-source analysis methods and data sets. My PhD project focuses on developing open-source protocols and image analysis pipelines along with enhancing the BCP method and extending it to be high-throughput compatible. Currently, the BCP method has no direct readout on the bacterial cell wall, a key antibiotic target. Fluorescent D-amino acids can be used to examine cell wall synthesis as they are incorporated directly into the structure, so their potential use in BCP to provide information on cell wall perturbations merits exploration. To increase the capability of BCP in detecting complex morphological phenotypes, the feasibility of integrating super-resolution imaging via structured illumination microscopy (SIM) and the analytical Super-Resolution Radial Fluctuations (SRRF) algorithm also warrants investigation.

        Speaker: Joseph Ratcliff (University of Warwick)
      • 113
        Enhancing Synaptic-Resolution Connectomics with an Open-Source AI Ecosystem

        While recent advancements in computer vision have greatly benefited the analysis of natural images, significant progress has also been made in volume electron microscopy (vEM). However, challenges persist in creating comprehensive frameworks that seamlessly integrate various machine learning (ML) algorithms for the automatic segmentation, detection, and classification of vEM across varied resolutions (4 to 100 nanometers) and staining protocols. We aim to bridge this gap with Catena (GitHub: https://github.com/Mohinta2892/catena/tree/dev): a unified, reproducible and cohesive ecosystem designed for large-scale vEM connectomics. Catena combines conventional deep learning methods with generative AI techniques to minimise model bias and reduce labour-intensive ground-truth requirements. This framework integrates existing state-of-the-art algorithms for: a) neuron/organelle segmentation, b) synaptic partner detection, c) microtubule tracking, d) neurotransmitter classification, and e) domain adaptation models for EM-to-EM translation, while identifying and addressing limitations in these methods. As an open-source software framework, Catena equips both large and small labs with powerful and robust tools to advance scientific discoveries with their own vEM datasets at scale.

        Speaker: Samia Mohinta (University of Cambridge, UK)
      • 114
        FISBe: A real-world benchmark dataset for instance segmentation of long-range thin filamentous structures

        Instance segmentation of neurons in volumetric light microscopy images of nervous systems enables groundbreaking research in neuroscience by facilitating joint functional and morphological analyses of neural circuits at cellular resolution. Yet said multi-neuron light microscopy data exhibits extremely challenging properties for the task of instance segmentation: Individual neurons have long-ranging, thin filamentous and widely branching morphologies, multiple neurons are tightly inter-weaved, and partial volume effects, uneven illumination and noise inherent to light microscopy severely impede local disentangling as well as long-range tracing of individual neurons. These properties reflect a current key challenge in machine learning research, namely to effectively capture long-range dependencies in the data. While respective methodological research is buzzing, to date methods are typically benchmarked on synthetic datasets. To address this gap, we released the FlyLight Instance Segmentation Benchmark (FISBe) dataset, the first publicly available multi-neuron light microscopy dataset with pixel-wise annotations. In addition, we defined a set of instance segmentation metrics for benchmarking that we designed to be meaningful with regard to downstream analyses. Lastly, we provide three baselines to kick off a competition that we envision to both advance the field of machine learning regarding methodology for capturing long-range data dependencies, and facilitate scientific discovery in basic neuroscience.

        Speaker: Lisa Mais (Max Delbrück Center for Molecular Medicine in the Helmholtz Association (Max Delbrück Center))
      • 115
        FLUTE: A Python GUI for interactive phasor analysis of FLIM data

        Fluorescence lifetime imaging microscopy (FLIM) is a powerful technique used to probe the local environment of fluorophores. Phasor analysis is a fit-free technique based on an FFT transformation of the intensity decay that provides a visual distribution of the molecular species, clustering pixels with similar lifetimes even when they are spatially separated in the image. Phasor analysis is increasingly being used due to its ease of interpretation. Here, we present Fluorescence Lifetime Ultimate Explorer (FLUTE), an open-source graphical user interface (GUI) for phasor analysis of FLIM data programmed in Python [1,2]. FLUTE simplifies and automates many aspects of the analysis of FLIM data acquired in the time domain, such as calibrating the FLIM data, performing interactive exploration of the phasor plot, displaying phasor plots and FLIM images with different lifetime contrasts simultaneously, and calculating the distance from known molecular species. After applying desired filters and thresholds, the final edited datasets can be exported for further user-specific analysis. FLUTE has been tested using several FLIM datasets including autofluorescence of zebrafish embryos and in vitro cells. In summary, our user-friendly GUI extends the advantages of phasor plotting by making the data visualization and analysis easy and interactive, allows for analysis of large FLIM datasets, and accelerates FLIM analysis for non-specialized labs.

        References:
        [1] Gottlieb D, Asadipour B, Kostina P, Ung TPL, Stringari C. FLUTE: A Python GUI for interactive phasor analysis of FLIM data. Biological Imaging. 2023;3:e21.
        [2] https://github.com/LaboratoryOpticsBiosciences/FLUTE
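
        For illustration, a minimal sketch of the fit-free phasor transform itself: per pixel, the first FFT harmonic of the decay normalised by the DC component yields the phasor coordinates (g, s):

            import numpy as np

            def phasor(decay_stack):
                """decay_stack: (T, Y, X) photon counts over time bins T."""
                f = np.fft.fft(decay_stack, axis=0)
                dc = f[0].real
                g = f[1].real / dc                             # cosine component
                s = -f[1].imag / dc                            # sine component
                return g, s

            t = np.arange(256)[:, None, None]
            decay = np.exp(-t / 40.0) * np.ones((1, 8, 8))     # toy mono-exponential decay
            g, s = phasor(decay)
            # mono-exponential species fall on the universal semicircle: g**2 + s**2 ~ g
            print(g[0, 0], s[0, 0])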

        Speaker: Chiara Stringari (CNRS - Ecole Polytechnique)
      • 116
        Foreground-aware virtual staining for 3D nuclear morphometry

        The 3D morphology of the cell nucleus is traditionally studied through high-resolution fluorescence imaging, which can be costly, time-intensive, and have phototoxic effects on cells. These constraints have spurred the development of computational "virtual staining" techniques that predict the fluorescence signal from transmitted-light images, offering a non-invasive and cost-effective alternative.

        However, the suitability of virtual staining for 3D nuclear morphological analysis has not been evaluated. Because they are typically trained with pixel-wise loss functions, virtual staining models learn to predict the contents of whole fluorescence images, including noise, background, and imaging artifacts, making 3D segmentation of virtually-stained nuclei challenging.

        To address this, we introduced a foreground-aware component to training of virtual staining models. Specifically, we threshold fluorescence target images and direct the model to accurately learn foreground pixels. We also soft-threshold predictions using a tunable sigmoid function and calculate the Dice loss between target and predicted foreground areas, effectively balancing foreground pixel-level accuracy with locations and morphological properties of nuclei.

        Our evaluations indicate that our model predicts cleaner foreground images, excels in representing the morphology of nuclei, and improves segmentation and featurization results.
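
        For illustration, a minimal PyTorch sketch of the foreground-aware term: soft-threshold the prediction with a tunable sigmoid and take a Dice loss against the thresholded target (threshold and steepness values are illustrative):

            import torch

            def foreground_dice_loss(pred, target, thresh=0.1, steepness=50.0, eps=1e-6):
                pred_fg = torch.sigmoid(steepness * (pred - thresh))   # soft foreground mask
                target_fg = (target > thresh).float()                  # hard-thresholded target
                inter = (pred_fg * target_fg).sum()
                dice = (2 * inter + eps) / (pred_fg.sum() + target_fg.sum() + eps)
                return 1 - dice                                        # differentiable w.r.t. pred

            pred = torch.rand(1, 1, 32, 32, requires_grad=True)
            loss = foreground_dice_loss(pred, torch.rand(1, 1, 32, 32))
            loss.backward()                                            # usable in training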

        Speaker: Paula Llanos (Broad Institute)
      • 117
        Fractal: An open-source framework for reproducible bioimage analysis at scale using OME-Zarrs

        Analyzing large amounts of microscopy images in a FAIR manner is an ongoing challenge, turbocharged by the large diversity of image file formats and processing approaches. Recent community work on an OME next-generation file format offers the chance to create more shareable bioimage analysis workflows. Building on this, and to address issues related to the scalability & accessibility of bioimage analysis pipelines, the BioVisionCenter is developing Fractal, an open-source framework for processing images in the OME-Zarr format. The Fractal framework consists of a server backend & web-frontend that handle modular image processing tasks. It facilitates the design and execution of reproducible workflows to convert images into OME-Zarrs and apply advanced processing operations to them at scale, without the need for expertise in programming or large image file handling. Fractal comes with pre-built tasks to perform instance segmentation with state-of-the-art machine learning tools, to apply registration, and to extract high-dimensional measurements from multiplexed, 3D image data at the TB scale. By relying on OME-Zarr-compatible viewers like napari, MoBIE and ViZarr, Fractal enables researchers to interactively visualize terabytes of image data stored on their institution’s remote server, as well as the results of their image processing workflows.

        Speaker: Joel Lüthi (University of Zurich)
      • 118
        How To Train Your Image Analyst: Perspectives from Upskilled Biologists

        As the field of biological imaging matures from pure phenotypic observation to machine-assisted quantitative analysis, the importance of multidisciplinary collaboration has never been higher. From software engineers to network architects to deep learning experts to optics/imaging specialists, the list of professionals required to generate, store, and analyze imaging data sets of exponentially increasing size and complexity is likewise growing. Unfortunately, the initial training of these experts in disparate fields (computer science, physics, biology) promotes the development of information silos that lack a common parlance to facilitate collaboration. Here, we present the perspective of a two-person light microscopy core facility associated with the US National Institutes of Health (NIH), the Twinbrook Imaging Facility. The multidisciplinary education of our team members (biology, microscopy, and image analysis), along with our unique funding structure (a fixed budget rather than a fee-for-service model), allows us to develop long-term and productive collaborations with subject matter experts while promoting the exchange of important ideas. We highlight recent and ongoing projects at the facility that demonstrate the importance of skills diversity in core facility staffing.

        Speaker: Maria Traver (NIH / NIAID)
      • 119
        InstanSeg: a fast, flexible and user-friendly cell segmentation method for brightfield and multichannel images

        Quantitative analysis of bioimaging data often depends on the accurate segmentation of cells and nuclei. This is both especially important and especially difficult for the analysis of highly multiplexed imaging data, which can contain many input channels. Current deep learning-based approaches for cell segmentation in multiplexed images require simplifying the input to a small and fixed number of channels, discarding relevant information in the process. Here, we first describe a novel deep learning strategy for nucleus and cell segmentation with fixed input channels and show that it outperforms the most widely used current methods across a range of public datasets, both in terms of F1 score and processing time. We then introduce a novel deep learning architecture for generating informative three-channel representations of multiplexed images, irrespective of the number or ordering of imaged biomarkers. Using these two novel techniques in combination, we set a new benchmark for the segmentation of cells and nuclei on public multiplexed imaging datasets. To maximize the usefulness of our methods, we provide open-source implementations for both Python and QuPath.

        Speaker: Thibaut Goldsborough (University of Edinburgh)
      • 120
        Integrating Shape and Function: Identifying growth drivers and their morphological expression in Gastric Tumor Organoids.

        Cancer, a pervasive global health concern, particularly affects the gastrointestinal (GI) tract, contributing to a significant portion of cancer cases worldwide. Successful treatment strategies necessitate an understanding of cancer heterogeneity, which spans both inter- and intra-tumor variability. Despite extensive research on genetic and cellular heterogeneity, morphological diversity in tumors remains underexplored. Tumor organoids, three-dimensional models mirroring in vivo tumors, offer a promising avenue for investigating tumor dynamics and serve as ethical alternatives to animal testing. This thesis investigates cancer heterogeneity through a per-vertex analysis of four established biomarkers (DAPI, phalloidin, Ki67, and PHH3) coupled with curvature estimation. These biomarkers, indicative of various cellular processes, offer insight into tumor behavior and dynamics. Additionally, shape analysis through curvature estimation provides structural information crucial for understanding tumor morphology. The derived features from this analysis are then utilized to predict tumor progression stages using machine learning techniques. By integrating morphological and biomarker data, this study offers a holistic approach to characterizing tumor organoids and elucidating their potential implications for cancer research and treatment strategies.

        Speaker: Maleeha Hassan (Technical University of Dresden)
      • 121
        Integrative Open-Source Analysis Pipeline of RNA In Situ Hybridization Immunofluorescence Images

        Scalable integration of high-throughput open-source image analysis software to quantify pancreatic tissue remains elusive. Here we demonstrate an integration of Cellpose, Radial Symmetry-Fluorescent In Situ Hybridization (RS-FISH), and Fiji for a per-cell assessment of mRNA copy number after RNA in-situ hybridization (RNAscope). Pipeline performance was tested against murine pancreata probed for B lymphoma Mo-MLV insertion region 1 homolog (Bmi1) mRNA at different time points following caerulein-induced pancreatitis, to ascertain the correlation between Bmi1 expression in acinar cells and tissue recovery. Previously, no descriptive, publicly developed pipeline was equipped to address the morphologic heterogeneity associated with acinar cells, staining quality, and variation in the appearance of subcellular objects. This novel pipeline leverages the segmentation and machine learning capabilities of Cellpose, the diversity of RS-FISH subcellular detection parameters, and the versatility of Fiji to integrate the various data outputs. Scripted automation binds these software strengths into a remarkably efficient whole. In alignment with the pipeline's guiding principles, integration of other specialized software to produce uniquely dynamic data remains innately possible.
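
        The core per-cell counting step can be sketched in Python as follows (file names and the spot-table column names "x"/"y" are assumptions to adapt to your own export; Cellpose's Python API is used for segmentation):

            import pandas as pd
            import tifffile
            from cellpose import models

            img = tifffile.imread("pancreas_section.tif")   # hypothetical input
            masks, flows, styles, diams = models.Cellpose(model_type="cyto").eval(
                img, diameter=None, channels=[0, 0]
            )

            spots = pd.read_csv("rs_fish_spots.csv")        # RS-FISH spot table
            rows = spots["y"].round().astype(int).clip(0, masks.shape[0] - 1)
            cols = spots["x"].round().astype(int).clip(0, masks.shape[1] - 1)
            labels = masks[rows.to_numpy(), cols.to_numpy()]

            # mRNA copy number per segmented cell (label 0 = background).
            per_cell = pd.Series(labels[labels > 0]).value_counts().sort_index()
            print(per_cell.head())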

        Speaker: Nur Muhammad Renollet (University of Michigan)
      • 122
        Investigating DNA damage with Atomic Force Microscopy and TopoStats

        DNA's flexible structure and mechanics are intrinsically linked to its function. Damage to DNA disrupts essential processes, increasing cancer risk. However, it can be exploited in cancer therapeutics by targeting the DNA in cancer cells. The relationship between DNA damage and its mechanics is not well understood. New-generation metallodrugs offer a promising route for anticancer therapies - this study examines two such drugs: Triplatin, which compacts DNA, preventing transcription, and Copper-Oda, which generates reactive oxygen species that damage the DNA.

        Using Atomic Force Microscopy, we detect and localise DNA damage caused by these drugs on single molecules with submolecular resolution. We employ our open-source analysis programme, TopoStats, to quantify these damage processes and relate them to DNA mechanics. For Triplatin, we quantify drug-driven DNA compaction and aggregation. For Cu-Oda, we observe single and double-stranded DNA breaks enhanced by a reducing agent found at higher levels in cancer cells.

        Understanding metallodrug-driven DNA damage is vital for the further development of new, more targeted cancer therapeutics. Our methodology utilises an open-source image analysis pipeline to probe and quantify various drug-driven DNA conformational changes.

        Speaker: Thomas Catley (University of Sheffield)
      • 123
        Joint segmentation of nuclei and cells

        Deep learning has revolutionized instance segmentation, i.e. the precise localization of individual objects. In microscopy, the two most popular approaches, StarDist and Cellpose, are now routinely used to segment nuclei or cells. However, some applications might benefit from the segmentation of both nuclei and cells. For example, multiplexed/hyperplexed imaging shows cells associated with nuclear, membrane and cytoplasmic markers. The identification of nuclear and cytoplasmic masks can improve cell phenotyping, which consists of matching cells with their associated markers. In this study, we propose a new approach to jointly segment nuclei and cells. We take advantage of TissueNet, a very large dataset with images showing nuclear and cytoplasmic channels, for which we ensure that each nuclear mask is associated with a cell mask. We then train a deep learning network based on the Cellpose architecture to jointly segment nuclei and cells, and evaluate its performance compared to the separate segmentation of nuclei and cells applied to the same two-channel images.
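
        The pairing of nuclear and cell masks that such training data requires can be sketched as follows (a minimal illustration, not the authors' TissueNet curation code): each nucleus is assigned to the cell label that covers most of its pixels.

            import numpy as np

            def match_nuclei_to_cells(nuclei, cells):
                """nuclei, cells: integer instance masks; 0 = background.
                Returns {nucleus_label: cell_label, or 0 if unmatched}."""
                matches = {}
                for n in np.unique(nuclei):
                    if n == 0:
                        continue
                    under = cells[nuclei == n]   # cell labels under nucleus n
                    under = under[under > 0]
                    matches[n] = np.bincount(under).argmax() if under.size else 0
                return matches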

        Speaker: Thierry Pécot (Rennes University)
      • 124
        Leonardo: a toolset to remove sample-induced aberrations in light sheet microscopy images

        Light-sheet fluorescence microscopy (LSFM), or selective plane illumination microscopy (SPIM), is the method of choice for studying organ morphogenesis and function, as it permits gentle and rapid volumetric imaging of biological specimens over days. In inhomogeneous samples, however, sample-induced aberrations, including absorption, scattering, and refraction, degrade the image, particularly as the focal plane penetrates deeper into the sample. Here, we present Leonardo, the first complete toolbox for these artifacts, with three major submodules: (1) Destripe removes the stripe artifacts in LSFM caused by light absorption; (2) ViewFusion reconstructs a single high-quality image from dual-sided illumination (and detection) while eliminating optical distortions (ghosts) caused by light refraction; and (3) Viewfinder finds, in multi-view imaging, the region-dependent best angle that delivers the richest information about the sample.

        Speaker: Yu Liu (Technical University of Munich, TUM)
      • 125
        Mastodon – a Large-Scale Tracking and Track-Editing Framework for Large, Multi-View Images and Extensions for 3D Visualization and Lineage Comparison

        Advancements in microscopy technologies, such as light sheet microscopy, allow life scientists to acquire data with better spatial and temporal resolution, enhancing potential insights into cellular, developmental, and stem cell biology. This has generated a need for robust computational tools to analyze large-scale image data.
        We present Mastodon, a plugin for the ImageJ software, designed for cell and object tracking, capable of handling terabyte-sized datasets with millions of cell detections. Optimized data structures keep Mastodon responsive on consumer hardware, making it ideal for interactive use cases such as manual tracking, data inspection, and ground truth curation in large datasets. Additionally, algorithms and deep learning-based extensions facilitate (semi-) automatic tracking.
        In this contribution, we will highlight extensions for 3D visualization and comparison of stereotypically developing embryos. They have been used to compare multiple embryos of the same species, analyze similarities and differences, correct tracking errors, and gain new insights.
        The Mastodon Blender View utilizes the open source Blender 3D modeling software to visualize tracking data in 3D, allowing for dynamic viewing from all angles and high-quality movie rendering. The lineage classification extension is demonstrated by grouping lineage trees based on their structure, revealing patterns in cell division that are linked to cell fate.

        Speakers: Matthias Arzt (MPI-CBG), Stefan Hahmann (TU Dresden)
      • 126
        METEOR: Enabling precise 3D correlative cryo-FIB milling for high throughput cryo-ET lamella production

        In-situ cryo-electron tomography (cryo-ET) is a powerful technique to visualize cellular components in their native context at near-atomic resolution. The workflow often involves the use of a focused ion beam (FIB) to thin the vitrified cell down to a ~200 nm thick “cryo lamella”. Considering the low abundance of certain subcellular structures and the extremely limited volume of the cryo lamella, the two most important challenges in the workflow are: 1) localizing the cells that contain the regions of interest (ROIs) (2D localization), and 2) capturing the ROIs in the limited volume of the lamella during FIB milling (3D localization).

        To address these challenges, we present a high-quality integrated fluorescence microscope, called METEOR. Together with its dedicated and user-friendly software ODEMIS, METEOR provides fast and accurate 2D correlation between fluorescence (FLM) and scanning electron microscopy (SEM) images, resulting in efficient 2D localization of the ROIs. Moreover, the high-resolution METEOR z-stacks can effectively be used for precise 3D correlation between FLM and FIB images, resulting in an optimized and targeted FIB milling process with high throughput.

        Speaker: Kevin Homberg (Delmic Cryo B.V)
      • 127
        micronuclAI: Automated quantification of micronuclei for assessment of chromosomal instability

        Chromosomal instability (CIN) is a hallmark of cancer that drives metastasis, immune evasion and treatment resistance. CIN results from chromosome mis-segregation events during anaphase, as excessive chromatin is packaged in micronuclei (MN). CIN can therefore be effectively quantified by enumerating micronuclei frequency using high-content microscopy. Despite recent advancements in automation through computer vision and machine learning, the assessment of CIN remains a predominantly manual, time-consuming, and imprecise labor, which limits throughput and may result in interobserver variability. Here, we present micronuclAI, a novel pipeline for automated and reliable quantification of micronuclei of varying size, morphology and location from nuclei-stained images. The pipeline was evaluated against manual single-cell level counts by experts as well as against routinely used micronuclei ratio within the complete image. The classifier was able to achieve a weighted F1 score of 0.937 on the test dataset and the complete pipeline can achieve close to human-level performance on various datasets derived from multiple human and murine cancer cell lines. The pipeline achieved an R² of 0.87 and a Pearson’s correlation of 0.938 on images obtained at 10X magnification. We also tested the approach on a publicly available image data set (obtained at 100X) and achieved an R² of 0.90, and a Pearson’s correlation of 0.951. By identifying single cell associated biomarkers, our approach enhances accuracy while reducing the labor-intensive nature of current methodologies. Additionally, we provide a GUI-implementation for easy access and utilization of the pipeline.

        Speaker: Miguel A. Ibarra-Arellano (Institute for Computational Biomedicine, University Hospital Heidelberg)
      • 128
        MMV_H4Cells - a cell evaluation napari plugin

        Microscopy image data is abundant today, but evaluating it manually is nearly impossible. For instance, to study drug-induced cell morphology changes in prenatal cardiomyocytes, researchers acquire high-resolution images of stained tissue slides. These images can contain over 10,000 cells and nuclei. However, manual evaluation is time-consuming and often lacks sufficient capacity.

        To address this challenge, extensive research efforts have focused on developing methods, including AI-based approaches like Cellpose and classic segmentation algorithms. Bioimaging data’s high heterogeneity makes consistent segmentation quality challenging, impacting fully automated downstream analysis. To bridge this gap, we created the napari plugin MMV_H4Cells. It combines state-of-the-art segmentation methods with human expertise for high-quality downstream analysis with minimal effort.

        MMV_H4Cells enables experts to review cell segmentation efficiently. Users evaluate segmented cells one by one, curating them manually before acceptance or rejection. The tool provides real-time insights, and users can even draw missed cells themselves. MMV_H4Cells supports result export and re-import, allowing continuity across users and time.

        Find our code at https://github.com/MMV-Lab/mmv_h4cells. We aim to enhance bioimaging analysis, complementing existing tools while minimizing human effort.

        Speaker: Lennart Kowitz (Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V.)
      • 129
        napari-phasors: Integrating Hyperspectral and FLIM Data Analysis into an Open-Source Bioimaging Framework

        Hyperspectral imaging (HSI) and fluorescence lifetime imaging microscopy (FLIM) have revolutionized bioimaging by introducing new dimensions of data analysis. HSI combines imaging and spectroscopy to capture detailed fluorescence spectra, allowing simultaneous measurement of many fluorophores in a single field of view. FLIM adds a temporal dimension by measuring photon arrival times, yielding fluorescence lifetimes that reflect changes in molecular conformation and environment.
        These techniques bring new insights but increase data volume and analysis complexity. Standalone proprietary software exists for such analysis, but it lacks integration into a broader image analysis framework. Bringing phasor analysis into napari, an open-source image viewer library, fosters the integration of HSI and FLIM data analysis into bio-image analysis workflows.
        We present napari-phasors, a plugin that simplifies the quantification and segmentation of cells and tissues at the single-pixel level based on their fluorescence spectrum or lifetime profiles. Built upon the PhasorPy library, which was inspired by the SimFCS software, napari-phasors consolidates and enhances the best features of the existing plugins napari-hsi-phasor and napari-flim-phasor-plotter, promoting accessibility to higher-dimensional data analysis in napari.
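
        At the heart of such plugins is the phasor transform, which reduces each pixel's spectrum or decay to a single (g, s) coordinate via the first Fourier harmonic. A minimal NumPy sketch for a FLIM stack of shape (time_bins, H, W) (illustrative; not the PhasorPy implementation):

            import numpy as np

            def phasor(decay_stack, harmonic=1):
                t = np.arange(decay_stack.shape[0])
                phase = 2 * np.pi * harmonic * t / decay_stack.shape[0]
                # Guard against empty pixels before normalizing.
                total = np.clip(decay_stack.sum(axis=0).astype(float), 1e-9, None)
                g = (decay_stack * np.cos(phase)[:, None, None]).sum(axis=0) / total
                s = (decay_stack * np.sin(phase)[:, None, None]).sum(axis=0) / total
                # Pixels with similar lifetimes cluster in the (g, s) plane.
                return g, s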

        Speaker: Marcelo Leomil Zoccoler (DFG Cluster of Excellence Physics of Life - TU Dresden)
      • 130
        PIXIMI: A web-based deep learning tool for biomedical image analysis

        The creation of machine learning networks for biological imaging tasks often suffers from a crucial gap: the subject matter experts who understand the images well are typically not comfortable training neural networks, and lack simple ways to get started. We present here Piximi (piximi.app), a free, open-source "Images To Discovery" web app designed to make it easy to train and deploy neural networks for image classification, object classification, and segmentation directly in the web browser. Users therefore face no installation hurdles, can use web accessibility features such as in-browser translation, and can run Piximi on any device with a web browser, including mobile devices.

        Piximi allows users to upload 8-bit or 16-bit images in common formats such as PNG and TIFF, and supports arbitrary numbers of channels and z-planes. Project data, including images, annotations, and categories, can be saved to a project file, allowing multi-session use and encouraging reproducibility and sharing. All data stays local on the user's machine and training happens inside the browser; no data is transmitted to a central server unless explicitly stated. Thanks to this architecture, Piximi can also be run offline, from a locally hosted Docker container or on a user's personal computer.

        After loading images into Piximi, users can label them using any number of created categories. Loaded images can also be annotated via nine in-app annotation tools, ranging from simple bounding boxes to sophisticated superpixel-based color annotation. Models can be trained on a small subset of the user's data and then used to predict on a larger number of unclassified images, simplifying the identification of difficult-to-classify images. To provide robust functionality without overwhelming less computationally comfortable users, hyperparameters can be tuned in a simple interface but are hidden by default. Models trained in Piximi can be exported and run on any other device or context that uses TensorFlow, meaning networks trained in-browser on smaller subsets of data can then be applied to arbitrarily large datasets that would not be appropriate for the browser. Users can also upload image annotations in common specifications such as COCO and StarDist, or segment objects of their choosing using the app's built-in segmentation neural networks or, in some circumstances, an uploaded TensorFlow model from elsewhere. Identified objects can subsequently be classified using Piximi's classifier functionality.

        In summary, we believe this tool will help close the gap between scientists who want to use neural networks and those who can, accelerating bioimage-based science in many domains.

        Speaker: Nodar Gogoberidze (Broad Institute)
      • 131
        Potentials and limitations in the application of Convolutional Neural Networks for mosquito species identification using wing images

        Mosquito-borne diseases pose a major global health threat, especially in tropical and subtropical regions, worsened by global warming. Mosquito species identification is vital for research, but traditional methods are costly and require expertise. Deep learning offers a promising solution, yet Convolutional Neural Network (CNN) models often perform well only in controlled environments. Typically, model development involves standardised frameworks that overlook biases and underspecification in trained models. Our research aimed to create a reliable classification system for real-world use, enhancing model adaptability, addressing dataset biases, and ensuring usability across various users and devices.

        We built a dataset of over 14,000 wing images from 68 mosquito species using three different imaging devices. Our analysis highlighted how imaging devices and biases affect CNN performance in out-of-distribution scenarios. We identified strategies to improve model reliability using augmentation and pre-processing techniques, achieving a balanced accuracy of 94.5% in classifying 21 mosquito species. We also developed a user-friendly application for species identification. Our study demonstrates the potential of CNNs for mosquito species identification and provides insights into developing robust classification systems for vector surveillance and research.

        Speaker: Kristopher Nolte (Bernhard-Nocht-Institute for Tropical Medicine)
      • 132
        Probabilistic Framework for Calibrated Cell Tracking

        Cell tracking is a key computational task in live single-cell microscopy. With the advent of automated high-throughput microscopy platforms, the amount of data quickly exceeds what humans are able to review. Thus, reliable and uncertainty-aware data analysis pipelines to process such volumes of data become crucial. In this work, we investigate the problem of quantifying uncertainty in cell tracking. We present a statistical framework for cell tracking which accommodates many existing tracking methods from the tracking-by-detection paradigm. Based on this framework, we discuss methods inspired by common classification problems for calibrating tracking uncertainties and leveraging estimates thereof for more robust and improved tracking. We benchmark the different approaches on data from the Cell Tracking Challenge and a large-scale microbial dataset using various tracking methods, including the very recently presented Transformer-based tracking.

        Speaker: Richard D. Paul (Forschungszentrum Jülich GmbH)
      • 133
        Pycellin: a graph-based framework to analyze cell lineages

        With live cell microscopy and image analysis, we can follow dividing cells or bacteria to obtain cell lineages. From these lineages, we can study cellular dynamics at different scales: during one cell cycle, over multiple cell cycles, or comparatively between sister and cousin cells. However, while a few frameworks have emerged to analyze tracking data, they focus on cell migration and not on cell cycle measurements and cell lineage analysis.
        To fill this gap, I am building Pycellin, a graph-based framework to easily manipulate and extract information from cell lineages. Pycellin can enrich these lineages with classical tracking and morphological features such as division rate and growth rate. It also allows user-defined features to accommodate the wide variety of experiments and biological questions. In the long term, Pycellin aims to become a data structure for cell and bacteria tracking data in scientific software, fostering interoperability between tracking tools such as TrackMate and the CTC file format.
        I will illustrate the capabilities of Pycellin in the context of bacterial growth and the effects of mechanical confinement on bacterial physiology.
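
        Conceptually, a lineage is a directed graph in which nodes are cell detections and edges link mothers to daughters, so features such as division events fall out of simple traversals. A toy sketch with networkx (hypothetical node names; not Pycellin's API):

            import networkx as nx

            lineage = nx.DiGraph()
            lineage.add_edges_from([
                ("cell1@t0", "cell1@t1"),   # same cell across frames
                ("cell1@t1", "cell2@t2"),   # division: two out-edges
                ("cell1@t1", "cell3@t2"),
            ])

            divisions = [n for n in lineage if lineage.out_degree(n) > 1]
            print("division events:", divisions)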

        Speaker: Laura Xénard (Institut Pasteur)
      • 134
        Quantifying bacterial dynamics over a wide range of species with TrackMate-Omnipose

        Cell tracking and lineaging provide unique insights for studying bacterial growth and dynamics. Tracking strongly relies on segmentation quality, and integrating accurate and robust segmentation algorithms is a key challenge when developing end-to-end tracking tools. Omnipose, a state-of-the-art deep learning algorithm developed for bacteria segmentation, has proven to outperform more traditional segmentation approaches. Here we introduce TrackMate-Omnipose, a user-friendly tracking interface for the Fiji ecosystem that leverages Omnipose for bacterial tracking and lineaging.
        We provide a general methodology for using TrackMate-Omnipose and present several applications exemplifying a wide range of use cases to illustrate the usability, performance, and robustness of the tool. In particular, we rely on automatic parameter estimation to choose the best algorithms and their optimal parameters for a specific application. We also introduce tools to conveniently annotate segmentation and tracking results within TrackMate, and use them to train custom deep learning models for segmentation and to generate tracking ground truth.

        Speaker: Marie Anselmet (Institut Pasteur)
      • 135
        Quantifying the Heterogeneity of DNA Interactions in AFM Images

        Although we have sequenced the entire genome, we still do not understand how many diseases evolve and progress. This is partly due to the challenge in observing the nanometre scale interactions of flexible DNA molecules, and the large conformational landscape that can encourage or inhibit some protein binding mechanisms. Our atomic force microscopy (AFM) techniques can probe biomolecular structures in native-like states and image single molecules with sub-molecular resolution. However, current analysis tools do not readily facilitate the quantification of these observations, resulting in predominantly qualitative analyses and thus subtle phenomena within the images often go undetected. This project uses machine learning clustering techniques to identify and quantify the heterogeneity present in mixed samples, distinguishing between distinct biomolecules such as DNA and protein, and combining this with autoencoders to identify further conformational variation present within the populations of the same sample. Our methods enable quantification of conformational changes induced by DNA supercoiling and the implications these have upon protein binding affinity, demonstrating how AFM can be employed to precisely quantify and characterise the structure of DNA and its interactions.

        Speaker: Max Gamill (University of Sheffield)
      • 136
        RNA point cloud segmentation for image-based spatial transcriptomics

        Recent advancements in image-based spatial RNA profiling make it possible to resolve tens to hundreds of distinct RNA species with high spatial resolution. In this context, the ability to assign detected RNA transcripts to individual cells is crucial for downstream analysis, such as in-situ cell type calling. To this end, a DAPI channel is acquired, from which nuclei can be segmented by state-of-the-art segmentation approaches. However, identifying the boundaries of individual cells remains challenging, as membrane marker staining is often absent or of variable quality across the tissue and different cell types.

        To address this issue, we introduce two segmentation algorithms, each solving a specific analysis challenge. First, we introduce ComSeg [1] (https://github.com/fish-quant/ComSeg), a segmentation algorithm that operates directly on single RNA and nuclei positions without requiring cell membrane staining. ComSeg relies solely on the analysis of gene co-expression and spatial graph clustering.

        While modern spatial transcriptomic methods, such as Xenium, MERFISH, and CosMx, provide channels with membrane stains, these markers exhibit heterogeneous quality across tissues and cell types [2]. In this context, we propose RNAseg, a deep learning-based method for cell segmentation. Contrary to ComSeg, RNAseg leverages partial cell membrane staining in addition to the RNA point cloud. RNAseg automatically detects high-quality membrane staining areas, on which it learns the recurrent RNA spatial distributions within cells. RNAseg then leverages these learned RNA patterns to perform segmentation in areas missing membrane staining.

        Through comprehensive evaluations, we show that both ComSeg and RNAseg outperform existing state-of-the-art methods for in-situ single-cell RNA profiling, each on different experimental datasets.
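
        A naive baseline for the task ComSeg addresses is to assign each transcript to its nearest segmented nucleus; ComSeg improves on this by exploiting gene co-expression and spatial graph clustering. The baseline in a few lines (coordinates are toy values):

            import numpy as np
            from scipy.spatial import cKDTree

            nuclei_centroids = np.array([[10.0, 12.0], [40.0, 8.0]])  # from DAPI
            rna_xy = np.array([[11.0, 13.0], [39.0, 10.0], [25.0, 25.0]])

            dist, idx = cKDTree(nuclei_centroids).query(rna_xy)
            idx[dist > 15.0] = -1   # leave distant transcripts unassigned
            print(idx)              # nucleus index per transcript, -1 = none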

        [1] Defard, T., Laporte, H., Ayan, M. et al. A point cloud segmentation framework for image-based spatial transcriptomics. Commun Biol 7, 823 (2024).
        [2] Petukhov, V., Xu, R.J., Soldatov, R.A. et al. Cell segmentation in imaging-based spatial transcriptomics. Nat Biotechnol 40, 345–354 (2022). https://doi.org/10.1038/s41587-021-01044-w

        Speaker: Thomas Defard (Institut Pasteur, Ecole des Mines)
      • 137
        Robust detection and quantification of beating cells in microscopic 2D videos of cardiomyocytes

        Advanced cell cultures based on human stem cells are of great interest to improve human safety and reduce, refine, and replace animal tests when evaluating substances. Human induced pluripotent stem cells (hiPSCs) can be turned into beating heart muscle cells known as cardiomyocytes, which model the early developmental stages of the embryonic heart. It is essential to reliably identify these cardiomyocytes' beating area and frequency to detect any potential negative impacts on their ability to contract.
        This study introduces a method for identifying and quantifying beating cells in microscopic videos of cardiomyocytes using the Lucas-Kanade method. Our approach is designed for high accuracy and is robust against typical potential confounders, ensuring reliable results. Empirical and theoretical analysis supports these claims and provides guarantees for the algorithm's performance. Our approach accurately reproduces the concentration responses of well-established developmental neurotoxic compounds and controls, demonstrating its potential for medium- or high-throughput in vitro screening studies. Therefore, our approach can be used to study compounds' effects on the embryonic heart's early developmental stages in an automated manner, and capture the underlying processes as quantitative measurements.
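
        The measurement principle can be sketched as follows (a simplified illustration, not the authors' implementation): track a grid of points with pyramidal Lucas-Kanade optical flow and read the beating frequency off the dominant Fourier peak of the mean displacement magnitude.

            import cv2
            import numpy as np

            def beating_frequency(frames, fps):
                """frames: list of 2D uint8 images; returns dominant freq (Hz)."""
                h, w = frames[0].shape
                ys, xs = np.mgrid[10:h - 10:20, 10:w - 10:20]
                pts = np.stack([xs, ys], -1).reshape(-1, 1, 2).astype(np.float32)
                motion = []
                for prev, nxt in zip(frames[:-1], frames[1:]):
                    new, status, _ = cv2.calcOpticalFlowPyrLK(prev, nxt, pts, None)
                    flow = (new - pts)[status.ravel() == 1]
                    motion.append(np.linalg.norm(flow, axis=2).mean())
                motion = np.asarray(motion)
                # Dominant frequency of the mean motion magnitude over time.
                spectrum = np.abs(np.fft.rfft(motion - motion.mean()))
                freqs = np.fft.rfftfreq(len(motion), d=1.0 / fps)
                return freqs[spectrum.argmax()]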

        Speaker: Oleksiy Nosov (DNTOX GmbH)
      • 138
        Robust segmentation and measurement of single bacterial cells within a chain

        Phase-contrast microscopy has become the gold standard to determine the shape and track the growth of individual bacterial cells, contributing to the identification of different cellular morphologies and to the development of models for cell size homeostasis. Although phase-contrast microscopy is well suited to the characterization of well separated, individual cells (such as those formed by traditional model bacteria like Escherichia coli or Caulobacter crescentus), it falls short when it comes to the identification of individual cells of species that grow forming long chains containing multiple cells, a common feature in the bacterial world. As the septa that separate the different cells within a chain are not visible in phase-contrast images, segmentation models fail to identify individual cells and often segment a whole chain as a single cell. This limitation can be circumvented by using cell-wall or membrane fluorescent dyes that outline the boundaries of individual cells within a chain. However, we currently lack models for the precise segmentation of cells based on the fluorescent staining of their envelopes.

        Here, we have combined Content-Aware Restoration techniques (Weigert et al. 2018) for denoising and deconvolution with an Omnipose-based custom instance segmentation (Cutler et al. 2022) to precisely and reproducibly identify single cells within chains stained with the fluorescent membrane dye FM 4-64. As an alternative path, we show that a fluorescence translation model can convert cytoplasmic fluorescence into its corresponding membrane outline for segmentation. Our model successfully segments individual cells within chains and is robust to fluorescence artifacts and image noise that would lead to erroneous segmentation with more general models.

        We have coupled membrane-based segmentation with a size quantification pipeline to accurately extract length, width and volume from individual cells within a chain. To illustrate the utility of our membrane-based segmentation pipeline, we have characterized the size of a panel of bacterial species that grow forming long chains of cells. Our results have revealed unexpected morphological features that would have been masked by standard phase-contrast-based segmentation, opening the door to future studies in chain-forming bacterial species.

        Speaker: Octavio Reyes-Matte (Max Planck Institute for Evolutionary Biology)
      • 139
        Self-supervised learning for sample localization on cryo-EM grids for tomogram acquisition

        Supervised learning algorithms for image segmentation provide exceptional results in situations where they can be applied. However, their performance diminishes when training data is limited or the image conditions vary considerably. Such is the case when localizing suitable acquisition points for cryo-electron tomography (cryo-ET): the image conditions change during screening sessions, and each experiment uses a different biological target. The lack of training data would force users to create the ground truth for their own targets, which is time-consuming and unreliable.
        In this work, we propose using a self-supervised training approach to generate specialized segmentation networks that any user can train with minimal interaction. To achieve this, we rely on the transmissive image formation model to simplify the problem and automatically create a sufficiently descriptive training dataset. First, we disentangle each image into foreground and background; then, we apply a histogram-based segmentation to the foreground. This segmentation is ranked, and the best images form the final training dataset for the neural network. The approach requires almost no user intervention and achieves a precision higher than 0.9.

        Speaker: Ricardo M. Sánchez L. (EMBL)
      • 140
        Simplified processing for image-based cell profiling with pollen

        Image-based cell profiling has become a fundamental approach for understanding biological function across diverse conditions. However, as bio-imaging datasets scale, existing tooling for image-based profiling has largely been co-opted from software that initially catered to biologists running smaller, and often interactive, experimental workflows. Here, we introduce pollen, a fast and efficient tool for processing image and segmentation data in image-based profiling experiments. Given images and associated bounding boxes, polygons, or masks, pollen quickly generates diverse object-level morphological images for downstream analysis. Evaluated on real-world datasets, pollen can generate object-level data over 200 times faster than the gold-standard CellProfiler. Beyond processing, pollen can also rapidly compute a variety of classical morphological descriptors (e.g. texture) or deep features (e.g. self-supervised learning) either jointly with image processing or independently on any set of images. We curate 75,180 images from 11 datasets to show how pollen can be used for processing and profiling of cell and nuclear morphology, training cell-level self-supervised models, and further analyses. pollen is available as a command-line tool written in Rust, with some features exposed to Python via bindings.

        Speaker: Tom Ouellette (University of Toronto/Ontario Institute for Cancer Research)
      • 141
        Single protein pinpointing in light microscopy using DNA-PAINT

        Single Molecule Localization Microscopy (SMLM) surpasses the diffraction limit by separating signal from individual proteins in time. Although the usual resolution limit is around 20 nm for most techniques, DNA-PAINT [1] and RESI [2] achieve resolution at the level of individual proteins. Inferring protein positions from sets of localizations is key to examining oligomeric configurations of biomolecules. To do so, a clustering algorithm, here referred to as “Radial Clusterer” (RC), has been applied in DNA-PAINT [3] and RESI [2]. However, this algorithm is suboptimal, as it requires the proteins imaged in the same round to be spaced by around 4 times the localization precision (σ), well above the standard resolution limit of 2.35σ.

        Here we present the application of Gaussian Mixture Modeling (GMM) [4] to localization clouds to pinpoint protein positions by maximum likelihood estimation in 2D and 3D data (astigmatism). GMM utilizes the unique nature of fluorophore “blinking” in DNA-PAINT, i.e., its independence (no fluorophore-fluorophore interaction) and repetitiveness (stability over long image acquisition), which result in numerous localizations that are normally distributed around the imaged targets.

        GMM is capable of resolving targets spaced by 2.35σ, without any need for additional hardware adaptation and in a computationally efficient fashion. GMM shows superior performance compared to RC in DNA origami and cellular data. By applying GMM to a standard DNA-PAINT image of Nup96, combined with an averaging model [5], we were able to reconstruct the 16 copies of the protein in the cytoplasmic ring and retrieve the cryoEM-based molecular distances [6], so far achieved in light microscopy only by RESI. By extracting more information, GMM will boost the performance of DNA-PAINT and RESI, especially for dense targets. To maximize its impact, GMM has been implemented as a Picasso [1] plugin.
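
        The estimation step can be reproduced in miniature with scikit-learn (an illustration of the principle; the Picasso plugin uses its own maximum-likelihood implementation): simulated localizations scattered around two sites 2.35σ apart are cleanly separated by a two-component Gaussian mixture.

            import numpy as np
            from sklearn.mixture import GaussianMixture

            rng = np.random.default_rng(0)
            sigma = 2.0                            # localization precision (nm)
            sites = np.array([[0.0, 0.0], [2.35 * sigma, 0.0]])
            locs = np.concatenate(
                [rng.normal(mu, sigma, size=(500, 2)) for mu in sites]
            )

            gmm = GaussianMixture(n_components=2, random_state=0).fit(locs)
            print(gmm.means_)   # recovered positions, ~2.35*sigma apart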

        [1] – Schnitzbauer, et al. Nature Prot. 2017
        [2] – Reinhardt, et al. Nature. 2023
        [3] – Schlichthaerle, et al. Nature Comm. 2021
        [4] – Dempster, et al. JRSS. 1977
        [5] – Wu, et al. Nature Met. 2023
        [6] – Schuller, et al. Nature. 2021

        Speaker: Rafal Kowalewski (Max Planck Institute of Biochemistry, Ludwig-Maximilians University of Munich)
      • 142
        Taggathon on BIII.eu: contributing to a unique database of image analysis tools and workflows

        Building on the community efforts of NEUBIAS (bioimage analysts) and COMULIS (Correlated Multimodal Imaging in Life Sciences), we have built a database of image analysis software, intended to be fed and used by the community. In this workshop, we will show how to search efficiently for an existing bioimage analysis tool, how to add a new tool or resource, and how to curate an existing entry; participants will then practice these tasks. We will also explain how to contribute to the knowledge model (ontology), how to reuse the database efficiently, and how to contribute to the development and evolution of BIII.

        Speaker: Perrine Paul-Gilloteaux (University of Nantes)
      • 143
        The Human Protein Atlas in OMERO: an accessible dataset for teaching purposes

        The Human Protein Atlas (HPA) is a comprehensive resource that maps the human proteome by documenting the expression and localization of proteins in human tissues, cells, and organs. It integrates various high-throughput technologies and techniques to provide detailed information about where and how proteins are expressed within the human body. Open access is one of the key features of this resource. However, getting access can become a cumbersome process in particular if data needs to be integrated into teaching activities or courses.

        We describe a workflow that enables easy and reproducible access to parts of the HPA image data. It is based on the image data repository OMERO and the “Subcellular protein patterns” dataset used for the Kaggle competition “Human Protein Atlas Image Classification” (2018). Annotating this dataset with already available information (e.g., targeted subcellular localization) and newly derived parameters (such as per-channel saturation) made it fully searchable via the tag-search and filtering tools in the database. This greatly increases the ease of use, and images can now be seamlessly integrated into teaching activities. As proof of principle, we showcase a course scenario measuring expression at the nuclear rim of human cells using the images in the database and an ImageJ macro.

        Speaker: Rémy Dornier (EPFL - SV - PTECH - PTBIOP)
      • 144
        TOMOMAN•PY - A Python-Based Suite for Handling Large cryo-ET Datasets

        Cryo-electron tomography (cryo-ET) and subtomogram averaging (STA) have become the go-to methods for examining cellular structures in their near-native states. However, cryo-ET is adaptable to a wide range of biological questions and often requires specific data-processing adjustments, rendering a one-size-fits-all approach ineffective. The field is also becoming increasingly complex, with more software options available, each with its strengths and challenges, especially regarding integration across different programming languages and data formats.

        TOMOMAN•PY (TOMOgram MANager in Python) is a versatile tool that streamlines the use of diverse software packages, allowing the development of project-specific workflows. It is an evolution of our previous MATLAB-based pipeline (TOMOMAN): it maintains a readable metadata format (JSON) and interfaces with external packages for preprocessing tasks. TOMOMAN•PY also facilitates metadata transfer across STA tools such as WARP and Relion.

        By documenting essential metadata, TOMOMAN•PY enhances data sharing and the reproducibility of projects, reduces computational redundancy, and encourages collaborative research across disciplines. Being Python-based, it enables users to easily tailor software combinations to their research needs and disseminate their findings within the scientific community.
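
        The kind of human-readable per-tilt-series record this enables might look as follows (field names are hypothetical, not the tool's actual schema):

            import json

            record = {
                "tilt_series": "TS_001",
                "pixel_size_angstrom": 2.7,
                "tilt_angles_deg": [-60, -57, -54],
                "preprocessing": {"motion_correction": True, "ctf_estimated": True},
            }
            with open("TS_001.json", "w") as fh:
                json.dump(record, fh, indent=2)  # readable, diff-able, shareable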

        Speaker: Philipp S. Erdmann (Human Technopole)
      • 145
        Too good to be true: the perils of applying image quality metrics in fluorescence microscopy

        Measuring the quality of fluorescence microscopy images is critical for applications including automated image acquisition and image processing. Most image quality metrics used are full-reference metrics, where a test image is quantitatively compared to a reference or ‘ground truth’ image. Commonly used full-reference metrics in fluorescence microscopy include NRMSE, PSNR, SSIM and Pearson’s correlation coefficient, as these are used for image quality assessment in computer vision applications. However, fluorescence microscopy datasets have significantly different characteristics to images used in other computer vision applications (‘natural scene’ images), including differences in information density, bit depth and noise characteristics. Here we discuss the sensitivity of NRMSE, PSNR, SSIM and Pearson’s correlation coefficient to fluorescence microscopy-specific features and how these can bias quality assessment. As image quality is important for downstream analysis, these metrics are often used as a proxy for how accurately biological information can be retrieved from images. However, by correlating metric values with downstream biological analyses, we show that this is not necessarily the case, and provide insight into how, when, and indeed if these metrics can be reliably used.
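
        The sensitivity to microscopy-specific characteristics is easy to reproduce with scikit-image (a toy example, not the authors' data): for a dim 16-bit image, setting data_range to the full bit depth rather than the actual intensity range inflates the scores.

            import numpy as np
            from skimage.metrics import peak_signal_noise_ratio, structural_similarity

            rng = np.random.default_rng(1)
            reference = rng.poisson(200, size=(256, 256)).astype(np.uint16)
            noisy = reference + rng.normal(0, 10, reference.shape).astype(np.int32)
            noisy = np.clip(noisy, 0, 65535).astype(np.uint16)

            actual_range = int(reference.max()) - int(reference.min())
            print(peak_signal_noise_ratio(reference, noisy, data_range=actual_range))
            print(peak_signal_noise_ratio(reference, noisy, data_range=65535))  # inflated
            print(structural_similarity(reference, noisy, data_range=actual_range))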

        Speaker: Siân Culley (King's College London)
      • 146
        Two ways of quantifying inflammation in histopathology slices of the mouse pancreas

        Non-gallstone pancreatitis is a painful and common inflammation of the pancreas without specific, causal treatment. Here, we study the effect of a novel treatment candidate on mouse pancreatitis. To determine treatment efficacy, we ask how closely inflamed and treated pancreases resemble their untreated and healthy counterparts in histopathology slices.

        For this, we take two approaches: For one, we automate the pathologist’s approach of quantifying features of inflammation: immune cell infiltration, cell separation due to a loss of tight junction integrity, and cell death. In short, this approach studies the state of the data based on known features.

        Second, we classify pancreases into inflamed and healthy based on machine learning on texture features. This approach studies how novel features describe a known state of the data.

        Hand-selected features demonstrate that the treatment ameliorates pancreatitis. Machine learning on texture features accurately distinguishes between healthy and inflamed tissue but lacks sensitivity in treated cases. This raises the question of why texture features distinguish between health and disease but perform poorly at grading treatment success.
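
        Texture features of the kind used here are commonly derived from gray-level co-occurrence matrices; a generic sketch with scikit-image (not the study's exact feature set):

            import numpy as np
            from skimage.feature import graycomatrix, graycoprops

            def texture_features(patch_8bit):
                """patch_8bit: 2D uint8 patch from a histology slide."""
                glcm = graycomatrix(
                    patch_8bit, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True,
                )
                return {p: graycoprops(glcm, p).mean()
                        for p in ("contrast", "homogeneity", "energy", "correlation")}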

        Speaker: Maria Theiss (Harvard Medical School)
      • 147
        Unsupervised Denoising for Signal-Dependent and Row-Correlated Imaging Noise

        Accurate analysis of microscopy images is hindered by the presence of noise. This noise is usually signal-dependent and often additionally correlated along rows or columns of pixels. Current self- and unsupervised denoisers can address signal-dependent noise, but none can reliably remove noise that is also row- or column-correlated. Here, we present the first fully unsupervised deep learning-based denoiser capable of handling imaging noise that is row-correlated as well as signal-dependent. Our approach uses a Variational Autoencoder (VAE) with a specially designed autoregressive decoder. This decoder is capable of modeling row-correlated and signal-dependent noise but is incapable of independently modeling underlying clean signal. The VAE therefore produces latent variables containing only clean signal information, and these are mapped back into image space using a proposed second decoder network. Our method does not require a pre-trained noise model and can be trained from scratch using unpaired noisy data. We benchmark our approach on microscopy datasets from a range of imaging modalities and sensor types, each with row- or column-correlated, signal-dependent noise, and show that it outperforms existing self- and unsupervised denoisers.

        Speaker: Benjamin Salmon (University of Birmingham)
      • 148
        Virtual Embryo Zoo: A Web-Based Tool for Embryogenesis Cell Tracking Visualization

        We present the Virtual Embryo Zoo, an innovative web app for visualizing and sharing embryo cell tracking data. This platform empowers researchers to investigate single-cell embryogenesis of six commonly studied model organisms: Drosophila, zebrafish, C. elegans, Ascidian, mouse, and Tribolium, through an intuitive and accessible web-based interface. The Virtual Embryo Zoo viewer allows users to navigate developmental stages with a time slider, select specific cells, and trace cell lineages. Lineage selections can be shared with a simple link, making it ideal for collaboration, education, and showcasing. This platform eliminates the local setup of native software, making advanced embryo lineage tracing and in silico fate mapping accessible to everyone with a browser. Developed using TypeScript, React, Vite, and three.js, our application utilizes a specialized tracking data format for asynchronous lazy data loading and on-the-fly interactivity. The online client is freely available as well as the viewer source code and data conversion scripts – so anyone can host, visualize, and interact with their own data.

        Speaker: Teun Huijben (Chan Zuckerberg Biohub San Francisco)
      • 149
        OptiCell3D: inference of the mechanical properties of cells from 3D microscopy images

        Cell mechanics impact cell shape and drive crucial morphological changes during development and disease. However, quantifying mechanical parameters in living cells in a tissue context is challenging. Here, we introduce OptiCell3D, a computational pipeline to infer mechanical properties of cells from 3D microscopy images. OptiCell3D leverages a deformable cell model and gradient-based optimization to identify the hydrostatic pressure of the cytosol and the surface tension of each cell interface, based on 3D cell geometries obtained from microscopy images.

        OptiCell3D is implemented as a Python library. To ensure broad accessibility and interoperability with other bioimage analysis tools, we have developed a graphical user interface as a napari plugin. We validated OptiCell3D by calculating the error in mechanical parameters inferred from spheroids of cells generated with SimuCell3D, a deformable cell model [1]. Lastly, we apply OptiCell3D to study the mechanical changes driving morphogenesis during early embryo development, using time-lapse microscopy images. We anticipate that OptiCell3D will be broadly useful in studies on how the mechanical properties of cells shape their morphology and organization.
        

        [1] Runser et al., Nature Computational Science, 2024

        Speaker: Kevin Yamauchi (ETH Zurich)
    • 16:30
      🚶‍♀️ Walking time to HT From Triulza Academy to HT

      From Triulza Academy to HT

      Time to walk back from Triulza Academy to HT

    • 💻 Workshop Session #4 Human Technopole

      Human Technopole

      • 150
        Accelerating Microscopy Image Annotation with SAMJ Annotator HT Auditorium

        HT Auditorium

        Human Technopole

        The SAMJ Annotator is a new plugin for ImageJ, Fiji, and Icy that offers user-friendly access to advanced image-segmentation models. These transformer-based models, including the Segment Anything Model (SAM) variants, have shown exceptional performance in segmenting complex and heterogeneous objects in natural images.
        With SAMJ Annotator, users can easily annotate objects using manual prompts within Fiji. The tool leverages these sophisticated models to ensure accurate contours and efficient segmentation. We have integrated highly efficient SAM models to enable real-time annotation without requiring a GPU. The installation of the SAM environment in Python is fully automated through JDLL, and execution is optimized using shared memory techniques between Python and Java, facilitated by Appose.
        In our workshop, we will demonstrate how SAMJ Annotator can be utilized in a scientific context to accelerate the annotation of large 2D microscopy images. This tool is particularly valuable for creating labeled images for training datasets. We will cover seamless installation, model selection, and best practices for annotation. Additionally, we will discuss the appropriate applications of SAMJ, focusing on managing the annotation process for various types of microscopy data. Attendees will gain hands-on experience and learn to use powerful annotation tools to design efficient image-analysis workflows.

        Speaker: Caterina Fuster Barceló (Universidad Carlos III de Madrid)
      • 151
        An end-to-end image-based spatial transcriptomic pipeline in Nextflow PIT.P01.011

        PIT.P01.011

        Human Technopole

        Unlock spatial transcriptomics with our open-source pipeline, which generates single-cell datasets from microscopy images. Our modular pipeline streamlines workflows by integrating image registration, segmentation, RNA peak-calling, spot-decoding and visualization steps. With our flexible framework, you can customize your workflow to meet project requirements and seamlessly process data using Nextflow, ideal for cloud-based processing. Join us at the workshop to explore how to apply the whole pipeline for spatial transcriptomics analysis, or any of its modules for general image analysis tasks, within your research project.

        Speakers: Stanislaw Makarchuk (Sanger Institute), Tong Li (Sanger Institute)
      • 152
        Annotation and Visualization of Large 3D Datasets with Paintera Lower Egg Room

        Lower Egg Room

        Human Technopole

        Paintera is a tool specializing in dense annotation of large 3D images. In this workshop, users will create multiscale label datasets, and learn to use the various tools and annotation modes that Paintera provides. This will include semi-automated segmentation with Segment Anything and 3D annotations with Shape Interpolation in arbitrary orientation, which can be leveraged to quickly generate large annotations. We will also demonstrate the Paintera tools available for manually refining annotations. Exercises will be available to provide a hands-on experience of loading data sources, converting labels, and utilizing the various features Paintera provides.

        Speaker: Caleb Hulbert (HHMI Janelia)
      • 153
        BigStitcher-Spark Upper Egg Room - Small

        Upper Egg Room - Small

        Human Technopole

        "Lightsheet microscopy is a rapidly developing and spreading technology that now enables imaging of very large, fixed samples such as adult mouse brains at single-cell. Previously, we developed the BigStitcher software that efficiently handles and interactively reconstructs large lightsheet acquisitions up to the terabyte range. However, new types of image acquisitions use modes such as stage-scanning lightsheet microscopy on expanded tissues of mice, monkey and eventually human samples, and pose new challenges in terms of dataset size and processing time. At the same time cloud-based processing is becoming standard.

        Therefore, we developed BigStitcher-Spark (https://github.com/JaneliaSciComp/BigStitcher-Spark), which allows to run the entire BigStitcher pipeline distributed (local, cluster, cloud). It seamlessly switches between the BigStitcher GUI and distributed Spark execution, provides 10x faster fusion, and enables expert-user automation using the command line and easily accessible code.

        In the workshop, we will teach how to use BigStitcher-Spark on smaller toy examples, which includes execution and viewing on the cloud for potentially massive parallel processing for very large datasets."

        Speakers: Michael Innerberger (Janelia Research Campus), Stephan Preibisch (HHMI Janelia), Tobias Pietzsch (Janelia Research Campus)
      • 154
        DL4MicEverywhere: Making your deep learning pipelines flexible, shareable, and reproducible PIT.P01.026

        PIT.P01.026

        Human Technopole

        DL4MicEverywhere is a platform that offers researchers an easy-to-use gateway to cutting-edge deep learning techniques for bioimage analysis. It features interactive Jupyter notebooks with user-friendly graphical interfaces that require no coding skills. The platform utilizes Docker containers to ensure portability and reproducibility, guaranteeing smooth operation across various computing environments (local personal devices like laptops, remote high-performance computing (HPC) platforms and cloud-based systems). In this workshop we will quickly show the functionality of DL4MicEverywhere for non-expert users and showcase the containerisation and creation of easily reproducible new pipelines.

        Speakers: Estibaliz Gómez-de-Mariscal (Instituto Gulbenkian de Ciência), Iván Hidalgo-Cenalmor (Instituto Gulbenkian de Ciência)
      • 155
        motile: Multi-Object Tracking with Integer Linear Equations Mezzanine Room

        Mezzanine Room

        Human Technopole

        We will present "motile" (https://funkelab.github.io/motile/) and its napari plugin for the tracking of objects in image sequences. The plugin allows users to go from a segmentation (e.g., of cells or nuclei) to a full tracking solution (e.g., a lineage tree) within a few clicks. Motile uses integer linear programs (ILPs) to solve tracking problems and comes with many useful costs and constraints included. This workshop will consist of two parts: First, we will demonstrate the GUI of the napari plugin and how it can be used for several tracking problems with different requirements. In the second part, we will show how using motile (the underlying library) avoids writing complicated ILPs by hand and how to extend it with custom costs and constraints.

        Speakers: Anniek Stokkermans (Hubrecht Institute), Caroline Malin-Mayor (HHMI Janelia)
      • 156
        QuPath for Beginners Upper Egg Room - Big

        Upper Egg Room - Big

        Human Technopole

        QuPath is a popular open-source platform for visualizing, annotating and analyzing complex images - including whole slide and highly multiplexed datasets. It's especially useful for digital pathology applications, but can also be used for a wide range of other tasks (albeit mostly 2D... for now).
        This workshop will introduce QuPath's main features, concepts and interface. It will show examples of problems QuPath can solve well on its own, as well as introduce the idea of scripts, extensions and integrations that can help integrate QuPath into more complex workflows when needed.

        Speaker: Fiona Inglis (University of Edinburgh)
      • 157
        ScaleFEx℠: Single-cell feature extraction from large-scale datasets PIT.P02.029

        PIT.P02.029

        Human Technopole

        ScaleFEx℠ is an open-source Python pipeline designed for extracting biologically relevant features from single cells in large image-based high-content imaging screens. Optimized for efficiency, it can easily handle datasets containing millions of images and several terabytes of data with reduced overhead and minimal redundancy. In this workshop, we will demonstrate how to install, configure, and deploy ScaleFEx℠ using various publicly accessible datasets.

        Speaker: Bianca Migliori (The New York Stem Cell Foundation)
    • 18:00
      🍹 Social Happy Hour Human Technopole

      Human Technopole

    • ✨ Invited Speaker: Explorations in embryogenesis (Hari Shroff, HHMI Janelia) HT Auditorium

      HT Auditorium

    • 09:30
      🚶‍♀️ Shuffle time Human Technopole

      Human Technopole

      Time to move from the Auditorium to the meeting rooms for the Workshop Session

    • 💻 Workshop Session #5 Human Technopole - Meeting Rooms

      Human Technopole - Meeting Rooms

      • 158
        BiaPy: deep learning based Bioimage Analysis for all audiences Lower Egg Room (Human Technopole)

        Lower Egg Room

        Human Technopole

        This session will provide hands-on experience with BiaPy's user-friendly workflows, zero-code notebooks, and Docker integration, empowering participants to tackle complex bioimage analysis tasks with ease. Whether you are a novice or an experienced developer, discover how BiaPy can streamline your image analysis processes and enhance your research capabilities in life sciences.

        Speaker: Ignacio Arganda-Carreras (Universidad del Pais Vasco)
      • 159
        Connecting software tools into reproducible workflows with CellProfiler plugins Upper Egg Room - Small (Human Technopole)

        Upper Egg Room - Small

        Human Technopole

        In this workshop, we will explore CellProfiler's options for interfacing with other open-source tools, including Cellpose, ImageJ, and ilastik. We will introduce resources on creating additional CellProfiler plugins, but the focus will be on the use of existing interfaces and plugins. Basic CellProfiler experience is not strictly required but strongly encouraged.

        Speaker: Beth Cimini (Broad Institute)
      • 160
        Lazy parallel processing and visualization of large data with ImgLib2, BigDataViewer, the N5-API, and Spark Upper Egg Room - Big (Human Technopole)

        Upper Egg Room - Big

        Human Technopole

        Modern microscopy and other scientific data acquisition methods generate large high-dimensional datasets exceeding the size of computer main memory and often local storage space. In this workshop, you will learn to create lazy processing workflows with ImgLib2, to use the N5 API for storing and loading large n-dimensional datasets, and to use Spark to parallelize such workflows on compute clusters. You will use BigDataViewer to visualize and test processing results. Participants will perform practical exercises and learn skills applicable to high-performance / cluster computing.
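
        For orientation, the same lazy pattern looks like this in Python with zarr and dask (the workshop itself uses the Java stack; "volume.zarr" is a placeholder dataset):

            import dask.array as da
            import zarr

            z = zarr.open("volume.zarr", mode="r")   # chunked n-dimensional store
            lazy = da.from_zarr(z)                   # no data loaded yet
            mip = lazy.max(axis=0)                   # still lazy: a task graph
            result = mip.compute()                   # chunks stream through workers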

        Speakers: John Bogovic (HHMI Janelia), Stephan Saalfeld (HHMI Janelia)
      • 161
        Non-linear registration of whole slide images with Fiji and QuPath PIT.P02.029 (Human Technopole)

        PIT.P02.029

        Human Technopole

        This workshop will provide hands-on demonstrations of non-linear image registration workflows essential for various applications in biomedical research (multi-modal imaging, alignment of thin slices to a 3D atlas, and cyclic IF).

        We will showcase the use of ImgLib2-based tools such as BigWarp, Warpy, and ABBA, highlighting their capabilities in performing complex image registrations. Attendees will learn how to extract and apply transformation functions across different platforms, facilitating seamless integration with downstream analysis tools like QuPath (using Groovy) and Python.

        Speaker: Olivier Burri (EPFL)
      • 162
        Recent Advances in ilastik PIT.P01.026 (Human Technopole)

        PIT.P01.026

        Human Technopole

        In this workshop we will show the latest advancements in ilastik, a user-friendly machine learning-based image analysis software that requires no prior machine learning expertise. We will explore how to work with multiscale OME-NGFF data (OME-Zarr), enhancing ilastik's ability to handle large and complex datasets. Additionally, we will discuss improvements in integration with segmentation tools like Cellpose or StarDist, particularly focusing on better support for label image inputs in object classification. Further, we will demonstrate three more advanced workflows of ilastik: boundary-based segmentation (Multicut), Trainable Domain Adaptation, and Neural Network Classification, including their integration with the deep learning tools of the BioImage Model Zoo. Finally, we will show recent developments on integrating ilastik with Python image analysis workflows.
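
        For a flavour of the Python integration, a sketch based on ilastik's experimental API (names and input conventions may differ in the released version; the project file is hypothetical):

        import numpy as np
        from ilastik.experimental.api import from_project_file

        # load a pixel classifier trained in the ilastik GUI (hypothetical file name)
        pipeline = from_project_file("my_pixel_classifier.ilp")
        probabilities = pipeline.predict(np.random.rand(256, 256))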

        Speakers: Benedikt Best (EMBL), Dominik Kutra (EMBL)
      • 163
        Segment Anything using Python for Microscopy Data Mezzanine Room (Human Technopole)

        Mezzanine Room

        Human Technopole

        The Segment Anything Model (SAM) by Meta is a transformer model trained on 11 million images and 1.1 billion masks, known for its exceptional generalization and zero-shot segmentation capabilities. In under a year, SAM has been cited in over 2,500 papers and has been widely adopted for specialized tasks like medical imaging and microscopy. This course will provide hands-on guidance for installing and running SAM on Google Colab, exploring its model parameters, and learning to interpret and utilize its output effectively. While various plugins exist, Python remains a powerful tool, providing users with greater control over the entire segmentation process. The course will primarily focus on Python but will also introduce some published plugins to participants, ensuring a comprehensive understanding of SAM's application and versatility.
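
        A minimal sketch of the prompt-based workflow the course covers, using Meta's segment-anything package (the checkpoint path is a placeholder):

        import numpy as np
        from segment_anything import SamPredictor, sam_model_registry

        sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
        predictor = SamPredictor(sam)

        image = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)  # stand-in RGB image
        predictor.set_image(image)

        # prompt with one foreground point; SAM proposes candidate masks with scores
        masks, scores, logits = predictor.predict(
            point_coords=np.array([[256, 256]]),
            point_labels=np.array([1]),
            multimask_output=True,
        )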

        Speaker: Ranit Karmakar (Harvard Medical School)
      • 164
        Tif2Blender: 3D image visualization in Blender PIT.P01.011 (Human Technopole)

        PIT.P01.011

        Human Technopole

        To render 3D microscopy data in a 3D environment, we often deal with proprietary software or tools with limited functionality. With tif2blender, we can use the open-source and highly versatile 3D modeling software Blender (also used for high-budget movies and games) to render microscopy stacks.
        My aim for this workshop is to familiarize participants with the Blender user interface, guide them through the installation of the tif2blender add-on, and showcase the capabilities of Blender for microscopy visualization. By the end of the session, attendees will know how to create animations from microscopy data in Blender and will have seen advanced features, including programmatic creation of annotations, interaction with label masks, and protein structure loading.

        Speaker: Oane Gros (EMBL)
      • 165
        ZEISS arivis: the unique platform for your image analysis HT Auditorium (Human Technopole)

        HT Auditorium

        Human Technopole

        Extracting information from the acquired images is the final, necessary step to obtain quantitative information from your microscopy experiments.
        This process is known as image analysis, and it groups a series of complex approaches requiring specialized personnel with a solid computer science background and, usually, good programming skills. In this workshop we will introduce ZEISS arivis, the unique and agnostic software platform that guides you through the image analysis workflow, ensuring success when your data challenge you. We will embark on an exciting expedition where we will learn how ZEISS arivis can be at your side in any image analysis challenge, whether you are a beginner or a very experienced user.
        Initially, we will briefly introduce the basic concepts in image analysis, and later we will introduce the arivis Pro GUI structure. Through a series of examples based on real-world datasets from volume electron and fluorescence microscopy, we will learn how to set up an automated analysis and how to use the latest innovations in Artificial Intelligence (AI), directly in arivis Cloud, to succeed in the most complex tasks. Later, we will learn how to integrate AI models into the ZEISS ecosystem and how to extend the capabilities of ZEISS arivis through the integration of external scripts based on the Python programming language.

        Speaker: Riccardo Tognato (ZEISS)
    • 11:15
      ☕ Coffee break HT Auditorium / Covered Piazza

      HT Auditorium / Covered Piazza

    • ✨ Invited Speaker: Title to be announced (Josh Moore, German Bioimaging) HT Auditorium

      HT Auditorium

    • 12:15
      🚶‍♀️ Shuffle time Human Technopole

      Human Technopole

      Time to move from the Auditorium to the meeting rooms for the Workshop Session

    • 💻 Workshop Session #6 Human Technopole - Meeting Rooms

      Human Technopole - Meeting Rooms

      • 166
        AI4Life: Empowering BioImaging through the BioEngine Mezzanine Room (Human Technopole)

        Mezzanine Room

        Human Technopole

        Join us for an engaging hands-on workshop at the Image to Knowledge Conference, where we will introduce participants to our collective efforts in the AI4Life project, a Horizon Europe-funded initiative to make AI tools accessible to the bioimage community and bridge the gap between life scientists and computational experts. This workshop offers participants the opportunity to discover and get started with the BioImage Model Zoo (https://bioimage.io) for sharing AI models, the BioEngine (https://ai4life.eurobioimaging.eu/announcing-bioengine/) for running models in the cloud, and collaborative annotation tools.

        Participants will gain an in-depth understanding of the following key components of the AI4Life project:

        • Using AI Models via the BioEngine in Python: Learn how to interact with the BioEngine server using its Python API, sending images to run a selected model from the BioImage Model Zoo and obtaining results in your own image analysis workflow.
        • Collaborative Image Annotation Tool: Explore our innovative tool for crowd-sourced image annotation among many users, featuring the Segment Anything Model for interactive annotation. This tool facilitates efficient and accurate image labeling, enabling collaborative efforts to enhance dataset quality and annotation precision.

        Designed for life science researchers, computational scientists, and anyone interested in harnessing AI for bioimage analysis, this workshop will equip participants with practical skills on cutting-edge open-source technology to apply AI methods in their own research, contributing to the rapid pace of biomedical discoveries.
        Join us to unlock the potential of AI in bioimage analysis and be part of the AI4Life journey towards a future of enhanced scientific insights and breakthroughs.

        Speakers: Estibaliz Gómez-de-Mariscal (Instituto Gulbenkian de Ciência), Nils Mechtel (KTH Royal Institute of Technology)
      • 167
        Annotating and sharing large images with OME-Zarr and WEBKNOSSOS PIT.P01.026 (Human Technopole)

        PIT.P01.026

        Human Technopole

        OME-Zarr is a community-backed file format that is well suited for storing and sharing images. WEBKNOSSOS is an open-source tool for visualizing, annotating and sharing large multi-dimensional images. In this workshop, we will give an intro to the OME-Zarr format and provide a hands-on walkthrough of the basics of WEBKNOSSOS, including data upload, volume annotation (including AI-based Segment Anything), skeleton annotation, and interoperability with other tools (e.g. Fiji, napari, Neuroglancer) through OME-Zarr.
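
        As a taste of the interoperability side, reading an OME-Zarr image lazily with the ome-zarr-py package (the URL is a placeholder):

        from ome_zarr.io import parse_url
        from ome_zarr.reader import Reader

        reader = Reader(parse_url("https://example.com/image.ome.zarr"))
        image_node = list(reader())[0]
        pyramid = image_node.data        # list of dask arrays, one per resolution level
        print(pyramid[0].shape)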

        Speaker: Norman Rzepka (Scalable Minds)
      • 168
        coLoↄ: A Collaborative Board Game for Teaching Co-Localization Analysis PIT.P01.011 (Human Technopole)

        PIT.P01.011

        Human Technopole

        During the game, the players (the Scientists) team up against a game master (Reviewer#3). For a prompted biological question, the Scientists have to prepare an Experiment Plan using a deck of cards. Their plan will be reviewed and challenged by Reviewer#3 using their own deck. During this workshop, attendees will play the coLoↄ board game in small teams of Scientists, with Reviewer#3 played by either an organizer or a volunteer.

        Speaker: Romain Guiet (EMBL)
      • 169
        Introducing ultrack: cell tracking in Python under segmentation uncertainty Upper Egg Room - Small (Human Technopole)

        Upper Egg Room - Small

        Human Technopole

        In this interactive workshop, we will explore ultrack, a cutting-edge multi-hypothesis tracking framework designed for cell tracking across multiple microscopy modalities. Participants will begin by learning the core principles that drive ultrack's robust performance, including its ability to handle multiple input types such as intensity images, segmentation masks, contours, or a combination of them. The session will progress through segmenting and tracking a dataset from scratch in which a single off-the-shelf cell segmentation model fails. This hands-on experience will equip attendees with the skills to implement this powerful tool in their research workflows.

        Speakers: Jordão Bragantini (CZ Biohub), Teun Huijben (Chan Zuckerberg Biohub San Francisco)
      • 170
        Introduction to AI4Life and BioImage Model Zoo Lower Egg Room (Human Technopole)

        Lower Egg Room

        Human Technopole

        Together with the participants, we will explore the BioImage Model Zoo: an accessible, user-friendly repository of FAIR, pre-trained deep learning models. We will go through the current model collection, browse models through their description cards, and try them out on the website and in partner desktop tools such as ilastik and DeepImageJ. We will communicate with the AI chatbot for tips on image analysis and learn about other projects of the AI4Life team.

        Speakers: Anna Kreshuk (EMBL), Theodoros Katzalis (EMBL)
      • 171
        Phasor analysis of Fluorescence Lifetime Microscopy data with FLUTE PIT.P02.029 (Human Technopole)

        PIT.P02.029

        Human Technopole

        In this workshop we propose to learn the principles of phasor analysis of Fluorescence Lifetime Microscopy (FLIM) data, and to perform image analysis of FLIM data with this approach using the (F)luorescence (L)ifetime (U)l(t)imate (E)xplorer (FLUTE), a new open-source and user-friendly GUI that we recently developed [1,2]. Using FLUTE, participants will learn the basics of FLIM calibration, phasor plotting, interactive visualization and exploration of FLIM data, data interpretation, and advanced phasor analysis and quantification. We also propose to explore and understand the impact of the most important parameters and factors of FLIM acquisition, such as the number of photons, as well as sources of bias such as the presence of background. The phasor analysis will be performed on a variety of FLIM data acquired in the time domain from different fluorophores, autofluorescence from zebrafish embryos, and cells in culture. Examples of functional metabolic images will be explored. It will also be possible to analyze FLIM data that users bring from their labs and/or acquired with different commercial FLIM systems.

        [1] Gottlieb D, Asadipour B, Kostina P, Ung TPL, Stringari C. FLUTE: A Python GUI for interactive phasor analysis of FLIM data. Biological Imaging. 2023;3:e21.
        [2] https://github.com/LaboratoryOpticsBiosciences/FLUTE
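
        For background, the phasor transform itself is compact: each pixel's decay I(t) maps to coordinates (g, s) given by the first Fourier harmonic of the normalized decay. A generic numpy sketch (independent of FLUTE):

        import numpy as np

        def phasor(decay, dt, f_mod):
            """First-harmonic phasor coordinates (g, s) of a FLIM decay."""
            t = np.arange(decay.size) * dt
            w = 2 * np.pi * f_mod
            total = decay.sum()
            g = (decay * np.cos(w * t)).sum() / total
            s = (decay * np.sin(w * t)).sum() / total
            return g, s

        # toy mono-exponential decay: tau = 2.5 ns, sampled at 50 ps, 80 MHz laser
        dt, tau = 50e-12, 2.5e-9
        decay = np.exp(-np.arange(256) * dt / tau)
        print(phasor(decay, dt, 80e6))  # single-lifetime species lie on the universal semicircle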

        Speaker: Chiara Stringari (École polytechnique - Laboratory for Optics & Biosciences)
      • 172
        pymmcore-plus: a pure Python way to control your microscope with Micro-Manager HT Auditorium (Human Technopole)

        HT Auditorium

        Human Technopole

        The workshop aims to introduce pymmcore-plus, a Python package designed for controlling microscopes via the open-source software Micro-Manager within a pure Python environment. It is an extension of pymmcore, the original Python 3.x bindings for the C++ Micro-Manager core and, as such, it operates independently of Java, eliminating the need for Java dependencies. Throughout the workshop, we'll explore the main features of pymmcore-plus, including a multi-dimensional acquisition engine implemented in pure Python that facilitates "on-the-fly" image processing/analysis and enables "smart microscopy" capabilities. Since pymmcore-plus does not rely on the Java Graphical User Interface (GUI), we will also present a related package, named pymmcore-widgets, which provides a collection of Qt-based widgets that can be used in combination with pymmcore-plus to create custom GUIs. We will conclude the workshop by providing a brief overview of napari-micromanager, a plugin for the napari image viewer that combines the functionalities of pymmcore-plus and pymmcore-widgets, furnishing users with a comprehensive GUI solution for Micro-Manager. Documentation: https://pymmcore-plus.github.io/pymmcore-plus/.
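
        A minimal sketch of that pure-Python control loop, using the demo configuration bundled with pymmcore-plus (acquisition parameters are illustrative):

        from pymmcore_plus import CMMCorePlus
        from useq import MDASequence

        core = CMMCorePlus.instance()
        core.loadSystemConfiguration()   # no argument: loads the bundled demo config

        # a small multi-dimensional acquisition: 3 timepoints x 2 channels
        sequence = MDASequence(
            time_plan={"interval": 1, "loops": 3},
            channels=["DAPI", "FITC"],
        )
        core.mda.run(sequence)           # the pure-Python acquisition engine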

        Speaker: Federico Gasparoli (Harvard University)
      • 173
        Using BigTrace for the analysis of curvilinear biological structures Upper Egg Room - Big (Human Technopole)

        Upper Egg Room - Big

        Human Technopole

        In this workshop we will go over the basics of using the BigTrace plugin to extract and analyze filament-, vessel-, and neurite-like structures in large 3D multicolor (timelapse) datasets. The focus will be on volumetric dataset navigation, creation of ROIs, and their analysis: measurements of intensity and geometrical properties, subvolume extraction, and post-processing.

        Speaker: Eugene A. Katrukha (Utrecht University)
    • 13:30
      🍝 Lunch Human Technopole - Covered Piazza

      Human Technopole - Covered Piazza

    • 💻 Workshop Session #7 Human Technopole - Meeting Rooms

      Human Technopole - Meeting Rooms

      • 174
        conv-paint: an easy-to-train interactive pixel classifier for napari Upper Egg Room - Big (Human Technopole)

        Upper Egg Room - Big

        Human Technopole

        We will learn how to use conv-paint, a fast and interactive pixel classification tool for multi-dimensional images. A graphical user interface is integrated into the image viewer napari, but we will also learn how to script the software from the Python ecosystem. As a napari plugin, conv-paint can easily be integrated with other plugins into complex image processing pipelines, even by users unfamiliar with coding. Advanced developers will get pointers on how to integrate their own feature extractors into the conv-paint architecture.

        Speaker: Lucien Hinderling (Institute of Cell Biology - Universität Bern)
      • 175
        Denoising microscopy images with CAREamics Mezzanine Room (Human Technopole)

        Mezzanine Room

        Human Technopole

        CAREamics is a modern re-implementation of widely used algorithms (CARE, N2N, N2V) and of more recent methods (DivNoising, HDN, muSplit, etc.), with a strong emphasis on usability for non-specialists. In this workshop, we will walk users through standard pipelines: how to approach a denoising task, which method to choose, and the various ways to use CAREamics, including via its napari UI.
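
        As a preview of the scripting route, a sketch of a Noise2Void run with CAREamics' configuration helpers (parameter names follow its documentation and should be treated as illustrative):

        import numpy as np
        from careamics import CAREamist
        from careamics.config import create_n2v_configuration

        noisy = np.random.rand(512, 512).astype(np.float32)  # stand-in for real data

        # Noise2Void is self-supervised: it trains on the noisy image itself
        config = create_n2v_configuration(
            experiment_name="n2v_demo",
            data_type="array", axes="YX",
            patch_size=[64, 64], batch_size=16, num_epochs=10,
        )
        careamist = CAREamist(source=config)
        careamist.train(train_source=noisy)
        denoised = careamist.predict(source=noisy)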

        Speaker: Joran Deschamps (Human Technopole)
      • 176
        Exploiting NanoPyx’s Liquid Engine to accelerate image analysis pipelines PIT.P02.029 (Human Technopole)

        PIT.P02.029

        Human Technopole

        This workshop will delve into the innovative features of NanoPyx, a Python framework designed to streamline microscopy image analysis. Participants will gain hands-on experience with the image analysis methods implemented in NanoPyx, such as image registration, super-resolution, and quality metrics, using NanoPyx's multiple user interfaces: a napari plugin, Jupyter notebooks, and a Python library. Attendees will also learn how to implement their own Liquid Engine methods through the use of a Cookiecutter template. By the end of the workshop, participants will have gained practical insights into leveraging NanoPyx's adaptive nature and self-optimization capabilities to enhance the efficiency and scalability of their bioimaging experiments.

        Speaker: Bruno Manuel Santos Saraiva (Instituto Gulbenkian de Ciência)
      • 177
        FeatureForest: leveraging foundational models for user-guided semantic segmentation Lower Egg Room (Human Technopole)

        Lower Egg Room

        Human Technopole

        We believe that FeatureForest addresses some of the shortcomings of both deep learning approaches and classical classifiers. While not a novelty, coupling deep-learning features with a random forest allows complex biological objects to be segmented easily, without the need for extensive manual data labeling or expert deep-learning knowledge. We want to advertise the tool, as it can be helpful to many researchers, and to gather feedback directly from the community.
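
        The core idea can be sketched generically: extract per-pixel features with a pretrained network, then train a random forest on a few labeled pixels (an illustration of the approach, not FeatureForest's actual API):

        import numpy as np
        import torch
        from sklearn.ensemble import RandomForestClassifier
        from torchvision.models import vgg16

        image = np.random.rand(256, 256).astype(np.float32)  # stand-in image
        labels = np.zeros((256, 256), dtype=np.uint8)        # 0 = unlabeled
        labels[10:20, 10:20] = 1                             # scribble for class 1
        labels[100:110, 100:110] = 2                         # scribble for class 2

        # per-pixel deep features: first conv block of VGG16 (keeps spatial size)
        backbone = vgg16(weights="DEFAULT").features[:4].eval()
        with torch.no_grad():
            x = torch.from_numpy(image)[None, None].repeat(1, 3, 1, 1)
            feats = backbone(x)[0].numpy()                   # (C, H, W)
        feats = feats.reshape(feats.shape[0], -1).T          # (H*W, C)

        # fit the forest on labeled pixels only, then predict a full segmentation
        labeled = labels.ravel() > 0
        rf = RandomForestClassifier(n_estimators=100)
        rf.fit(feats[labeled], labels.ravel()[labeled])
        segmentation = rf.predict(feats).reshape(image.shape)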

        Speaker: Mehdi Seifi (Human Technopole, HT)
      • 178
        Intro to MemBrain v2: Membrane Analysis in Cryo-ET PIT.P01.011 (Human Technopole)

        PIT.P01.011

        Human Technopole

        In this workshop, we would like to introduce the functionalities of MemBrain v2 and demonstrate its use with some showcase examples.
        We will introduce MemBrain v2's three main components: MemBrain-seg is our out-of-the-box U-Net that gives good segmentations on a wide variety of tomograms; MemBrain-pick closely interacts with an annotation software to enable interactive training and annotation of a membrane protein localization network; and MemBrain-stats combines the outputs of the previous components and aids the evaluation of the analyzed membranes.

        Speakers: Joel Valdivia Ortega (Helmholtz Zentrum München), Lorenz Lamm (Helmholtz Zentrum München)
      • 179
        QuPath for Groovy Scripters HT Auditorium (Human Technopole)

        HT Auditorium

        Human Technopole

        QuPath is a popular open-source platform for visualizing, annotating and analyzing complex images - including whole slide and highly multiplexed datasets. QuPath is written in Java and scriptable with Groovy, which makes it portable, powerful, and... sometimes a pain if you aren't a programmer. It can also be annoying if you are a programmer, but you find Groovy's syntax weird { and impenetrable } and QuPath's API even worse.
        This workshop will introduce the key concepts of scripting in QuPath, demonstrate what is possible, and show some Groovy tips and tricks. It will also explore some of the implications of QuPath's design, to explain why some things that are hard in other software might be easy in QuPath - and vice versa.

        Speaker: Léo Plat (University of Edinburgh)
      • 180
        Taggathon on BIII.eu: contributing to a unique database of image analysis tools and workflows Upper Egg Room - Small (Human Technopole)

        Upper Egg Room - Small

        Human Technopole

        Building on the community efforts of NEUBIAS (the network of bio-image analysts) and COMULIS (Correlated Multimodal Imaging in Life Science), we have built a database of image analysis software that aims to be fed and used by the community. In this workshop, we will show how to search efficiently for an existing bio-image analysis tool, how to add a new tool or resource, and how to curate an existing entry, which participants will then practice. We will also explain how to contribute to the knowledge model (ontology), how to reuse the database efficiently, and how to contribute to the development and evolution of BIII.

        Speaker: Perrine Paul-Gilloteaux (University of Nantes)
    • 15:00
      🚶‍♀️ Shuffle time Human Technopole - HT Auditorium

      Human Technopole - HT Auditorium

      Time to go to the HT Auditorium

    • 👨‍🔬 Selected Talks HT Auditorium

      HT Auditorium

      • 181
        Too good to be true: the perils of applying image quality metrics in fluorescence microscopy Human Technopole (HT Auditorium)

        Human Technopole

        HT Auditorium

        Measuring the quality of fluorescence microscopy images is critical for applications including automated image acquisition and image processing. Most image quality metrics used are full-reference metrics, where a test image is quantitatively compared to a reference or ‘ground truth’ image. Commonly used full-reference metrics in fluorescence microscopy include NRMSE, PSNR, SSIM and Pearson’s correlation coefficient, as these are used for image quality assessment in computer vision applications. However, fluorescence microscopy datasets have significantly different characteristics from images used in other computer vision applications (‘natural scene’ images), including differences in information density, bit depth and noise characteristics. Here we discuss the sensitivity of NRMSE, PSNR, SSIM and Pearson’s correlation coefficient to fluorescence microscopy-specific features and how these can bias quality assessment. As image quality is important for downstream analysis, these metrics are often used as a proxy for how accurately biological information can be retrieved from images. However, by correlating metric values with downstream biological analyses, we show that this is not necessarily the case, and provide insight into how, when, and indeed if these metrics can be reliably used.
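
        For reference, all four metrics discussed are one-liners in the Python ecosystem (a minimal sketch with synthetic data):

        import numpy as np
        from scipy.stats import pearsonr
        from skimage.metrics import (normalized_root_mse,
                                     peak_signal_noise_ratio,
                                     structural_similarity)

        rng = np.random.default_rng(0)
        gt = rng.poisson(50, size=(256, 256)).astype(float)  # 'ground truth'
        test = gt + rng.normal(0, 5, size=gt.shape)          # degraded test image
        value_range = gt.max() - gt.min()

        print("NRMSE  :", normalized_root_mse(gt, test))
        print("PSNR   :", peak_signal_noise_ratio(gt, test, data_range=value_range))
        print("SSIM   :", structural_similarity(gt, test, data_range=value_range))
        print("Pearson:", pearsonr(gt.ravel(), test.ravel())[0])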

        Speaker: Siân Culley (King's College London)
      • 182
        Interpreting Microscopy Images with Machine Learning Human Technopole (HT Auditorium)

        Human Technopole

        HT Auditorium

        Machine learning (ML) is becoming pivotal in life science research, offering powerful tools for interpreting complex biological data. In particular, explainable ML provides insights into the reasoning behind model predictions, highlighting the data features that drove the model outcome. Our work focuses on building explainable ML models for microscopy images. These models not only classify cell fates but also reveal the underlying data patterns and features influencing these classifications. Specifically, we have developed models to classify individual lung cancer cell fates, such as proliferation and death, from live-cell microscopy data. By leveraging explainable ML techniques, we gained insights into the decision-making process of these models, revealing the key cellular features that determine whether a cell would proliferate or die. The combination of ML and specialised image acquisition will enable us to address specific biological questions and uncover novel insights about underlying cellular mechanisms. This work demonstrates the potential of explainable ML in enhancing our understanding of complex biological processes, and how we can gain novel knowledge from images.

        Speaker: Inês Martins Cunha (SciLifeLab, University of Stockholm)
      • 183
        OptiCell3D: inference of the mechanical properties of cells from 3D microscopy images Human Technopole (HT Auditorium)

        Human Technopole

        HT Auditorium

        Cell mechanics impact cell shape and drive crucial morphological changes during development and disease. However, quantifying mechanical parameters in living cells in a tissue context is challenging. Here, we introduce OptiCell3D, a computational pipeline to infer mechanical properties of cells from 3D microscopy images. OptiCell3D leverages a deformable cell model and gradient-based optimization to identify the hydrostatic pressure of the cytosol and the surface tension of each cell interface, based on 3D cell geometries obtained from microscopy images. OptiCell3D is implemented as a Python library. To ensure broad accessibility and interoperability with other bioimage analysis tools, we have developed a graphical user interface as a napari plugin. We validated OptiCell3D by calculating the error in mechanical parameters inferred from spheroids of cells generated with SimuCell3D, a deformable cell model [1]. Lastly, we apply OptiCell3D to study the mechanical changes driving morphogenesis during early embryo development, using time-lapse microscopy images. We anticipate that OptiCell3D will be broadly useful in studies on how the mechanical properties of cells shape their morphology and organization.

        [1] Runser et al., Nature Computational Science, 2024

        Speaker: Kevin Yamauchi (ETH Zurich)
    • ✨ Invited Speaker: Title to be announced (Aubrey Weigel, HHMI Janelia) HT Auditorium

      HT Auditorium

    • 16:30
      ☕ Coffee break Human Technopole - Covered Piazza

      Human Technopole - Covered Piazza

    • 🎤 Panel Discussion HT Auditorium

      HT Auditorium

    • 🎤 Closing Remarks HT Auditorium

      HT Auditorium