Imaging of living cells over time provides unparalleled data on the dynamic cellular characteristics that give rise to complex biological functions. Quantitative time lapse microscopy applied to the analysis of individual cells used to require labor-intensive, custom-designed image analysis algorithms to segment and track single cells. Deep learning promises to dramatically change this paradigm. Our work focuses on elucidating the generalizability and limitations of model training as a function of the imaging system, cell type, culture conditions, and other factors. Methods are needed to rapidly obtain large amounts of training data, and to test, compare and validate models. Our program components are: 1) very high-speed time lapse image acquisition, 2) storage, analysis and management of time lapse image data, and 3) testing and validating pipelines for the analysis of live cell image data.
We are exploring methods for very high-speed cell image data collection that enable repeated sampling of thousands of cells every 2 minutes. Rapid sampling allows us to observe dynamic processes in cells (such as cell division) on a relevant time scale and to track unlabeled cells over long periods in culture. This effort involves collecting and handling very large image datasets, and developing and validating analysis pipelines that use deep learning.
Quantitative time lapse microscopy applied to the analysis of cellular populations requires sufficient spatial resolution to identify individual cells, and sufficient temporal sampling to track those cells as they grow, move and divide. A fundamental challenge in time lapse imaging is the inverse relationship between the temporal sampling rate that can be achieved and the area that can be imaged within the time lapse interval (a.k.a. the spatial bandwidth product of the imaging system).
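This trade-off can be made concrete with a small sketch: within a fixed time lapse interval, a shorter per-field acquisition time lets more fields of view (i.e., more area) be revisited. The per-field times and field size below are assumed values for illustration, not measured instrument parameters.

```python
# Illustrative sketch of the area-vs-sampling-rate trade-off.
# FIELD_AREA_MM2 and the per-field times are assumptions, not measurements.

FIELD_AREA_MM2 = 0.33 * 0.33  # assumed field of view, mm^2

def fields_per_interval(interval_ms: int, per_field_ms: int) -> int:
    """How many fields of view can be revisited within one time lapse interval."""
    return interval_ms // per_field_ms

def imaged_area_mm2(interval_ms: int, per_field_ms: int) -> float:
    """Total area revisited per interval, in mm^2."""
    return fields_per_interval(interval_ms, per_field_ms) * FIELD_AREA_MM2

# A 2-minute interval with an assumed 500 ms per field, versus ~12 ms per field:
for per_field in (500, 12):
    print(per_field, fields_per_interval(120_000, per_field),
          round(imaged_area_mm2(120_000, per_field), 1))
```

Halving the per-field time doubles the area that can be covered at the same sampling rate, which is why reducing acquisition overhead directly increases the spatial bandwidth product.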
To simultaneously obtain high spatial coverage and temporal sampling, we are working with technology developed by Inscoper that eliminates software latency to optimize the rate at which an automated acquisition workflow can be executed and maximize the spatial bandwidth product of a traditional microscope [Figure 1]. We are also developing acquisition and analysis workflows around continuous motion imaging. In this operational mode, the motorized x-y stage moves continuously, and the illumination and camera are event-synchronized for a rapid acquisition sequence. Time lapse image data can be acquired ~50-fold faster compared with the traditional “stop-and-stare” operational mode of a multi-field-of-view workflow. Images are acquired every ~12 ms, so a whole 6-well plate (~12 × 10⁸ cells) can be imaged every ~3 minutes. A description of the acquisition method and implementation can be found here.
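A back-of-the-envelope check of these figures: the ~12 ms continuous-motion frame time is from the text above, while the stop-and-stare per-field time is an assumed value (stage move, settle, and exposure) chosen to illustrate where a ~50-fold speedup comes from.

```python
# Rough check of the ~50-fold speedup of continuous motion imaging over
# "stop-and-stare". CONTINUOUS_MS is from the text; STOP_AND_STARE_MS is
# an assumed per-field overhead for illustration only.
CONTINUOUS_MS = 12
STOP_AND_STARE_MS = 600  # assumed: stage move + settle + exposure per field

speedup = STOP_AND_STARE_MS / CONTINUOUS_MS
fields_in_3_min = 3 * 60 * 1000 // CONTINUOUS_MS
print(f"~{speedup:.0f}x speedup; {fields_in_3_min} fields in 3 minutes")
```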
Another method to achieve high spatial bandwidth product imaging is Fourier ptychographic microscopy. We have the capabilities and interest to develop workflows for dynamic cellular analysis based on this cutting-edge technology. With high spatial bandwidth product imaging capabilities, bioscience researchers can explore a large number of applications, including the impact of drugs on cell survival, mitosis, or the motion of single cells. The dynamic interactions of gene regulatory network components could also potentially be examined with very high-speed time lapse image acquisition (Plant, Halter, 2020).
Very high-speed acquisition of time lapse image data results in the weekly production of experimental datasets on the order of 1 TB to 10 TB. Deep learning models (e.g., CNNs), in addition to classical image analysis, provide another tool for the quantitative analysis of cellular attributes from microscopic image data, and they do not require the rules-based algorithm development of classical image analysis. These models do, however, require high-quality, unbiased training data, along with the computational hardware and software to achieve the model ‘learning’, before they can be used for cell image analysis. We are therefore developing a laboratory with state-of-the-art data tools to advance the application of deep learning for better quantification and prediction of complex biological processes.
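To see why the data volumes reach this scale, a rough estimate suffices. The camera resolution, bit depth, and acquisition schedule below are all assumed values for illustration, not a description of our actual setup.

```python
# Rough storage estimate for a week of high-speed time lapse imaging.
# Every parameter here is an illustrative assumption.

BYTES_PER_PIXEL = 2          # assumed 16-bit camera
FRAME_PIXELS = 2048 * 2048   # assumed sensor size
FIELDS = 25                  # fields of view revisited each pass
PASSES_PER_DAY = 24 * 30     # one pass every 2 minutes
DAYS = 7

frame_bytes = FRAME_PIXELS * BYTES_PER_PIXEL
total_bytes = frame_bytes * FIELDS * PASSES_PER_DAY * DAYS
print(f"{total_bytes / 1e12:.1f} TB per week")
```

With these assumptions a single uncompressed experiment already lands at the low end of the 1 TB to 10 TB range; more fields, channels, or z-planes push it toward the high end.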
To fully realize the potential of deep learning for large-scale bioimage analysis, we are collaborating with computational scientists to develop strategies to build and deploy trusted A.I. analysis pipelines. Part of this work involves building and testing high-speed imaging systems (see above) for rapidly generating data at an appropriate scale for training new models, then testing models over a range of data qualities (e.g., cell density). Another aspect is designing experiments that produce low-ambiguity data that can serve as ‘ground truth’ for validating a quantitative time lapse imaging workflow.
The following questions drive our work in advancing these complex measurement systems: When can we have confidence in the quantitative output of a deep learning A.I. model? Under what conditions does the model fail? We are also evaluating the effect of training data on A.I. model performance characteristics such as accuracy, reliability, robustness and bias. These systematic studies involve the acquisition of image datasets under varying conditions for the training and testing of A.I. models.
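One concrete form such a systematic test can take is scoring a model's predicted segmentation masks against low-ambiguity ground truth with a simple overlap metric, then comparing that score across acquisition conditions. A minimal sketch, using toy masks rather than real annotations:

```python
def iou(pred: set, truth: set) -> float:
    """Jaccard index (intersection over union) of two pixel-coordinate sets."""
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

# Toy masks: a 4x4 "cell" and a prediction shifted by one pixel.
truth = {(r, c) for r in range(2, 6) for c in range(2, 6)}
pred = {(r, c) for r in range(3, 7) for c in range(3, 7)}
print(round(iou(pred, truth), 3))  # 9 shared pixels / 23 in the union -> 0.391
```

Tracking how a score like this degrades as, say, cell density increases is one way to characterize the conditions under which a model fails.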
With trusted A.I. systems, fit-for-purpose cellular measurements of greater complexity than ever before are possible.
DISCLAIMER: Certain commercial equipment, instruments, or materials (or suppliers, or software) are identified in this webpage to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.