Live-cell imaging in the deep learning era

Joanna W. Pylvänäinen1, Estibaliz Gómez-de-Mariscal2, Ricardo Henriques2,3 and Guillaume Jacquemet1,4,5,6

Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (e.g., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.

Addresses
1 Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
2 Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
3 University College London, London WC1E 6BT, United Kingdom
4 Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland
5 InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland
6 Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland

Corresponding author: Jacquemet, Guillaume (guillaume.jacquemet@abo.fi)

Current Opinion in Cell Biology 2023, 85:102271
This review comes from a themed issue on Cell Dynamics 2023
Edited by Kandice Tanner and Anne Straube
For a complete overview of the section, please refer to the article collection - Cell Dynamics 2023
Available online 27 October 2023
https://doi.org/10.1016/j.ceb.2023.102271
0955-0674/© 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Introduction
Live imaging helps us understand life's complexity by recording how tissues, cells, and molecules behave over time. Yet, implementing live imaging and analyzing microscopy videos remain challenging (Figure 1a). Firstly, live imaging is frequently susceptible to drift, leading to unwanted sample displacement over time. Secondly, when using fluorescence microscopy, balancing imaging frequency, resolution, and specimen health is critical and challenging [1]. Finally, live imaging experiments tend to generate an avalanche of data that can be hard to extract and analyze (Figure 1a).

To mitigate some of these issues, there is ongoing work on hardware improvement. For example, gentler illumination strategies and more sensitive detectors can reduce phototoxicity [2,3]. However, hardware improvements are only part of the solution. Increasingly, powerful software advancements enhance microscopy, providing us with more information from our samples (e.g., Refs. [4–6]). Over the past few years, significant strides have been made in data processing tools, broadly categorized into 1) tools aiming at improving live imaging datasets and 2) tools aiming at extracting quantitative information from live imaging datasets (Figure 1b). Many of these tools now operate through deep learning (DL), a subfield of artificial intelligence that can autonomously identify relevant image features to carry out specific tasks.

Figure 1. Live-cell imaging main challenges and computational solutions. (a) Live fluorescence imaging presents unique challenges that require a careful balance between managing light sensitivity and ensuring optimal spatial, temporal, and spectral resolution to observe intended biological phenomena accurately. Upon data acquisition, researchers need to select the most effective methods to derive biological insights from their videos, with strategies spanning from manual analysis to turn-key solutions or custom-developed analysis pipelines. Each approach has strengths and limitations, particularly in throughput, speed, and accuracy. The figure, illustrated as spider plots, underscores the need for trade-offs in acquiring and analyzing live imaging data. (b) Computational tools designed to handle live cell imaging datasets can be primarily divided into two categories: (i) tools that improve live cell imaging data and mitigate phototoxicity and (ii) tools that facilitate data extraction and analysis. The former category includes methods for drift correction, denoising, resolution enhancement, and artificial labeling. The latter encompasses segmentation, object detection, and tracking tools, followed by time series analysis. Integrating these tools into microscope acquisition software to autonomously control microscope acquisition parameters paves the way for self-driving microscopes. The tool categories are displayed in no particular order, as their use depends on the datasets and needs. The central arrow illustrates that self-driving microscopes can dynamically utilize these approaches to control microscope acquisition parameters.
This review highlights concepts and recent tools useful for researchers interested in live imaging. The tools and articles highlighted are selected based on our experience working with live imaging data and our enthusiasm for this rapidly evolving field. This review is not exhaustive; instead, we aim to offer a concise overview of available tools to inspire and empower users.

Deep learning and video analysis
DL is revolutionizing our ability to analyze microscopy images (see Refs. [7,8] for in-depth reviews). When using DL, a multi-layer artificial neural network, also known as a Deep Neural Network (DNN), is first trained on a dataset to create a "model" capable of executing a specific bioimage analysis task (Figure 2a). Once trained, the model can then be used on similar images. Because of this, the training step is essential as it dictates the performance and specificity of the DNN [9]. When selecting a DL method for processing live imaging data, users must consider the type of training data the chosen approach requires, along with the dimensionality of their data. Typically, DL methods are trained in a supervised manner and rely on paired sets of images. However, alternate strategies to deal with unpaired datasets exist, such as unsupervised and self-supervised training (Figure 2b). Depending on the data available, these methods may offer additional flexibility. A key characteristic of microscopy videos is the inherent consistency of information across sequential frames. Exploiting this temporal consistency can significantly enhance the precision of data analysis.
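To make the idea of paired training data concrete, the short sketch below artificially degrades a clean image into a matching noisy input, in the spirit of the 'crappifier' strategy used to train PSSR [21]. This is a minimal illustration rather than any published implementation; the photon budget and read-noise level are arbitrary assumptions.

```python
import numpy as np

def crappify(clean, photons=20.0, read_noise=2.0, seed=None):
    """Degrade a high-SNR image into a synthetic low-SNR counterpart.

    The (non-negative) clean image is rescaled to a low photon budget,
    corrupted with Poisson shot noise and then Gaussian read noise,
    mimicking a low-light acquisition.
    """
    rng = np.random.default_rng(seed)
    clean = clean.astype(np.float32)
    scale = float(clean.max()) or 1.0
    rate = clean / scale * photons                      # expected photon counts
    noisy = rng.poisson(rate).astype(np.float32)        # shot noise
    noisy += rng.normal(0.0, read_noise, clean.shape)   # detector read noise
    return np.clip(noisy / photons * scale, 0, None)

# Each (crappify(img), img) pair can then serve as an (input, ground truth)
# example for supervised training of a restoration network.
```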
DL algorithms, adept at handling multi-dimensional data, are particularly effective in analyzing 3D microscopy datasets. However, the current landscape of DL methods for bioimage analysis is focused on 2D and 3D volumetric datasets. As a result, analyzing 2D, 3D, and 4D videos using DL is often performed frame by frame, which overlooks the time consistency in the data. 3D volumetric approaches can be used to process 2D videos, but this might not fully harness the potential of the data and is generally suboptimal in terms of memory (Figure 2c).

Figure 2. Deep learning and video analysis. (a) The DL pipeline. A DL model must first be trained using a training dataset. This step is generally time-consuming and takes hours to weeks, depending on the size of the training dataset. Once trained, a model can be directly applied to other images and generate predictions. This second step is generally much faster (seconds to minutes). (b) Types of training datasets. In supervised training, a collection of representative input images, each coupled with their anticipated results (i.e., the ground truth), is given to the DNN. Here, the training dataset includes matching pairs of noisy and high signal-to-noise ratio images. Alternate training methods include unsupervised training, where the model is trained with inputs and outputs not necessarily from the same field of view, and self-supervised training, where paired datasets are generated solely from the input images. (c) DL and data dimensions. Live cell imaging datasets can have multiple dimensions. Given that DL tools for bioimage analysis are typically designed to handle up to three dimensions, applying these tools to video processing necessitates varied strategies, contingent on the number of dimensions present in the data to be processed. Here, a 2D model is a model capable of processing 2D data, and a 3D model is capable of processing 3D data. The microscopy images displayed in all panels are breast cancer cells labeled with silicon rhodamine DNA to visualize the nuclei and imaged using a spinning disk confocal microscope.

Drift and bleach correction
Microscopy videos must often be corrected to ensure consistency across time frames. This can include removing unwanted drift and upholding image quality throughout the video. While DL algorithms can perform these tasks, they are generally not used for this purpose due to their slow speed or the lack of appropriate training datasets. Drift correction accounts for unwanted shifts in the position of the specimen over time, ensuring consistent frame and channel alignment (Figure 3a and b). For this purpose, we routinely use Fast4DReg [10] and Correct 3D drift [11].
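Under the hood, rigid drift correction amounts to estimating a shift between each frame and a reference and resampling accordingly. The sketch below is a minimal translation-only illustration built on scikit-image's phase cross-correlation; it is not the Fast4DReg implementation and ignores rotation, axial drift, and non-rigid deformation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def correct_drift(video, upsample=10):
    """Register every frame of a (T, Y, X) video to the first frame.

    Estimates the XY translation of each frame by phase cross-correlation
    and shifts the frame back accordingly (with sub-pixel precision).
    """
    reference = video[0]
    registered = np.empty_like(video)
    registered[0] = reference
    for t in range(1, video.shape[0]):
        drift, _, _ = phase_cross_correlation(
            reference, video[t], upsample_factor=upsample)
        registered[t] = nd_shift(video[t], drift)  # apply corrective shift
    return registered
```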
Bleach correction addresses the signal loss occurring when specimens are exposed to too much light over prolonged periods or to uneven illumination. To correct our movies, we routinely use the Bleach correction ImageJ plugin (also available in Napari) [12,13].

Figure 3. Examples of computational tools that can improve live cell imaging movies. This figure illustrates the power and versatility of computational tools in enhancing the quality, resolution, and content of various types of microscopy images. (a) Time projection of drifting live images of nuclei, captured by a widefield microscope, corrected using Fast4DReg [10]. The color gradient, transitioning from purple (first frame) to white (last frame), denotes the temporal progression. Scale bar: 50 µm. (b) A cancer cell in the mouse lung vasculature, in motion and imaged via an Airyscan confocal microscope, is displayed through a maximum-intensity projection. Channel misalignment has been corrected using Fast4DReg [10]. Scale bar: 10 µm. (c) Noisy images of nuclei, acquired using a spinning disk confocal microscope, were denoised using a CARE 2D model ([17], as described in Ref. [9]). Scale bar: 50 µm. (d) Breast cancer cells labeled with lifeact-RFP were imaged live using 3D SIM. Images were restored using a CARE 3D model ([17], as described in Ref. [19]). Scale bar: 5 µm. (e) Cells labeled with Lifeact were imaged using a widefield microscope [45]. The increased image resolution was achieved using the DFCAN deep learning network (as described in Ref. [33]). Scale bar: 5 µm. (f) This illustration showcases how a DL network like CAFI can enrich the temporal resolution of a live cell imaging dataset through smart interpolations [39]. (g) Brightfield microscopy was used to image migrating breast cancer cells, and the nuclei image was digitally generated from the brightfield image using a Pix2pix model [46]. Scale bar: 100 µm. (h) Breast cancer cells labeled with lifeact-RFP were imaged using a spinning disk confocal microscope. The nuclei image was digitally generated from the lifeact image using a Pix2pix model ([46], as described in Ref. [19]). Scale bar: 100 µm.

Denoising and restoring live imaging data
Fluorescent live imaging necessitates low concentrations of fluorescent labels and minimal laser power to prevent the disruption of biological processes and ensure the sample's health. This often leads to the acquisition of noisy images. DL has been successfully applied to remove this noise while preserving the useful signal, thereby facilitating the extraction of meaningful biological information from the imaging data (e.g., [14,15]). DL-based denoising algorithms can be broadly categorized into two groups based on the required training datasets: (i) supervised and (ii) self-supervised (for a deeper review, see Ref. [16]).

Supervised DL algorithms, such as CARE [17] and 3D-RCAN [18], necessitate paired high- and low-quality images for training. Remarkably, these tools often extend beyond denoising tasks; they serve as comprehensive image restoration algorithms capable of enhancing resolution and eliminating image artifacts, provided they are trained with an appropriate dataset (Figure 3c and d) [9]. These algorithms are changing how live imaging experiments are planned. Indeed, several strategies can be used to generate training datasets to denoise live imaging data, such as using fixed samples [19,20], artificially generating noisy data [21], or collecting live data before or during the timelapse acquisition [22].

Self-supervised algorithms such as Noise2Void [23] allow the training of denoising models directly from noisy images. These algorithms generally assume that the noise is independent of the pixel location (e.g., Gaussian or Poisson noise). If the assumption is met, these approaches can yield results comparable to supervised training without needing a paired training dataset.
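Once trained (for example, with ZeroCostDL4Mic [19]), applying a restoration model is typically only a few lines of code. The hedged sketch below applies an already-trained CARE model [17] frame by frame; the model name, folder, and file paths are placeholder assumptions, and the snippet assumes the csbdeep and tifffile packages are installed.

```python
import numpy as np
from csbdeep.models import CARE
from tifffile import imread, imwrite

# Load a previously trained CARE model from disk; 'denoising_model' and
# the 'models' folder are placeholders for your own trained model.
model = CARE(config=None, name='denoising_model', basedir='models')

noisy = imread('noisy_timelapse.tif')  # (T, Y, X) stack, placeholder path

# Denoise frame by frame; 'YX' tells CARE the axes of each 2D image.
restored = [model.predict(frame, axes='YX') for frame in noisy]
imwrite('restored_timelapse.tif', np.stack(restored))
```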
However, these self-supervised algorithms may not always be suitable if the noise spatial-independence assumption is unmet (e.g., structured noise) [24]. Current state-of-the-art denoising methods integrate knowledge about the image formation process into the learning process, which yields impressive results (e.g., [25,26]). Users interested in denoising may consider Aydin, which provides a number of self-supervised, auto-tuned, and unsupervised image denoising algorithms [27]. Of note, it is generally advised to avoid quantifying absolute pixel intensities after DL-based denoising, as DL processing may introduce non-linear changes to the data.

Improving the spatiotemporal resolution of live imaging data
Live cell imaging aims to capture rich spatiotemporal information while minimizing sample damage. However, light microscopy's ~250 nm diffraction limit hinders detailed visualization. While various super-resolution strategies exist [28], they rarely suit extended live imaging due to their high laser power requirements. Several analytical methods have demonstrated the capacity to enhance live imaging resolution. Examples of recent non-DL algorithms that improve the resolution of live imaging data include eSRRF [29], SACD [30], and BF-SIM [31]. Super-resolution DL algorithms for live imaging fall into two categories. Algorithms such as SFSRM or DFCAN can super-pixelate an image and predict missing details (Figure 3e) [21,32,33]. Other DL algorithms can aid the post-processing required by most super-resolution microscopy techniques, including SIM [26,34,35] and single-molecule localization microscopy (SMLM) [36,37]. DL-based algorithms can also be used to recover missing temporal information via smart interpolation. For instance, DBlink aids faster live SMLM by performing spatiotemporal interpolation [38]. As another example, CAFI can predict intermediary images post-acquisition, enhancing temporal resolution [39] (Figure 3f).

Artificial labeling
Artificial labeling is a computational technique that utilizes DL to predict staining based on other microscopy images [40,41]. For instance, artificial labeling can predict a nucleus staining from brightfield or F-actin images (Figure 3g and h). The predicted staining can assist downstream analysis, such as segmentation and tracking [19,20,42]. Artificial labeling is especially beneficial for live imaging as it allows for staining recovery without explicit imaging, thereby improving acquisition speed and multiplexing while reducing phototoxicity. When combined with live brightfield, phase, or digital holographic imaging, artificial labeling offers a non-invasive, non-destructive approach for comprehensive cellular structure visualization [43,44].

Segmentation and tracking
One key strategy to extract biological information from videos is tracking, which involves following objects of interest over time to quantify their behaviors. Tracking is typically a two-step process: object detection at each time point and track formation via detection linking (Figure 4a). Tracking accuracy often relies on successful object recognition, where segmentation methods employing machine learning and DL algorithms have demonstrated proficiency for various bioimages (for review, see Ref. [47]).
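The detect-then-link paradigm described above can be illustrated in a few lines of Python with trackpy, a classical (non-DL) particle-tracking library; this is a generic sketch rather than any of the tools discussed here, and the file path, diameter, minmass, search_range, and memory values are placeholder assumptions to be tuned per dataset.

```python
import trackpy as tp
from tifffile import imread

video = imread('nuclei_timelapse.tif')  # (T, Y, X) stack, placeholder path

# Step 1: detect bright blob-like objects (e.g., nuclei) in every frame.
# 'diameter' must be an odd integer roughly matching the object size.
spots = tp.batch(video, diameter=15, minmass=500)

# Step 2: link detections across frames into tracks. 'search_range' is the
# maximum displacement (in pixels) allowed between consecutive frames, and
# 'memory' tolerates short detection gaps.
tracks = tp.link(spots, search_range=10, memory=2)

# Discard spurious short tracks before downstream analysis.
tracks = tp.filter_stubs(tracks, threshold=10)
```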
Because segmentation quality largely determines tracking quality, DL segmentation tools are now integrated into tracking platforms, such as TrackMate [48], Cell-ACDC [49], DeepTree [50], and ELEPHANT [51]. These tools cater to different needs based on the nature of the data, required features, and the user's preferred computational platform. For instance, ELEPHANT aims at tracking objects within large 4D movies. TrackMate, integrated in Fiji [52], is feature-packed and allows users, for instance, to follow the morphological and intensity changes of the tracked object over time. DL algorithms can also be used for the object linking step [53]. Despite the growing prevalence of DL-based strategies in tracking, cleverly crafted classical algorithms remain state-of-the-art for certain uses, such as the segmentation and tracking of mitochondria [54]. Finally, an integral aspect of automated tracking is verifying the performance of the chosen method for a specific dataset, for which several metrics have been developed to score tracking quality [55–57]. These metrics can also guide the optimization of tracking parameters, ensuring the most accurate and useful data extraction from live imaging data [48].

Yet tracking is not always necessary. Segmentation alone can detect events within video data and yield valuable biological insights (e.g., [58]). While most DL-based segmentation methods for video microscopy are supervised, which requires the creation of a manually labeled training dataset, self-supervised methods also exist. One notable example is Time Arrow Prediction [59], designed to detect time-asymmetric biological processes such as cell division from microscopy videos.

In some conditions, tracking is insufficient, for instance, when studying changes within an object over time. One solution is to use an analysis window strategy, which divides the object into distinct areas for individual assessment [60]. However, this method faces challenges when the tracked objects undergo large deformations during the video (such as shape changes during cell migration) [61]. In this case, nonlinear image registration can be used to align the object outline and interior in each frame, facilitating the spatiotemporal analysis of processes within the object [61].

Reducing the complexity of live imaging data via projections
Quantitative analysis of multi-dimensional live imaging datasets can be complex. It can be greatly simplified by reducing the video dimensions using projections (such as time projections, Figure 4b) or creating spatiotemporal maps (such as kymographs, Figure 4b), which capture dynamic changes in single images. DL algorithms such as the 4SM model and KymoButler can automate the creation and analysis of spatiotemporal maps in large datasets [62,63]. Projections can also be applied to complex datasets, such as light-sheet movies of cancer cells migrating in 3D. For instance, u-Unwrap3D can remap arbitrarily complex 3D cell surfaces into equivalent lower-dimensional representations. This surface-guided projection strategy allows the tracking of segmented surface motifs and associated fluorescent signals in 2D [64].

Figure 4. Extracting temporal information from live imaging data. (a) Widefield fluorescence microscopy was used to image breast cancer cells expressing a GFP-tagged ERK-reporter (dataset described in Ref. [48]). The cytoplasm was segmented using a custom CellPose model [83], and cell movements were tracked with CellPose in TrackMate [48]. Changes in cell area over time were plotted using PlotTwist [70]. Scale bar: 50 µm. (b) Lifeact-RFP-expressing cancer cells were recorded using a spinning disk confocal microscope. Dynamic changes are visualized in a single image using a time projection (purple to white) and a kymograph along a defined line. Scale bar: 50 µm. (c) Cancer cell spheroids were imaged at low resolution using an incubator microscope. After segmentation and tracking, the phenotypic state classification of the spheroids, as well as the visualization of the phenotypic space, was enabled by a data-driven time-series analysis focusing on cell shape, size, and movement (figure panel adapted from Ref. [66]; only the font size and image sizes were changed with respect to the original figure). (d) Self-driving microscopy provides real-time feedback during image acquisition. Analyzed on the fly, the acquired data enable adjusting microscope settings and acquisition parameters, optimizing data collection.

Time series analysis
Once numbers are extracted from the video, additional steps often come into play for meaningful analysis and comparison, especially when a simple time series average is insufficient. For instance, time series normalization becomes crucial when following intensity changes over time in single cells.
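As a minimal illustration of such normalization, the sketch below rescales each single-cell intensity trace to its own baseline (F/F0); treating the first frames as the baseline is an assumption that must match the experimental design.

```python
import numpy as np

def normalize_traces(traces, baseline_frames=10):
    """Normalize single-cell intensity traces to their own baseline.

    traces: array of shape (n_cells, n_timepoints).
    Each trace is divided by its mean intensity over the first
    'baseline_frames' frames, yielding F/F0 so that cells with different
    absolute brightness or labeling density become comparable.
    """
    traces = np.asarray(traces, dtype=np.float32)
    f0 = traces[:, :baseline_frames].mean(axis=1, keepdims=True)
    return traces / f0
```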
As another example, Granger-causal inference can be used to compare time series and infer cause–effect relations between fluctuating protein intensity recordings [65]. When dealing with high-dimensionality data, clustering, principal component, and t-SNE analyses can significantly assist in the unbiased discovery of rare phenotypes (Figure 4c, [66–68]). Recent advancements include tools like CellPhe and Traject3D, designed to automate cell phenotyping across different imaging modalities [66,67]. In this context, DL algorithms can potentially enhance time series analysis even further [69].

When analyzing time series, online tools like PlotTwist [70] offer a user-friendly platform for straightforward needs. Multiple Python and R toolboxes, such as sktime [71], are available for more complex analyses. These packages provide a wide range of methods for time series analysis. Regardless of the chosen approach, quality control is fundamental for time series analysis to ensure result reproducibility, which often relies on standardized procedures combined with batch correction [72].
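As an illustration of the phenotype-discovery step mentioned above, the hedged sketch below embeds per-track feature vectors with t-SNE and clusters them using scikit-learn; the feature table, cluster count, and perplexity are placeholder assumptions, and the random data merely stands in for measurements produced by a tracking pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# 'features' is a (n_tracks, n_features) table of per-track measurements
# (e.g., mean speed, area, circularity) computed upstream; random data
# stands in for real measurements here.
features = np.random.rand(500, 12)

scaled = StandardScaler().fit_transform(features)  # zero-mean, unit variance
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(scaled)  # 2D phenotype map

# Cluster in feature space to flag candidate phenotypes, including rare ones.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scaled)
```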
Self-driving microscopy
By combining on-the-fly image analysis with automated microscope control, self-driving microscopy software is revolutionizing how we perform live cell imaging experiments [73–76]. For instance, it allows for effortless transitions from low- to high-magnification imaging during time-lapse acquisition [74] or modifying imaging rates on the fly, capturing biological events in remarkable detail [76]. The technology also enables modality switching, such as brightfield to fluorescence or widefield to super-resolution [73,74], presenting unmatched adaptability in live cell imaging. Another significant feature is the ability to control optogenetic stimulation autonomously [75]. The focus on user-defined pertinent events mitigates phototoxicity and photobleaching, safeguarding sample health while optimizing efficiency by reducing unnecessary imaging. Furthermore, it can capture elusive, transient events that could be overlooked by traditional methods, thereby heightening the efficacy of live cell imaging experiments (Figure 4d).

As a burgeoning field, self-driving microscopy holds considerable potential, particularly when coupled with DL's capacity to leverage imaging data and our ability to execute complex computations in real time. The cornerstone of self-driving microscopes lies in open-source microscopy control software, which enables adaptive control schemes and event detection. Pioneering platforms such as Micro-Manager [77–79], Pycro-Manager [80], or AutoScanJ [81] are at the forefront, driving these technological advancements and redefining the landscape of live imaging.

Choosing an image analysis tool
In the rapidly evolving landscape of image analysis tools [82], the choice of approach is strongly influenced by the specific sample being imaged. With a myriad of DL networks, models, and software available, there is no universally optimal tool; instead, the selection depends on the sample imaged, the type of data collected, and the data that need to be extracted from the video. Tool selection is also influenced by the user's familiarity with different interfaces and proficiency in coding languages. Training DL models generally demands significant computational resources and often necessitates coding and computational proficiency. Several tools, such as ZeroCostDL4Mic, Cellpose 2.0, or DeepCell Kiosk, have made DL training and deployment for bioimage analysis more accessible [19,83–85]. In addition, ongoing initiatives facilitate sharing and re-using trained DL models by creating model zoos [83,84,86–89].

While DL approaches generally outperform traditional image processing techniques, it is essential to remember that the latter may be more appropriate or faster to implement. When using DL, users should craft their training dataset carefully and, in particular, ensure that their sample heterogeneity is well represented in the training dataset. We also recommend that users take the time to carefully and quantitatively validate their image analysis pipeline. Additionally, DL models should also be carefully validated, and their use (including the training datasets) should be reported appropriately in publications (see Refs. [9,90]).

Future perspectives
The last few years have seen an explosion in image analysis software, greatly empowering live cell imaging acquisition and analysis. However, tools specifically designed for video analysis, which capitalize on the temporal coherence of live microscopy datasets, have been comparatively scarce. We expect the future will bring software that fully harnesses the dynamic dimension of microscopy videos. We are especially excited about ongoing developments, including the rise of large segmentation models, such as Segment-Anything [91] and Track-Anything [92], which will facilitate the analysis of microscopy videos. In addition, large language models, such as ChatGPT or GitHub Copilot, are reshaping how we develop image analysis pipelines. An exciting development in this context is using natural language to control image analysis software directly, as demonstrated by the Napari plugin Omega [13,93]. These technological strides hint at a not-too-distant future where integrating these tools with self-driving microscopy software will create more interactive and user-friendly self-driving microscopes.
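To make the self-driving microscopy concepts discussed above concrete, the sketch below outlines an event-aware time-lapse using Pycro-Manager [80]. It is a minimal sketch, not a working adaptive pipeline: it assumes a running Micro-Manager instance, and the output directory, acquisition name, frame count, and the simple intensity threshold standing in for a DL event detector are all placeholder assumptions.

```python
import numpy as np
from pycromanager import Acquisition, multi_d_acquisition_events

def image_process_fn(image, metadata):
    """Runs on every acquired frame during acquisition; a trained DL model
    could replace this naive intensity check to detect events of interest
    and trigger changes in the acquisition strategy."""
    if np.mean(image) > 1000:  # placeholder event criterion
        print('Candidate event detected at', metadata.get('Axes'))
    return image, metadata  # pass the frame through to be saved

# Time-lapse of 60 frames, one every 5 s; analysis runs while acquiring.
events = multi_d_acquisition_events(num_time_points=60, time_interval_s=5)

with Acquisition(directory='demo_data', name='adaptive_run',
                 image_process_fn=image_process_fn) as acq:
    acq.acquire(events)
```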
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability
Data will be made available on request.

Acknowledgments
This study was supported by the Academy of Finland (338537 to G.J.), the Sigrid Juselius Foundation (to G.J.), the Cancer Society of Finland (Syöpäjärjestöt; to G.J.), and the Solutions for Health strategic funding to Åbo Akademi University (to G.J.). E.G.M. and R.H. are supported by the Gulbenkian Foundation (Fundação Calouste Gulbenkian), the European Molecular Biology Organization Installation Grant (EMBO-2020-IG4734 granted to R.H.) and Postdoctoral Fellowship (EMBO ALTF 174-2022 granted to E.G.M.), and the European Commission through the Horizon Europe program (AI4LIFE project, grant agreement 101057970-AI4LIFE to R.H.). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. R.H. also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement number 101001332 to R.H.) and the Chan Zuckerberg Initiative Visual Proteomics Grant (vpi-0000000044). This research was supported by the InFLAMES Flagship Programme of the Academy of Finland (decision number: 337531).

References
Papers of particular interest, published within the period of review, have been highlighted as:
* of special interest
** of outstanding interest

1. Icha J, Weber M, Waters JC, Norden C: Phototoxicity in live fluorescence microscopy, and how to avoid it. Bioessays 2017, 39:1700003.

2. Schmidt R, Weihs T, Wurm CA, Jansen I, Rehman J, Sahl SJ, Hell SW: MINFLUX nanometer-scale 3D imaging and microsecond-range tracking on a common fluorescence microscope. Nat Commun 2021, 12:1478.

3. Castello M, Tortarolo G, Buttafava M, Deguchi T, Villa F, Koho S, Pesce L, Oneto M, Pelicci S, Lanzanò L, et al.: A robust and versatile platform for image scanning microscopy enabling super-resolution FLIM. Nat Methods 2019, 16:175–178.

4. Zhao Y, Zhang M, Zhang W, Zhou Y, Chen L, Liu Q, Wang P, Chen R, Duan X, Chen F, et al.: Isotropic super-resolution light-sheet microscopy of dynamic intracellular structures at subsecond timescales. Nat Methods 2022, 19:359–369.

5. Daetwyler S, Fiolka RP: Light-sheets and smart microscopy, an exciting future is dawning. Commun Biol 2023, 6:1–11.

6. Wu Y, Han X, Su Y, Glidewell M, Daniels JS, Liu J, Sengupta T, Rey-Suarez I, Fischer R, Patel A, et al.: Multiview confocal super-resolution microscopy. Nature 2021, 600:279–284.

7. Belthangady C, Royer LA: Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat Methods 2019, 16:1215–1225.

8. Melanthota SK, Gopal D, Chakrabarti S, Kashyap AA, Radhakrishnan R, Mazumder N: Deep learning-based image processing in optical microscopy. Biophys Rev 2022, 14:463–481.

9. * Laine RF, Arganda-Carreras I, Henriques R, Jacquemet G: Avoiding a replication crisis in deep-learning-based bioimage analysis. Nat Methods 2021, 18:1136–1144.
This article provides an overview of the application of deep learning in bioimaging data analysis, emphasizing the importance of validating results and choosing the right tool. The authors also share best practices for implementing and reporting on the use of deep learning image analysis tools in publications, ensuring reproducibility and appropriate use of this transformative technology. The article also addresses concerns about the transparency and generalizability of deep learning, offering solutions to alleviate these issues.

10. Pylvänäinen JW, Laine RF, Saraiva BMS, Ghimire S, Follain G, Henriques R, Jacquemet G: Fast4DReg – fast registration of 4D microscopy datasets. J Cell Sci 2023, 136:jcs260728.

11. Parslow A, Cardona A, Bryson-Richardson RJ: Sample drift correction following 4D confocal time-lapse imaging. J Vis Exp 2014, https://doi.org/10.3791/51086.

12. Miura K: Bleach correction ImageJ plugin for compensating the photobleaching of time-lapse sequences. 2020, https://doi.org/10.12688/f1000research.27171.1.

13. Sofroniew N, Lambert T, Evans K, Nunez-Iglesias J, Bokota G, Winston P, Peña-Castellanos G, Yamauchi K, Bussonnier M, Doncila Pop D, et al.: napari: a multi-dimensional image viewer for Python. 2022, https://doi.org/10.5281/zenodo.7276432.

14. Xu YKT, Graves AR, Coste GI, Huganir RL, Bergles DE, Charles AS, Sulam J: Cross-modality supervised image restoration enables nanoscale tracking of synaptic plasticity in living mice. Nat Methods 2023, https://doi.org/10.1038/s41592-023-01871-6.

15. * Li X, Zhang G, Wu J, Zhang Y, Zhao Z, Lin X, Qiao H, Xie H, Wang H, Fang L, et al.: Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising. Nat Methods 2021, 18:1395–1400.
This article presents DeepCAD, a self-supervised DL approach for denoising live imaging data. The authors demonstrate that DeepCAD can significantly improve in vivo calcium imaging and facilitates neuron extraction and spike inference. The PyTorch implementation of DeepCAD, the Fiji plugin, and pretrained models are publicly available. This article is particularly relevant for researchers interested in using DL for denoising microscopy videos.

16. Laine RF, Jacquemet G, Krull A: Imaging in focus: an introduction to denoising bioimages in the era of deep learning. Int J Biochem Cell Biol 2021, 140:106077.

17. Weigert M, Schmidt U, Boothe T, Müller A, Dibrov A, Jain A, Wilhelm B, Schmidt D, Broaddus C, Culley S, et al.: Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat Methods 2018, 15:1090–1097.

18. * Chen J, Sasaki H, Lai H, Su Y, Liu J, Wu Y, Zhovmer A, Combs CA, Rey-Suarez I, Chang H-Y, et al.: Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat Methods 2021, 18:678–687.
This article presents the development of three-dimensional residual channel attention networks (3D RCAN) designed to denoise and restore volumetric fluorescence microscopy data. This article is a must-read for researchers interested in using deep learning for live cell imaging. It introduces a tool that can significantly enhance the quality of microscopy images, making it easier to analyze and interpret data from live cell imaging studies. The authors' provision of the code and datasets also means that readers can directly apply the tool in their research.
19. ** von Chamier L, Laine RF, Jukkala J, Spahn C, Krentzel D, Nehme E, Lerche M, Hernández-Pérez S, Mattila PK, Karinou E: Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat Commun 2021, 12:2276.
This article presents ZeroCostDL4Mic, a cloud-based platform that makes deep learning more accessible for microscopy image analysis. The platform uses Jupyter Notebooks and Google Colaboratory, requiring only a web browser and a Google account. It offers a range of deep learning tasks, including image segmentation, object detection, image denoising, restoration, super-resolution microscopy, and image-to-image translations. The platform is designed to be user-friendly, requiring no prior coding knowledge, and includes comprehensive performance evaluation for trained models.

20. Spahn C, Gómez-de-Mariscal E, Laine RF, Pereira PM, von Chamier L, Conduit M, Pinho MG, Jacquemet G, Holden S, Heilemann M: DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun Biol 2022, 5:688.

21. * Fang L, Monroe F, Novak SW, Kirk L, Schiavon CR, Yu SB, Zhang T, Wu M, Kastner K, Latif AA, et al.: Deep learning-based point-scanning super-resolution imaging. Nat Methods 2021, 18:406–416.
This article presents the Point-Scanning Super-Resolution (PSSR) model, a deep-learning approach to enhance the resolution of point-scanning imaging techniques. PSSR super-pixelates images and predicts the missing details, enabling high-resolution time-lapse imaging of subcellular events. A key aspect of this work is the development of a 'crappifier', a tool that computationally degrades high signal-to-noise ratio (SNR) ground truth images to simulate low SNR conditions. This approach greatly simplifies the generation of a paired supervised dataset by artificially degrading high-quality training data. The article also includes access to PSSR source code and documentation. This work significantly contributes to live cell imaging, providing a practical tool for researchers to enhance their imaging studies using DL.

22. * Wagner N, Beuttenmueller F, Norlin N, Gierten J, Boffi JC, Wittbrodt J, Weigert M, Hufnagel L, Prevedel R, Kreshuk A: Deep learning-enhanced light-field imaging with continuous validation. Nat Methods 2021, 18:557–563.
This article describes a DL-based algorithm that significantly improves the reconstruction of light-field microscopy data. A key aspect of the work is that the algorithm is continuously validated on-the-fly using concurrently acquired light-sheet microscopy data, which provides ground truth data for training, validation, and refinement of the algorithm. The datasets and neural network code used in the study are openly available, making this a valuable resource for researchers interested in using deep learning to study live cell imaging.

23. Krull A, Buchholz T-O, Jug F: Noise2Void - learning denoising from single noisy images. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019:2124–2132.

24. Broaddus C, Krull A, Weigert M, Schmidt U, Myers G: Removing structured noise with self-supervised blind-spot networks. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); 2020:159–163.
25. * Li Y, Su Y, Guo M, Han X, Liu J, Vishwasrao HD, Li X, Christensen R, Sengupta T, Moyle MW, et al.: Incorporating the image formation process into deep learning improves network performance. Nat Methods 2022, 19:1427–1437.
The article introduces the Richardson–Lucy Network (RLN), a novel approach that combines traditional Richardson–Lucy deconvolution with DL for improved deconvolution. RLN is particularly beneficial for 3D and 4D data, offering fewer artifacts, better generalizability, and requiring less computing time than alternative methods. RLN addresses the challenges of parameter tuning and generalizability associated with deep learning methods. It achieves this by combining the interpretability of traditional model-based algorithms with the powerful learning ability of DL networks.

26. ** Qiao C, Li D, Liu Y, Zhang S, Liu K, Liu C, Guo Y, Jiang T, Fang C, Li N, et al.: Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes. Nat Biotechnol 2023, 41:367–377.
This article presents a novel approach to improving image reconstruction in microscopy through DL. The authors have developed a rationalized deep learning (rDL) model for structured illumination microscopy and lattice light-sheet microscopy that incorporates prior knowledge of illumination patterns, significantly enhancing the quality of the reconstructed images. For researchers interested in using deep learning to study live cell imaging, this article provides valuable insights into the potential of DL models in enhancing image reconstruction.

27. Solak AC, Royer LA, Janhangeer A-R, Kobayashi H: royerlab/aydin: v0.115. 2022, https://doi.org/10.5281/zenodo.7222198.

28. Jacquemet G, Carisey AF, Hamidi H, Henriques R, Leterrier C: The cell biologist's guide to super-resolution microscopy. J Cell Sci 2020, 133.

29. * Laine RF, Heil HS, Coelho S, Nixon-Abell J, Jimenez A, Galgani T, Stubb A, Follain G, Webster S, Goyette J: High-fidelity 3D live-cell nanoscopy through data-driven enhanced super-resolution radial fluctuation. bioRxiv 2022, https://doi.org/10.1101/2022.04.07.487490.
The article presents eSRRF (enhanced-SRRF), a software that optimizes the processing of fluctuation-based super-resolution microscopy datasets. To enhance the quality of the reconstructions, eSRRF employs automated, data-driven parameter optimizations. The authors showcase the superior fidelity of images reconstructed with eSRRF, underscoring its versatility and user-friendly application across various microscopy techniques and biological systems. Furthermore, the authors have expanded eSRRF's capabilities to 3D super-resolution microscopy by integrating it with multi-focus microscopy, thereby achieving volumetric super-resolution imaging of live cells at an impressive acquisition speed of approximately one volume per second.

30. Zhao W, Zhao S, Han Z, Ding X, Hu G, Qu L, Huang Y, Wang X, Mao H, Jiu Y, et al.: Enhanced detection of fluorescence fluctuations for high-throughput super-resolution imaging. Nat Photonics 2023, https://doi.org/10.1038/s41566-023-01234-9.

31. Mo Y, Wang K, Li L, Xing S, Ye S, Wen J, Duan X, Luo Z, Gou W, Chen T, et al.: Quantitative structured illumination microscopy via a physical model-based background filtering algorithm reveals actin dynamics. Nat Commun 2023, 14:3089.
32. * Chen R, Tang X, Zhao Y, Shen Z, Zhang M, Shen Y, Li T, Chung CHY, Zhang L, Wang J, et al.: Single-frame deep-learning super-resolution microscopy for intracellular dynamics imaging. Nat Commun 2023, 14:2854.
The article introduces a DL-based method to enhance the resolution of microscopy images. The authors present a super-resolution network capable of transforming a single diffraction-limited image into a super-resolution image, achieving up to a tenfold improvement in resolution. This innovative approach facilitates high-precision live-cell imaging, achieving spatiotemporal resolutions of 30 nm and 10 ms. This advancement allows for extended observation of intricate subcellular dynamics, including the interactions between mitochondria and the endoplasmic reticulum, vesicle transport along microtubules, and endosome fusion and fission processes. This work underscores the transformative potential of deep learning in live cell imaging.

33. Qiao C, Li D, Guo Y, Liu C, Jiang T, Dai Q, Li D: Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat Methods 2021, 18:194–202.

34. Xypakis E, Gosti G, Giordani T, Santagati R, Ruocco G, Leonetti M: Deep learning for blind structured illumination microscopy. Sci Rep 2022, 12:8623.

35. Jin L, Liu B, Zhao F, Hahn S, Dong B, Song R, Elston TC, Xu Y, Hahn KM: Deep learning enables structured illumination microscopy with low light levels and enhanced speed. Nat Commun 2020, 11:1934.

36. Fu S, Shi W, Luo T, He Y, Zhou L, Yang J, Yang Z, Liu J, Liu X, Guo Z, et al.: Field-dependent deep learning enables high-throughput whole-cell 3D super-resolution imaging. Nat Methods 2023, 20:459–468.

37. Speiser A, Müller L-R, Hoess P, Matti U, Obara CJ, Legant WR, Kreshuk A, Macke JH, Ries J, Turaga SC: Deep learning enables fast and dense single-molecule localization with high accuracy. Nat Methods 2021, 18:1082–1090.

38. Saguy A, Alalouf O, Opatovski N, Jang S, Heilemann M, Shechtman Y: DBlink: dynamic localization microscopy in super spatiotemporal resolution via deep learning. 2022, https://doi.org/10.1101/2022.07.01.498428.

39. * Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Chubb JR, Manor U, Vilar R, Laine RF: Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. 2021, https://doi.org/10.1101/2021.11.02.466664.
The article introduces the CAFI framework, composed of two DL models: Zooming SlowMo (ZS) and Depth-Aware Video Frame Interpolation (DAIN). CAFI can accurately predict images between image pairs, thereby enhancing the temporal resolution of microscopy videos. The authors demonstrate that CAFI predictions can understand the motion context of biological structures, outperforming standard interpolation methods on six different datasets. The authors make both DAIN and ZS, as well as the training and testing data, available for use by the wider community via the ZeroCostDL4Mic platform.

40. Christiansen EM, Yang SJ, Ando DM, Javaherian A, Skibinski G, Lipnick S, Mount E, O'Neil A, Shah K, Lee AK, et al.: In silico labeling: predicting fluorescent labels in unlabeled images. Cell 2018, 173:792–803.e19.

41. Ounkomol C, Seshamani S, Maleckar MM, Collman F, Johnson GR: Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat Methods 2018, 15:917–920.
42. Gu S, Lee RM, Benson Z, Ling C, Vitolo MI, Martin SS, Chalfoun J, Losert W: Label-free cell tracking enables collective motion phenotyping in epithelial monolayers. iScience 2022, 25:104678.

43. ** Jo Y, Cho H, Park WS, Kim G, Ryu D, Kim YS, Lee M, Park S, Lee MJ, Joo H, et al.: Label-free multiplexed microtomography of endogenous subcellular dynamics using generalizable deep learning. Nat Cell Biol 2021, 23:1329–1337.
This article presents a DL approach, RI2FL (Refractive Index to Fluorescence), that predicts fluorescence labels based on label-free refractive index measurements (digital holographic imaging). This approach can potentially be used across various cell types without retraining, making it a versatile tool for researchers interested in live cell imaging. The authors have made the RI2FL datasets available on GitHub, along with the source code, example datasets, and step-by-step interactive tutorials.

44. Chen X, Kandel ME, He S, Hu C, Lee YJ, Sullivan K, Tracy G, Chung HJ, Kong HJ, Anastasio M, et al.: Artificial confocal microscopy for deep label-free imaging. Nat Photonics 2023, 17:250–258.

45. Qiao C, Li D: BioSR: a biological image dataset for super-resolution microscopy. 2022, https://doi.org/10.6084/m9.figshare.13264793.v8.

46. Isola P, Zhu J-Y, Zhou T, Efros AA: Image-to-image translation with conditional adversarial networks. arXiv:1611.07004 [cs], 2018.

47. Lucas AM, Ryder PV, Li B, Cimini BA, Eliceiri KW, Carpenter AE: Open-source deep-learning software for bioimage segmentation. Mol Biol Cell 2021, 32:823–829.

48. ** Ershov D, Phan M-S, Pylvänäinen JW, Rigaud SU, Le Blanc L, Charles-Orszag A, Conway JRW, Laine RF, Roy NH, Bonazzi D, et al.: TrackMate 7: integrating state-of-the-art segmentation algorithms into tracking pipelines. Nat Methods 2022, 19:829–832.
This article presents TrackMate 7, a tool that combines machine learning and DL-based image segmentation with multiple object-tracking algorithms. Notably, TrackMate 7 includes a batcher for processing multiple datasets and a helper module that assists users in identifying the best possible tracking parameters using ground truth data. TrackMate 7 is open-source and can be accessed directly in the Fiji software.

49. * Padovani F, Mairhörmann B, Falter-Braun P, Lengefeld J, Schmoller KM: Segmentation, tracking and cell cycle analysis of live-cell imaging data with Cell-ACDC. BMC Biol 2022, 20:174.
This article introduces Cell-ACDC, an open-source, user-friendly GUI-based framework for segmentation, tracking, and cell cycle annotations. Cell-ACDC incorporates state-of-the-art deep learning models for single-cell segmentation of mammalian and yeast cells alongside cell tracking methods. Notably, it now includes the state-of-the-art tracking algorithm TAPIR, demonstrating its rapid development and adaptability. The tool also offers an intuitive, semi-automated workflow for cell cycle annotation of single cells. The open-source and modularized nature of Cell-ACDC allows for easy integration of new deep learning-based and traditional methods for cell segmentation, tracking, and downstream image analysis.

50. Ulicna K, Vallardi G, Charras G, Lowe AR: Automated deep lineage tree analysis using a Bayesian single cell tracking approach. Front Comput Sci 2021, 3.

51. ** Sugawara K, Çevrim Ç, Averof M: Tracking cell lineages in 3D by incremental deep learning. Elife 2022, 11:e69380.
This article presents ELEPHANT, an interactive platform for 3D cell tracking that uses an incremental approach to DL. ELEPHANT integrates cell track annotation, deep learning, prediction, and proofreading, allowing users to start from a few annotated nuclei and improve tracking performance rapidly through successive prediction–validation cycles. ELEPHANT has been tested against state-of-the-art methods and has proven to yield accurate, fully-validated cell lineages with a modest investment in time and effort. ELEPHANT provides a user-friendly interface that requires only a few annotations to start, making it accessible even to non-experts. The incremental learning approach allows for rapid improvements in tracking performance, which can be particularly beneficial in studies involving large-scale tracking or long-term imaging.

52. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, et al.: Fiji: an open-source platform for biological-image analysis. Nat Methods 2012, 9:676–682.

53. * Pineda J, Midtvedt B, Bachimanchi H, Noé S, Midtvedt D, Volpe G, Manzo C: Geometric deep learning reveals the spatiotemporal features of microscopic motion. Nat Mach Intell 2023, 5:71–82.
This article presents a geometric DL framework for automated trajectory linking and dynamical property estimation in biological experiments. The framework, named MAGIK (Motion Analysis through GNN Inductive Knowledge), can handle complex biological scenarios, such as high object density, fusion or splitting events, random and heterogeneous motion, and shape-changing objects. It uses graph neural networks (GNNs) to model the system's motion and interactions, providing accurate estimation of dynamical properties from time-lapse microscopy. For researchers interested in using deep learning to study live cell imaging, this article offers a novel perspective on tackling tracking and motion characterization using geometric DL.

54. Lefebvre AEYT, Ma D, Kessenbrock K, Lawson DA, Digman MA: Automated segmentation and tracking of mitochondria in live-cell time-lapse images. Nat Methods 2021, 18:1091–1102.

55. Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, et al.: The cell tracking challenge: 10 years of objective benchmarking. Nat Methods 2023, https://doi.org/10.1038/s41592-023-01879-y.

56. Ulman V, Maška M, Magnusson KEG, Ronneberger O, Haubold C, Harder N, Matula P, Matula P, Svoboda D, Radojevic M, et al.: An objective comparison of cell-tracking algorithms. Nat Methods 2017, 14:1141–1152.

57. Chenouard N, Smal I, de Chaumont F, Maška M, Sbalzarini IF, Gong Y, Cardinale J, Carthel C, Coraluppi S, Winter M, et al.: Objective comparison of particle tracking methods. Nat Methods 2014, 11:281–289.

58. Villars A, Letort G, Valon L, Levayer R: DeXtrusion: automatic recognition of epithelial cell extrusion through machine learning in vivo. Development 2023, 150:dev201747.

59. * Gallusser B, Stieber M, Weigert M: Self-supervised dense representation learning for live-cell microscopy with time arrow prediction. 2023, https://doi.org/10.48550/arXiv.2305.05511.
This article presents a new DL self-supervised method for analyzing live-cell microscopy videos. The method, called Time Arrow Prediction (TAP), predicts the correct order of time-flipped image regions via a single-image feature extractor and a subsequent time arrow prediction head.
The resulting dense representations capture inherently time-asymmetric biological processes such as cell divisions on a pixel level. The authors demonstrate the utility of these representations on several live-cell microscopy datasets for the detection and segmentation of dividing cells, as well as for cell state classification. The method outperforms supervised methods, particularly when only limited ground truth annotations are available, which is commonly the case in practice.

60. Machacek M, Hodgson L, Welch C, Elliott H, Pertz O, Nalbant P, Abell A, Johnson GL, Hahn KM, Danuser G: Coordination of Rho GTPase activities during cell protrusion. Nature 2009, 461:99–103.

61. * Jiang X, Isogai T, Chi J, Danuser G: Fine-grained, nonlinear registration of live cell movies reveals spatiotemporal organization of diffuse molecular processes. PLoS Comput Biol 2022, 18:e1009667.
This article presents a novel method for analyzing local subcellular processes across the entire cell. The authors have developed a technique that aligns microscopy time-lapse sequences for every frame, allowing for a detailed spatiotemporal analysis of molecular processes in the whole cell, even if it changes its shape during the movie. The authors also provide Matlab code implementing the proposed registration method. This method particularly benefits those interested in analyzing live cell imaging data. It provides a framework for extracting information to explore functional interactions between cell morphodynamics, protein distributions, and signaling in cells undergoing shape changes.

62. Jakobs MA, Dimitracopoulos A, Franze K: KymoButler, a deep learning software for automated kymograph analysis. Elife 2019, 8:e42288.

63. Kamran SA, Hossain KF, Moghnieh H, Riar S, Bartlett A, Tavakkoli A, Sanders KM, Baker SA: New open-source software for subcellular segmentation and analysis of spatiotemporal fluorescence signals using deep learning. iScience 2022, 25.

64. ** Zhou FY, Weems A, Gihana GM, Chen B, Chang B-J, Driscoll M, Danuser G: Surface-guided computing to analyze subcellular morphology and membrane-associated signals in 3D. 2023, https://doi.org/10.1101/2023.04.12.536640.
This article introduces a new framework called u-Unwrap3D. This tool remaps complex 3D cell surfaces and membrane-associated signals into lower-dimensional representations. Using these lower dimensions makes it much easier to analyze the data. Importantly, the mappings are bidirectional, allowing the application of image processing operations in the data representation best suited for the task. As an example, the authors demonstrate that u-Unwrap3D is particularly useful for tracking segmented surface motifs in 2D, such as quantifying the recruitment of Septin polymers to blebbing events, quantifying actin enrichment in peripheral ruffles, and measuring the speed of ruffle movement along complex cell surfaces. This tool offers a unique approach to analyzing cell biological parameters on unrestricted 3D surface geometries, making it a valuable strategy for studying multi-dimensional videos.

65. * Noh J, Isogai T, Chi J, Bhatt K, Danuser G: Granger-causal inference of the lamellipodial actin regulator hierarchy by live cell imaging without perturbation. Cell Syst 2022, 13:471–487.e8.
This article presents a perturbation-free approach to understanding the regulatory network of lamellipodial actin structures.
The authors introduce the use of Granger-causal inference applied to constitutive image fluctuations, which serve as indicators of actin regulator recruitment and activity. This method identifies distinct zones of actin regulator activation and their causal effects on filament assembly. The approach also helps to distinguish between actin-dependent and actin-independent regulator roles in controlling edge motion. The authors propose that edge motion is driven by assembling two independently operating actin filament systems. This strategy offers a unique, non-invasive way to analyze complex cellular regulatory systems from live cell imaging data. Importantly, it allows studying individual protein functions without needing experimental intervention, overcoming a central challenge in cell biological inquiry.

66. ** Freckmann EC, Sandilands E, Cumming E, Neilson M, Román-Fernández A, Nikolatou K, Nacke M, Lannagan TRM, Hedley A, Strachan D, et al.: Traject3d allows label-free identification of distinct co-occurring phenotypes within 3D culture by live imaging. Nat Commun 2022, 13:5317.
The article presents Traject3d, an innovative image analysis pipeline that facilitates the detection of co-existing heterogeneous phenotypes within 3D cultures. Traject3d can identify distinct subtypes within these heterogeneous populations using label-free, multi-day time-lapse imaging. Unlike traditional methods that rely on static snapshots, Traject3d leverages live imaging to identify alternative phenotypes. This approach enables the unbiased detection of rare phenotypes that may arise through unexpected or low-probability state transitions. For researchers exploring the regulation of cellular heterogeneity and its contribution to biological systems, Traject3d offers a valuable tool, enhancing the depth and accuracy of live cell imaging analysis.

67. * Wiggins L, Lord A, Murphy KL, Lacy SE, O'Toole PJ, Brackenbury WJ, Wilson J: The CellPhe toolkit for cell phenotyping using time-lapse imaging and pattern recognition. Nat Commun 2023, 14:1854.
This article introduces a new toolkit called CellPhe, designed to characterize cellular phenotypes within time-lapse videos. CellPhe uses the output of segmentation and tracking software to provide an extensive list of features that describe changes in the cells' appearance and behavior over time. These features include cell morphology, texture, and dynamics. CellPhe can also recognize and remove erroneous cell boundaries induced by inaccurate segmentation and tracking. The toolkit is particularly useful for live cell imaging as it allows for precise quantification of cell morphology and motility and monitors major cellular events such as mitosis and apoptosis.

68. Dao D, Fraser AN, Hung J, Ljosa V, Singh S, Carpenter AE: CellProfiler Analyst: interactive data exploration, analysis and classification of large biological image sets. Bioinformatics 2016, 32:3210–3212.

69. Schneider S, Lee JH, Mathis MW: Learnable latent embeddings for joint behavioural and neural analysis. Nature 2023, 617:360–368.

70. Goedhart J: PlotTwist: a web app for plotting and annotating continuous data. PLoS Biol 2020, 18:e3000581.

71. Löning M, Bagnall A, Ganesh S, Kazakov V, Lines J, Király FJ: sktime: a unified interface for machine learning with time series. 2019, https://doi.org/10.48550/arXiv.1909.07872.
72. Hu J, Serra-Picamal X, Bakker G-J, Van Troys M, Winograd-Katz S, Ege N, Gong X, Didan Y, Grosheva I, Polansky O, et al.: Multisite assessment of reproducibility in high-content cell migration imaging data. Mol Syst Biol 2023, 19, e11490.

73*. Alvelid J, Damenti M, Sgattoni C, Testa I: Event-triggered STED imaging. Nat Methods 2022, 19:1268–1275.

This article presents event-triggered STED (etSTED), a novel method that combines fast widefield imaging with high-resolution STED imaging. The method uses a real-time analysis pipeline to detect subcellular events, such as local biosensor signals, protein recruitment, or vesicle trafficking. Upon detection, it swiftly transitions from widefield to STED imaging within a 40 ms window, enabling rapid 2D and 3D STED nanoscopy acquisitions at the event site. A significant advantage of etSTED is its potential to reduce phototoxicity, as it only triggers high-resolution imaging upon event detection, minimizing overall light exposure. This makes etSTED a valuable, cell-friendly tool for researchers studying cellular processes at high spatial resolution.

74*. André O, Kumra Ahnlide J, Norlin N, Swaminathan V, Nordenfelt P: Data-driven microscopy allows for automated context-specific acquisition of high-fidelity image data. Cell Rep Methods 2023, 3, 100419.

The article presents data-driven microscopy (DDM), an image acquisition and analysis framework that enables dynamic switching between imaging magnifications. This unique feature allows DDM to image the entire cell population at a lower resolution for a broad overview and then automatically shift to a higher resolution when a phenotype of interest is detected. DDM's real-time, population-wide object characterization drives this dynamic, context-specific imaging approach, enabling high-fidelity imaging of relevant phenotypes. The authors demonstrate DDM's utility in high-content screening and live adaptive microscopy for cell migration and infection studies, capturing common and rare events with remarkable precision and resolution. By reducing human bias, increasing reproducibility, and contextualizing single-cell characteristics within the broader sample population, DDM enhances overall data fidelity. Thus, DDM is a valuable tool for researchers seeking to capture high-resolution events in live cell imaging.

75*. Fox ZR, Fletcher S, Fraisse A, Aditya C, Sosa-Carrillo S, Petit J, Gilles S, Bertaux F, Ruess J, Batt G: Enabling reactive microscopy with MicroMator. Nat Commun 2022, 13:2199.

The article introduces MicroMator, a software package specifically designed for reactive microscopy experiments. This tool enables real-time adaptations during live cell imaging experiments, such as tracking moving objects and adjusting the microscope accordingly. MicroMator's standout feature is its ability to structure microscopy experiments around a primary image acquisition loop as the experiment's backbone, supplemented by event creation functions for reactivity. These events, composed of Triggers and Effects, can perform various tasks, including modifying microscope configurations, illuminating specific patterns in the field of view, operating a microfluidic pump, initiating optimization routines, and even sending alerts via instant messaging platforms such as Discord.

76*. Mahecic D, Stepp WL, Zhang C, Griffié J, Weigert M, Manley S: Event-driven acquisition for content-enriched microscopy. Nat Methods 2022, 19:1262–1267.

The article presents an innovative approach to super-resolution imaging, leveraging DL to identify specific biological events. This DL-based recognition triggers a transition between slow and fast super-resolution imaging, enriching the data capture of significant events with enhanced spatiotemporal resolution. The method is particularly pertinent for live cell imaging, as it substantially improves the quality and efficiency of imaging and enables the extraction of more intricate and meaningful data from biological events. Conventional methods often necessitate continuous high-speed imaging, which can induce phototoxicity and rapid photobleaching, limiting the imaging duration and potentially harming the cells. This approach mitigates these limitations by employing high-speed imaging only when required, thereby reducing phototoxicity and prolonging the imaging duration.
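Several of the microscope control layers referenced below (refs. 77–81) expose Python or scripting interfaces on which such event-driven schemes can be prototyped. As a minimal sketch, the following uses Pycro-Manager's published Acquisition API [80] together with a hypothetical brightness threshold as the "event" detector; a real deployment would substitute a meaningful detector (e.g., a DL classifier) and hardware-specific reactions:

```python
# Minimal reactive-acquisition sketch using Pycro-Manager (ref. 80).
# Requires a running Micro-Manager instance; the brightness-based "event
# detector" is a hypothetical placeholder for a real (e.g., DL-based) trigger.
import numpy as np
from pycromanager import Acquisition, multi_d_acquisition_events

EVENT_THRESHOLD = 5000  # assumed mean-intensity threshold marking an "event"

def image_process_fn(image, metadata):
    # Called for every acquired frame; here we only flag candidate events.
    if np.mean(image) > EVENT_THRESHOLD:
        print("Event detected at time point", metadata.get("Axes", {}).get("time"))
        # A real pipeline could react here, e.g., switch to faster imaging
        # or a higher magnification around the event site.
    return image, metadata

with Acquisition(directory="acq_data", name="reactive_timelapse",
                 image_process_fn=image_process_fn) as acq:
    # Slow baseline time lapse: 120 frames, one every 10 s.
    events = multi_d_acquisition_events(num_time_points=120, time_interval_s=10)
    acq.acquire(events)
```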
77. Edelstein AD, Tsuchida MA, Amodaj N, Pinkard H, Vale RD, Stuurman N: Advanced methods of microscope control using μManager software. J Biol Methods 2014, 1:e10.

78. Pinkard H, Stuurman N, Corbin K, Vale R, Krummel MF: Micro-Magellan: open-source, sample-adaptive, acquisition software for optical microscopy. Nat Methods 2016, 13:807–809.

79. Almada P, Pereira PM, Culley S, Caillol G, Boroni-Rueda F, Dix CL, Charras G, Baum B, Laine RF, Leterrier C, et al.: Automating multimodal microscopy with NanoJ-Fluidics. Nat Commun 2019, 10:1223.

80. Pinkard H, Stuurman N, Ivanov IE, Anthony NM, Ouyang W, Li B, Yang B, Tsuchida MA, Chhun B, Zhang G, et al.: Pycro-Manager: open-source software for customized and reproducible microscope control. Nat Methods 2021, 18:226–228.

81. Tosi S, Lladó A, Bardia L, Rebollo E, Godo A, Stockinger P, Colombelli J: AutoScanJ: a suite of ImageJ scripts for intelligent microscopy. Front Bioinform 2021, 1.

82. Haase R, Fazeli E, Legland D, Doube M, Culley S, Belevich I, Jokitalo E, Schorb M, Klemm A, Tischer C: A Hitchhiker's guide through the bio-image analysis software universe. FEBS Lett 2022, 596:2472–2485.

83**. Pachitariu M, Stringer C: Cellpose 2.0: how to train your own model. Nat Methods 2022, 19:1634–1641.

This article introduces Cellpose 2.0, an upgrade to the original Cellpose, a tool designed for cell segmentation in biological images. The new version improves upon the original by offering pretrained models that can be easily fine-tuned using a human-in-the-loop training pipeline. Cellpose 2.0 allows the creation of custom DL segmentation models with very little new training data, enabling the generation of state-of-the-art models in 1–2 h. For researchers interested in using DL for live cell imaging, this article provides valuable insights into training your own model with Cellpose 2.0. The human-in-the-loop approach discussed in the article can significantly reduce the time and effort required for model training, making it a highly recommended read for those venturing into this field.
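Beyond the graphical human-in-the-loop workflow, Cellpose also exposes a Python API. The sketch below is a minimal example of segmenting each frame of a time-lapse with a pretrained model; the file name and diameter are chosen for illustration, and exact arguments can vary between Cellpose versions:

```python
# Minimal per-frame segmentation sketch with the Cellpose Python API (ref. 83).
# File name and diameter are illustrative; arguments may differ across versions.
from cellpose import models, io

model = models.CellposeModel(gpu=False, model_type="cyto2")  # pretrained model

movie = io.imread("timelapse.tif")  # assumed shape: (frames, height, width)
masks_per_frame = []
for frame in movie:
    # channels=[0, 0] treats the image as single-channel grayscale.
    masks, flows, styles = model.eval(frame, diameter=30, channels=[0, 0])
    masks_per_frame.append(masks)  # one labeled segmentation mask per frame
```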
84*. Bannon D, Moen E, Schwartz M, Borba E, Kudo T, Greenwald N, Vijayakumar V, Chang B, Pao E, Osterman E, et al.: DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes. Nat Methods 2021, 18:43–45.

The article presents DeepCell Kiosk, a platform that uses cloud computing to quickly and affordably analyze large imaging datasets with DL. This tool, orchestrated by Kubernetes, an open-source container-management framework, can significantly enhance live cell imaging by scaling up data analysis. The authors demonstrated its efficiency and cost-effectiveness by identifying cell nuclei in megapixel images in a matter of hours for around $250. All data used in the study and the source code are available at deepcell.org and on the Van Valen lab's GitHub page.

85. Belevich I, Jokitalo E: DeepMIB: user-friendly and open-source software for training of deep learning network for biological image segmentation. PLoS Comput Biol 2021, 17, e1008374.

86*. Gómez-de-Mariscal E, García-López-de-Haro C, Ouyang W, Donati L, Lundberg E, Unser M, Muñoz-Barrutia A, Sage D: DeepImageJ: a user-friendly environment to run deep learning models in ImageJ. Nat Methods 2021, 18:1192–1195.

This article presents DeepImageJ, a user-friendly ImageJ plugin for running DL models. DeepImageJ is designed to democratize the use of deep learning in microscopy, making it accessible to researchers without extensive computational expertise. By providing a practical way to run DL models in a familiar environment, it can significantly enhance the analysis of microscopy images.

87*. Ouyang W, Beuttenmueller F, Gómez-de-Mariscal E, Pape C, Burke T, García-López-de-Haro C, Russell C, Moya-Sans L, de-la-Torre-Gutiérrez C, Schmidt D, et al.: BioImage Model Zoo: a community-driven resource for accessible deep learning in bioimage analysis. bioRxiv 2022.

This article introduces the BioImage Model Zoo, a community-driven, open resource where standardized trained DL models can be shared, tested, and downloaded for adaptation or direct deployment in various end-user tools such as ilastik, deepImageJ, QuPath, StarDist, ImJoy, ZeroCostDL4Mic, and CSBDeep. The authors envision the BioImage Model Zoo as a significant step towards making deep learning methods for microscopy imaging findable, accessible, interoperable, and reusable (FAIR) across software tools and platforms.

88. Stirling DR, Swain-Bowden MJ, Lucas AM, Carpenter AE, Cimini BA, Goodman A: CellProfiler 4: improvements in speed, utility and usability. BMC Bioinf 2021, 22:433.

89. Berg S, Kutra D, Kroeger T, Straehle CN, Kausler BX, Haubold C, Schiegg M, Ales J, Beier T, Rudy M, et al.: ilastik: interactive machine learning for (bio)image analysis. Nat Methods 2019, 16:1226–1232.

90. Heil BJ, Hoffman MM, Markowetz F, Lee S-I, Greene CS, Hicks SC: Reproducibility standards for machine learning in the life sciences. Nat Methods 2021, 18:1132–1135.

91. Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, Xiao T, Whitehead S, Berg AC, Lo W-Y, et al.: Segment Anything. 2023, https://doi.org/10.48550/arXiv.2304.02643.

92. Cheng Y, Li L, Xu Y, Li X, Yang Z, Wang W, Yang Y: Segment and Track Anything. 2023, https://doi.org/10.48550/arXiv.2305.06558.
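Segment Anything [91] ships with a Python package whose automatic mask generator can be tried on microscopy frames in a few lines. A minimal sketch, assuming a downloaded ViT-B checkpoint and a frame converted to 8-bit RGB (note that the released model was not trained on microscopy data, so results should be inspected critically):

```python
# Minimal Segment Anything sketch (ref. 91); the checkpoint path is assumed
# to point to a downloaded ViT-B model file, and the frame must be 8-bit RGB.
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from skimage import io, color, util

frame = io.imread("frame_0001.tif")             # hypothetical time-lapse frame
rgb = util.img_as_ubyte(color.gray2rgb(frame))  # SAM expects an HxWx3 uint8 image

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(rgb)
# Each entry holds a binary 'segmentation' mask plus quality scores.
print(f"Found {len(masks)} candidate objects")
```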
93**. Royer LA: Omega – harnessing the power of large language models for bioimage analysis. 2023, https://doi.org/10.5281/zenodo.8240289.

Napari-chatgpt is a tool that fuses the capabilities of OpenAI's large language model ChatGPT with napari, a multidimensional image viewer for Python. This tool, named Omega, is engineered to execute image processing and analysis tasks conversationally, making it user-friendly. Omega is proficient in crafting image processing code, rectifying its own coding errors, conducting subsequent analyses, and managing the napari viewer, and it can guide non-experts through image analysis and processing. It also serves an educational purpose, facilitating learning in image processing and analysis and thereby helping to democratize these complex fields. Omega employs a variety of tools, such as napari viewer control, napari query, napari widget maker, and cell segmentation tools. Additionally, it can conduct web searches and access Wikipedia, enhancing its versatility as an image analysis tool.
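Independently of Omega, napari itself is fully scriptable, which is what allows an agent to drive it. A minimal sketch of viewing a time-lapse with a segmentation overlay (the arrays are synthetic placeholders):

```python
# Minimal napari sketch: viewing a time-lapse with a segmentation overlay.
# Arrays are synthetic placeholders; napari renders the leading axis as time.
import numpy as np
import napari

movie = np.random.random((50, 256, 256))        # (frames, height, width)
labels = np.zeros(movie.shape, dtype=np.uint16) # e.g., per-frame label masks

viewer = napari.Viewer()
viewer.add_image(movie, name="timelapse")
viewer.add_labels(labels, name="segmentation")
napari.run()  # start the event loop when running outside an interactive session
```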