The Rise of Data-Driven Microscopy powered by Machine Learning

Leonor Morgado1, Estibaliz Gómez-de-Mariscal1, Hannah S. Heil1, and Ricardo Henriques1,2

1Instituto Gulbenkian de Ciência, Oeiras, Portugal
2MRC Laboratory for Molecular Cell Biology, University College London, London, United Kingdom

Optical microscopy is an indispensable tool in life sciences research, but conventional techniques require compromises between imaging parameters like speed, resolution, field-of-view, and phototoxicity. To overcome these limitations, data-driven microscopes incorporate feedback loops between data acquisition and analysis. This review provides an overview of how machine learning enables automated image analysis to optimise microscopy in real time. We first introduce key data-driven microscopy concepts and machine learning methods relevant to microscopy image analysis. Subsequently, we highlight pioneering works and recent advances in integrating machine learning into microscopy acquisition workflows, including optimising illumination, switching modalities and acquisition rates, and triggering targeted experiments. We then discuss the remaining challenges and future outlook. Overall, intelligent microscopes that can sense, analyse, and adapt promise to transform optical imaging by opening new experimental possibilities.

data-driven | reactive microscopy | image analysis | machine learning

Correspondence: (H. S. Heil) hsheil@igc.gulbenkian.pt, (R. Henriques) rjhenriques@igc.gulbenkian.pt, r.henriques@ucl.ac.uk

Data-driven microscope: The data-driven microscope integrates advanced computational techniques into its imaging capabilities. It uses machine learning algorithms and real-time data analysis to automatically adjust the acquisition parameters. In this way, it is possible to optimise imaging conditions, enhance image quality and extract meaningful information without heavy reliance on manual intervention.

Introduction

Optical microscopy techniques, such as brightfield, phase contrast, fluorescence, and super-resolution imaging, are widely used in the life sciences to obtain valuable spatiotemporal information for studying cells and model organisms. However, these techniques have limitations with respect to critical parameters such as resolution, acquisition speed, signal-to-noise ratio, field of view, extent of multiplexing, z-depth and phototoxicity. The trade-offs between these critical imaging parameters are often represented as a "pyramid of frustration" (Fig. 1A). Although improved hardware can extend capabilities, the optimal balance depends on the imaging context. In particular, as scientific research delves into more complex questions, seeking to understand the mechanisms of cell and infection biology at the molecular level in a physiological context, traditional static microscopes may not be sufficient to capture relevant dynamics or rare events.

Innovative efforts focus on overcoming these restrictions through integrated automation. Data-driven microscopes employ real-time data analysis to dynamically control and adapt acquisition (Fig. 1B). The core concept involves introducing automated feedback loops between image-data interpretation and microscope parameter tuning. Quantitative metrics extracted via computational analysis then dictate adaptive protocols tailored to the phenomena of interest. The system reacts to predefined observational triggers by optimising imaging
parameters, such as excitation, stage position, and objective lens, to capture critical events efficiently (Fig. 1C).

Image analysis algorithms are pivotal in data-driven methodologies, with customised approaches serving a large variety of situations. These approaches can use machine learning techniques to perform tasks such as classification, segmentation, tracking, and reconstruction without the need for explicit programming. By integrating machine learning, intelligent microscopes can make contextual decisions by identifying subtle features that traditional rule-based software may miss. These data-driven principles thus increase the efficiency of image acquisition and enrich the information content in diverse scenarios, especially in high-throughput and high-content imaging. They make it possible to capture discrete and rare events at different temporal and spatial scales and to relate them to population behaviour. This information cannot be accessed with classical imaging approaches, particularly because doing so would require extended exposure to cell-damaging imaging conditions (1).

In this review, we first introduce the concept of data-driven microscopy and the common methods used to address microscopy challenges. We then explain the principles and frameworks that enable reactive machine learning-based data-driven systems. Finally, we showcase various applications that benefit from the integration of data-driven microscopy to highlight new experimental possibilities.

Fig. 1. Data-driven microscopy workflow. (A) Pyramid of frustration: trade-offs in acquisition parameters are visualised as a pyramid, highlighting the interdependence of signal-to-noise ratio (SNR), sample health, temporal resolution, spatial resolution, and the extent of the field of view, 3D volume and multiplexing (x, y, z, λ dimensions). Enhancing one parameter typically compromises at least one other. (B) Schematic of workflow: Acquisition control software: software-driven control of microscope hardware for image capture. Microscope hardware: imaging devices with a programmatic interface. Custom reactive agent: integration of a custom reactive agent that analyses live acquisition data, providing real-time feedback to the software to adapt the acquisition parameters. (C) Acquisition stages: Observation stage (blue): the sample is continuously monitored using a simple imaging protocol, such as brightfield. This stage is non-invasive and preserves sample health. Reactive stage (magenta): upon detection of a target event (e.g., a cell entering a specific cell-cycle stage), the reactive agent initiates a fluorescence imaging protocol, enabling detailed observation of the event.

Data-Driven Microscopy

Data-driven microscopes can analyse imaging data in real time and execute predefined actions upon specific triggers. These reactive systems feature feedback loops between quantitative image analysis and microscope control, which allows them to tailor data acquisition to objects or phenomena of interest. In event-driven approaches specifically, the trigger can be based on detecting the occurrence of a specific event. By additionally predicting states of interest, even smart or intelligent microscopy approaches can be realised.
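At its core, such a system reduces to an observe-analyse-react loop. The sketch below illustrates this pattern in Python; note that the microscope object and its methods (snap_observation, run_reactive_protocol, session_active) are hypothetical placeholders for whatever control layer a given setup exposes, not a real API.

```python
import time
import numpy as np

def detect_event(image: np.ndarray, threshold: float) -> bool:
    """Placeholder analysis step: a simple mean-intensity trigger.
    In practice, a trained model or custom metric plugs in here."""
    return image.mean() > threshold

def acquisition_loop(microscope, threshold: float = 500.0, period_s: float = 5.0):
    """Observe gently at a low rate; switch to a reactive protocol on a trigger."""
    while microscope.session_active():            # hypothetical control API
        frame = microscope.snap_observation()     # cheap, e.g. brightfield
        if detect_event(frame, threshold):
            microscope.run_reactive_protocol()    # e.g. fast fluorescence burst
        time.sleep(period_s)                      # spare the sample between checks
```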
A recent work showcasing event-triggered protocol switching is that of Oscar André et al. (2), who performed dual-scale imaging of host-pathogen interactions using a co-culture model. The system first continuously scanned multiple fields of view of the sample at low magnification to monitor interactions between fluorescently labelled human cells and bacteria. An integrated algorithm analysed each frame to detect interaction events based on proximity analysis. Upon detecting a target number of cell-bacteria interactions, the system automatically switched to a higher numerical aperture objective and a higher acquisition speed and imaged the identified interactions. This made it possible to capture the cellular actin remodelling induced by the infection at high spatiotemporal resolution. This dual-scale approach balances population-level behavioural monitoring and targeted high-resolution data collection in a highly efficient and high-content manner.

In super-resolution microscopy, data-driven strategies help mitigate the inherent trade-offs between resolution, speed, field of view and phototoxicity during live imaging. A system by Jonatan Alvelid et al. (3) combines fast widefield surveillance with precisely targeted nanoscopy. For instance, cultured neurons expressing genetically encoded calcium indicators were continuously monitored with widefield imaging to detect neuronal activity. Real-time analysis of calcium dynamics allowed the detection of spike events and the localisation of regions of interest. Upon spike detection, the system rapidly positioned and activated stimulated emission depletion (STED) nanoscopy illumination at the identified sites to visualise synaptic vesicle dynamics. By limiting high-intensity light exposure spatially and temporally to the moments when critical events occurred, this selective super-resolution imaging approach reduced the cumulative photon dose by over 75% compared to continuous STED acquisition.

Beyond adjusting microscope hardware, data-driven systems can coordinate external experimental devices by integrating microfluidics control software. An automated live-to-fixed cell imaging platform called NanoJ-Fluidics, developed by Pedro Almada et al. (4), performs buffer exchange directly on the microscope stage. The system uses simple epifluorescence image analysis to detect cell rounding at the onset of mitosis. Upon rounding detection, NanoJ-Fluidics triggers fixation, permeabilisation, and fluorescent labelling through sequential perfusion, preparing the cells for subsequent super-resolution imaging.

Fig. 2. Overview of machine learning concepts for microscopy image analysis. (A) Schematic depicting that deep learning is a subset of machine learning, which is in turn a subset of artificial intelligence. (B) Schematic of the major steps in training and using a machine learning model. First, data is collected, pre-processed and split into training, validation and test datasets. Models are trained on the training dataset, and training is evaluated with the validation set to prevent overfitting. Once trained, quality control of the model is performed using an independent test dataset and, if the result is positive, the model can be used to generate predictions on new, unseen data.

These examples showcase the reliance on traditional image analysis techniques to identify events of interest, which typically involve signal colocalisation, intensity, or shape thresholds.
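To make such a classical trigger concrete, the sketch below flags mitotic cell rounding with a simple shape threshold, similar in spirit to the rounding detection used by NanoJ-Fluidics; the scikit-image pipeline, circularity criterion and cut-off values are illustrative assumptions, not the published implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def rounded_cell_present(image: np.ndarray,
                         circularity_cutoff: float = 0.85,
                         min_area: int = 200) -> bool:
    """Shape-threshold trigger: True if any segmented object is near-circular,
    a simple proxy for mitotic cell rounding. All cut-offs are illustrative."""
    mask = image > threshold_otsu(image)          # crude foreground segmentation
    for region in regionprops(label(mask)):
        if region.area < min_area:
            continue                              # ignore small debris
        # circularity = 4*pi*area / perimeter^2; equals 1.0 for a perfect circle
        circularity = 4 * np.pi * region.area / max(region.perimeter, 1e-6) ** 2
        if circularity > circularity_cutoff:
            return True
    return False
```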
However, the integration of machine learning-based image analysis can elevate reactive data-driven microscopy to a new level by enabling the detection of subtle and complex features that would otherwise go unnoticed.

Machine learning for automated microscopy image analysis

Recent advances in machine learning, particularly deep learning neural networks, have revolutionised automated image analysis for microscopy (5, 6). By training on a sufficient amount of data, machine learning models can achieve or surpass human performance in complex image processing tasks such as cell identification, structure segmentation, motion tracking, and signal or resolution enhancement. Different models excel at different aspects crucial for enhancing microscopy imaging experiments. In this section, we introduce fundamental machine learning concepts and highlight learning strategies well suited to microscopy imaging tasks.

Machine learning involves algorithms that learn patterns from data to make predictions without explicit programming. It falls under the umbrella of artificial intelligence, which aims to imitate intelligent behaviour (Fig. 2A). Through a training process, the algorithms tune the parameters of a specific image processing model to perform one particular task. Machine learning practice therefore requires training data, validation data and test data; the latter two datasets are used to evaluate the performance of the model during and after training, respectively. Upon a positive quality evaluation, the model can be applied to new, unseen data to make predictions (Fig. 2B). In supervised learning, the model is trained on matched input and output examples, such as images and labels, to infer general mapping functions. Unsupervised learning finds intrinsic structures within unlabelled data through techniques like clustering. As a third training category, self-supervised methods work with unlabelled data, as they derive supervisory signals from natural characteristics of the data itself.

A relatively simple but powerful machine learning model is the support vector machine (SVM) (7) (Figure 3). SVMs excel at classification tasks such as identifying cell types in images. An SVM plots each image as a point in a multidimensional feature space and tries to find the optimal dividing hyperplane between classes. New images are classified based on which side of the hyperplane their features fall. SVMs generalise well provided that the features extracted from the classes are descriptive enough to characterise them. In microscopy, SVMs are often used in initial proof-of-concept experiments to classify images into binary categories such as mitotic versus non-mitotic cells. Their simplicity makes SVMs convenient for implementing basic feedback loops, for instance, triggering high-resolution imaging when a specific cell type is detected.
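A minimal sketch of this idea with scikit-learn is shown below; the two handcrafted features, the toy training values and the trigger function are illustrative assumptions rather than the setup of any cited study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy feature vectors per cell: [mean intensity, circularity]
X_train = np.array([[0.30, 0.55], [0.35, 0.60], [0.80, 0.92], [0.75, 0.88]])
y_train = np.array([0, 0, 1, 1])   # 0 = interphase, 1 = mitotic (illustrative)

# Feature scaling matters for SVMs; an RBF kernel allows curved class boundaries
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
classifier.fit(X_train, y_train)

def should_trigger(features: np.ndarray) -> bool:
    """Basic feedback hook: trigger detailed imaging when a mitotic cell is seen."""
    return bool(classifier.predict(features.reshape(1, -1))[0] == 1)
```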
Deep learning models, including convolutional neural networks (CNNs), are the state of the art for complex image processing tasks. CNNs are made up of artificial neurons trained to recognise patterns in image data. One of the most influential CNN architectures is the U-Net (8) (Figure 3), first introduced in 2015. U-Nets have encoder layers that capture hierarchical contextual information and downsample the data, and decoder layers that rebuild this information into a detailed map using information from the encoder path passed through skip connections. Compared to SVMs, U-Nets can handle raw images by automatically extracting a rich feature representation, and they therefore perform better on datasets of increased complexity. In microscopy, U-Nets excel at segmentation tasks such as identifying and delineating different cell types, nuclei or other components in an image. Their ability to recognise complex structures based on a contextual understanding of images makes U-Nets well suited to implementing data-driven microscopy feedback loops. For example, a U-Net could be used to alter the illumination or magnification when specific cellular structures are detected.
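To make the encoder-decoder-plus-skip idea concrete, here is a deliberately tiny PyTorch U-Net with a single downsampling level; it is a sketch of the architectural pattern, not the full published network.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()
        self.enc = conv_block(in_channels, 16)             # encoder: context
        self.down = nn.MaxPool2d(2)                        # downsampling
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # decoder: upsampling
        self.dec = conv_block(32, 16)                      # 32 = 16 skip + 16 up
        self.head = nn.Conv2d(16, n_classes, 1)            # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.enc(x)                                 # kept for the skip path
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)                    # skip connection
        return self.head(self.dec(x))

# A 1-channel 64x64 image in, a 2-class score map of the same size out
logits = TinyUNet()(torch.randn(1, 1, 64, 64))             # shape: (1, 2, 64, 64)
```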
An additional powerful class of machine learning approaches gaining traction in microscopy is the generative adversarial network (GAN) (9) (Figure 3). GANs contain paired generator and discriminator networks trained in an adversarial manner. The generator creates synthetic images that mimic real data, while the discriminator classifies images as real or fake. This competition drives the generator to produce increasingly realistic outputs. In microscopy, GANs are applied for data augmentation, image enhancement, modality translation, and simulation. For instance, GANs can create diverse training data, convert brightfield to fluorescence-like images, or simulate images under different conditions. A major benefit of GANs is that their training can be unsupervised, so no labelled or paired datasets are needed. In smart microscopes, GANs could enable pre-processing loops that improve image quality before analysis and provide an estimate of variability or confidence in the generated results to prioritise tasks. They also hold promise for predicting nanoscale information to guide super-resolution imaging.

Fig. 3. Comparative analysis of machine learning algorithms in data-driven microscopy. Comparison of various machine learning (ML) algorithms employed in data-driven microscopy, delineating their respective advantages, limitations, and applications for image analysis tasks.

Machine learning provides data-driven microscopy with flexibility and empowers faster and more adaptive imaging workflows. Trained models extract relevant information from images, which is then used to optimise data collection by adapting microscope parameters accordingly in real time. This transformative potential has been demonstrated across diverse imaging modalities, as highlighted in the next section.

Applications of machine learning powered reactive microscopy

Artificial intelligence is driving the development of intelligent microscopes that can sense, analyse, and adapt in real time. Recent innovations have demonstrated reactive imaging systems across various modalities, ranging from widefield to super-resolution microscopy. In this section, we discuss key applications of machine-learning-powered reactive microscopy, highlighting the potential of these systems to revolutionise optical imaging.

MicroPilot (10), a software framework for data-driven microscopy, is one of the pioneering works in the field. The system is based on LabView and C for image analysis and can be implemented on various commercial systems. It is also compatible with Micro-Manager (11), an open-source tool widely used to control microscopes. The MicroPilot study provides several examples of how cells can be monitored in low-resolution mode and the images analysed using an SVM trained to classify different mitotic stages. A complex experiment is triggered once a cell in a desired stage is detected; after completion, the imaging returns to scanning mode until the next detection. To demonstrate its capacity, an experiment was conducted to study the potential role of a specific protein in the condensation of mitotic chromosomes. The experiment monitored 3T3 cells and triggered fluorescence recovery after photobleaching (FRAP) acquisitions upon identification of each prophase cell. Half of the nucleus was selectively photobleached, and the signal recovery was monitored. Notably, this set of experiments was completed in just four unattended nights, generating results equivalent to what would have taken an expert user a full month.

Building on this concept, MicroMator (12) was developed as an open-source toolkit for reactive microscopy using Python and Micro-Manager. It includes pre-trained U-Net models to segment yeast and bacterial cells. Researchers applied MicroMator to selectively manipulate targeted cells during live imaging. One noteworthy example involves an optogenetically driven recombination experiment in yeast. In this experiment, genetically modified yeast cells are selected for exposure to light, triggering recombination and the subsequent expression of a protein that arrests growth, together with a fluorescent protein for monitoring purposes. To generate islets of recombined yeast, MicroMator's algorithm individually selects yeast cells at a minimum distance apart, tracks them, and repeatedly triggers light exposure on them, increasing the chances of recombination and, thus, the amount of relevant information in the acquired data.

In addition to enhancing data information density, image quality can be dynamically optimised based on the sample properties. One example of this is the learned adaptive multiphoton illumination (LAMI) (13) approach, which uses a physics-informed machine learning method to estimate the optimal excitation power to maintain a good SNR across depth in multiphoton microscopy. This becomes particularly relevant for non-flat samples with varying scattering regions. Given the surface characteristics of the sample, LAMI selectively adjusts the excitation power where needed, effectively expanding the imaging volume by at least an order of magnitude while minimising potential photodamage. Its effectiveness was also demonstrated by observing immune cell responses to vaccination in a mouse lymph node with live intravital multiphoton imaging. Furthermore, LAMI significantly reduces computation time by incorporating a machine learning-based method instead of a purely physics-based approach, cutting the computation per focal time point from approximately one second to less than one millisecond.
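This speed-up reflects a general pattern: a fast learned model is trained offline to approximate a slow physics calculation and is then queried inside the acquisition loop. The toy sketch below illustrates only that surrogate-model pattern; the actual LAMI inputs, physics and model are different and considerably more sophisticated.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def optimal_power_physics(depth_um: np.ndarray, scattering: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive physics computation of excitation power.
    Here: exponential compensation of scattering losses with depth (illustrative)."""
    return np.exp(depth_um * scattering)

# Offline: sample the slow model and fit a fast surrogate regressor
rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 300.0, 5000)        # imaging depth in micrometres
scatter = rng.uniform(0.001, 0.01, 5000)     # per-micrometre scattering coefficient
X = np.column_stack([depth, scatter])
surrogate = GradientBoostingRegressor().fit(X, optimal_power_physics(depth, scatter))

# Online: during acquisition, query the fast surrogate instead of the physics model
relative_power = surrogate.predict([[150.0, 0.004]])[0]
```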
In a study conducted by Suliana Manley and her team, mitochondrial division was imaged using structured illumination microscopy while minimising photodamage (14). To achieve this, they trained a U-Net to detect spontaneous mitochondrial divisions in dual-colour images labelling mitochondria and the mitochondria-shaping dynamin-related protein 1. The model was integrated into the imaging workflow to switch the acquisition mode between a slow imaging rate, suitable for live-cell observation, and a faster imaging rate, enabling the collection of more highly time-resolved data on mitochondrial fission. Interestingly, the similarity in morphological characteristics and protein accumulation at fission sites allowed the network to be repurposed for detecting fission events in the bacterium C. crescentus without additional training. The research team quantitatively assessed and compared photobleaching decay among the slow, fast, and event-driven acquisitions. Compared to the fast mode, they observed a significant reduction in photobleaching with the event-driven acquisition mode, thereby extending the duration of imaging experiments. As expected, this reduction is not as large as with the slow acquisition mode, but it comes with the benefit of capturing the event at higher temporal resolution, enabling the measurement of, on average, smaller constriction diameters that would otherwise have been missed.

Lastly, the work of Flavie Lavoie-Cardinal's research group focuses on capturing the remodelling of dendritic F-actin, transitioning from periodic rings to fibres, within living neurons using STED imaging (15). They monitor cells with confocal microscopy, based on which synthetic STED images are generated, considerably reducing photodamage. For this, they employ a task-assisted generative adversarial network (TA-GAN), whose training is strengthened by also considering the error of actin ring and fibre segmentation in the synthetic images. During acquisition, the system estimates the uncertainty of the model predicting the synthetic images to decide whether to initiate a real STED acquisition and, if needed, fine-tune the generative model. This makes it possible to track actin remodelling in stimulated neurons with high accuracy and resolution. Their results illustrate that this strategy can reduce the overall light exposure by a significant margin, up to 70%, and, importantly, the authors were able to acquire biologically relevant live super-resolution time-lapse images for 15 minutes.
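The decision rule at the heart of such uncertainty-gated imaging can be sketched independently of the TA-GAN itself. In the sketch below, disagreement across an ensemble of generators stands in for the uncertainty estimate; the ensemble approach, the threshold and the microscope.acquire_sted call are all illustrative assumptions, not the published method.

```python
import numpy as np

def ensemble_uncertainty(predictions: list) -> float:
    """Mean per-pixel standard deviation across ensemble predictions:
    high disagreement means low confidence in the synthetic image."""
    return float(np.std(np.stack(predictions), axis=0).mean())

def acquire_or_synthesise(generators, confocal_image, microscope, threshold=0.15):
    """Return a synthetic super-resolution image when the ensemble is confident;
    otherwise pay the photon cost of a real STED frame (hypothetical API call)."""
    predictions = [g(confocal_image) for g in generators]  # ensemble of generators
    if ensemble_uncertainty(predictions) > threshold:
        return microscope.acquire_sted(), True             # real acquisition
    return np.mean(np.stack(predictions), axis=0), False   # trust the synthesis
```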
By integrating machine learning into the microscopy workflow, researchers have showcased techniques that enhance data quality and quantity while minimising phototoxicity. The applications highlighted in this section demonstrate the transformative potential of machine learning-powered microscopes across diverse imaging modalities and biological questions. As machine learning methods and computational power continue to advance, we can expect even more breakthroughs in intelligent microscopy, bringing us closer to the goal of fully automated, optimised imaging platforms that accelerate biological discovery.

Conclusions and outlook

Data-driven microscopy has demonstrated impressive capabilities in optimising illumination, modality switching, acquisition rates, and event-triggered imaging. These approaches improve the efficiency and information content of image acquisition, enabling the study of dynamic biological processes across different scales. Intelligent microscopes offer new experimental possibilities, from observing rare neuronal activity at nanoscale resolution to studying immune cell dynamics in tissues.

However, realising the full potential of data-driven microscopy requires addressing technical and practical challenges. One major limitation is the need for robust and accurate machine learning models, especially when dealing with small microscopy datasets. Expanding open-source repositories of annotated images and simulations can facilitate the development and validation of new algorithms. Additionally, incorporating unsupervised and self-supervised techniques shows promise in overcoming the scarcity of labelled data.

Another critical aspect is the design of microscope hardware optimised for data-driven imaging. Retrofitting analysis and control modules onto traditional systems is common, but purpose-built instrumentation that integrates software, optics, detectors, and automation is essential. For example, spatial light modulators can enable rapidly adaptable illumination for an optimal signal-to-noise ratio across different samples. On the detection side, high-speed, low-noise cameras or point-scanning systems tailored for live imaging can enhance acquisition speeds.

To increase the adoption of data-driven microscopy software, it needs to be made more user-friendly and accessible. This can be achieved by creating simplified interfaces for designing and executing reactive imaging experiments, allowing non-experts to take advantage of these advanced methods. Expanding open-source platforms like Micro-Manager will encourage community contributions and drive innovation. Additionally, resources such as the BioImage Model Zoo (16), ZeroCostDL4Mic (17), and DL4MicEverywhere (18), which facilitate the sharing and installation of pre-trained models, can help overcome barriers to deploying machine learning solutions.

As data-driven microscopy moves beyond proof-of-concept studies, ensuring the robustness and reproducibility of autonomous microscopes becomes crucial. Maintaining image quality control and detecting failures during unsupervised operation over extended durations is challenging. Detailed performance benchmarking across laboratories using standardised samples can help identify best practices. While this approach can be a great asset in minimising user bias, a selection bias in decision making can still arise. Here, extensive validation of machine learning predictions and adaptive decisions is required to build trust in intelligent systems.

Data-driven microscopy represents a new era for optical imaging, overcoming inherent limitations through real-time feedback and automation. Intelligent microscopes have the potential to transform bioimaging by opening up new experimental possibilities. Pioneering applications demonstrate the ability to capture dynamics, rare events, and nanoscale architecture by optimising acquisition on the fly. While challenges in robustness, accessibility, and validation remain, the future looks promising for microscopes that can sense, analyse, and adapt autonomously. We envision data-driven platforms becoming ubiquitous tools that empower researchers to image smarter, not just faster. The next generation of automated intelligent microscopes will provide unprecedented spatiotemporal views into biological processes across scales, fuelling fundamental discoveries.
ACKNOWLEDGEMENTS
This work was supported by the Gulbenkian Foundation (LM, EGM, HSH, RH), received funding from the European Union through the Horizon Europe program (AI4LIFE project with grant agreement 101057970-AI4LIFE, and RT-SuperES project with grant agreement 101099654-RT-SuperES to R.H.) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001332 to R.H.). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. This work was also supported by the European Molecular Biology Organization (EMBO) (Installation Grant EMBO-2020-IG-4734 to RH and the postdoctoral fellowships ALTF 499-2021 to HSH and ALTF 174-2022 to EGM), the Fundação para a Ciência e Tecnologia, Portugal (FCT fellowship CEECIND/01480/2021 to HSH), the Chan Zuckerberg Initiative Visual Proteomics Grant (vpi-0000000044 with DOI:10.37921/743590vtudfp to R.H.) and the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation (Chan Zuckerberg Initiative Napari Plugin Foundations Grant Cycle 2, NP2-0000000085 granted to R.H.). R.H. also acknowledges the support of the LS4FUTURE Associated Laboratory (LA/P/0087/2020). LM is kindly supported by a collaboration between Abbelight and the Integrative Biology and Biomedicine (IBB) PhD programme of the Instituto Gulbenkian de Ciência.

EXTENDED AUTHOR INFORMATION
• Leonor Morgado: 0000-0003-4510-8456; @ALeonorMorgado
• Estibaliz Gómez-de-Mariscal: 0000-0003-2082-3277; @gomez_mariscal
• Hannah S. Heil: 0000-0003-4279-7022; @Hannah_SuperRes
• Ricardo Henriques: 0000-0002-2043-5234; @HenriquesLab

AUTHOR CONTRIBUTIONS
L.M., H.S.H., and R.H. conceptualised the majority of the manuscript. L.M. wrote the manuscript with input from H.S.H. and R.H. E.G.-M. contributed critical comments and conceptual suggestions to improve the manuscript. All authors reviewed and edited the manuscript.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

Bibliography
1. Estibaliz Gómez-de-Mariscal, Mario Del Rosario, Joanna W. Pylvänäinen, Guillaume Jacquemet, and Ricardo Henriques. Harnessing artificial intelligence to reduce phototoxicity in live imaging. arXiv, 8 2023.
2. Oscar André, Johannes Kumra Ahnlide, Nils Norlin, Vinay Swaminathan, and Pontus Nordenfelt. Data-driven microscopy allows for automated context-specific acquisition of high-fidelity image data. Cell Reports Methods, 3, 3 2023. ISSN 26672375. doi: 10.1016/j.crmeth.2023.100419.
3. Jonatan Alvelid, Martina Damenti, Chiara Sgattoni, and Ilaria Testa. Event-triggered STED imaging. Nature Methods, 19:1268–1275, 10 2022. ISSN 15487105. doi: 10.1038/s41592-022-01588-y.
4. Pedro Almada, Pedro M. Pereira, Siân Culley, Ghislaine Caillol, Fanny Boroni-Rueda, Christina L. Dix, Guillaume Charras, Buzz Baum, Romain F. Laine, Christophe Leterrier, and Ricardo Henriques. Automating multimodal microscopy with NanoJ-Fluidics. Nature Communications, 10, 12 2019. ISSN 20411723. doi: 10.1038/s41467-019-09231-9.
5. Erick Moen, Dylan Bannon, Takamasa Kudo, William Graf, Markus Covert, and David Van Valen. Deep learning for cellular image analysis. Nature Methods, 16:1233–1246, 12 2019. ISSN 15487105. doi: 10.1038/s41592-019-0403-1.
6. Joanna W. Pylvänäinen, Estibaliz Gómez-de-Mariscal, Ricardo Henriques, and Guillaume Jacquemet.
Live-cell imaging in the deep learning era. Current Opinion in Cell Biology, 85, 12 2023. ISSN 18790410. doi: 10.1016/j.ceb.2023.102271.
7. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.
8. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), volume 9351 of Lecture Notes in Computer Science, pages 234–241. Springer, 2015. ISBN 9783319245737. doi: 10.1007/978-3-319-24574-4_28.
9. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv, 6 2014.
10. Christian Conrad, Annelie Wünsche, Tze Heng Tan, Jutta Bulkescher, Frank Sieckmann, Fatima Verissimo, Arthur Edelstein, Thomas Walter, Urban Liebel, Rainer Pepperkok, and Jan Ellenberg. MicroPilot: Automation of fluorescence microscopy-based imaging for systems biology. Nature Methods, 8:246–249, 3 2011. ISSN 15487091. doi: 10.1038/nmeth.1558.
11. Arthur Edelstein, Nenad Amodaj, Karl Hoover, Ron Vale, and Nico Stuurman. Computer control of microscopes using µManager. Current Protocols in Molecular Biology, 10 2010. ISSN 19343639. doi: 10.1002/0471142727.mb1420s92.
12. Zachary R. Fox, Steven Fletcher, Achille Fraisse, Chetan Aditya, Sebastián Sosa-Carrillo, Julienne Petit, Sébastien Gilles, François Bertaux, Jakob Ruess, and Gregory Batt. Enabling reactive microscopy with MicroMator. Nature Communications, 13, 12 2022. ISSN 20411723. doi: 10.1038/s41467-022-29888-z.
13. Henry Pinkard, Hratch Baghdassarian, Adriana Mujal, Ed Roberts, Kenneth H. Hu, Daniel Haim Friedman, Ivana Malenica, Taylor Shagam, Adam Fries, Kaitlin Corbin, Matthew F. Krummel, and Laura Waller. Learned adaptive multiphoton illumination microscopy for large-scale immune response imaging. Nature Communications, 12, 12 2021. ISSN 20411723. doi: 10.1038/s41467-021-22246-5.
14. Dora Mahecic, Willi L. Stepp, Chen Zhang, Juliette Griffié, Martin Weigert, and Suliana Manley. Event-driven acquisition for content-enriched microscopy. Nature Methods, 19:1262–1267, 10 2022. ISSN 15487105. doi: 10.1038/s41592-022-01589-x.
15. Catherine Bouchard, Theresa Wiesner, Andréanne Deschênes, Anthony Bilodeau, Benoît Turcotte, Christian Gagné, and Flavie Lavoie-Cardinal. Resolution enhancement with a task-assisted GAN to guide optical nanoscopy image analysis and acquisition. Nature Machine Intelligence, 5:830–844, 8 2023. ISSN 25225839. doi: 10.1038/s42256-023-00689-3.
16. Wei Ouyang, Fynn Beuttenmueller, Estibaliz Gómez-de-Mariscal, Constantin Pape, Tom Burke, Carlos García-López-de-Haro, Craig Russell, Lucía Moya-Sans, Cristina de-la-Torre-Gutiérrez, Deborah Schmidt, Dominik Kutra, Maksim Novikov, Martin Weigert, Uwe Schmidt, Peter Bankhead, Guillaume Jacquemet, Daniel Sage, Ricardo Henriques, Arrate Muñoz-Barrutia, Emma Lundberg, Florian Jug, and Anna Kreshuk. BioImage Model Zoo: A community-driven resource for accessible deep learning in bioimage analysis. bioRxiv, 2022. doi: 10.1101/2022.06.07.495102.
17. Lucas von Chamier, Romain F. Laine, Johanna Jukkala, Christoph Spahn, Daniel Krentzel, Elias Nehme, Martina Lerche, Sara Hernández-Pérez, Pieta K. Mattila, Eleni Karinou, Séamus Holden, Ahmet Can Solak, Alexander Krull, Tim-Oliver Buchholz, Martin L. Jones, Loïc A. Royer, Christophe Leterrier, Yoav Shechtman, Florian Jug, Mike Heilemann, Guillaume Jacquemet, and Ricardo Henriques.
Democratising deep learning for microscopy with ZeroCostDL4Mic. Nature Communications, 12, 12 2021. ISSN 20411723. doi: 10.1038/s41467-021-22518-0.
18. Iván Hidalgo-Cenalmor, Joanna W. Pylvänäinen, Mariana G. Ferreira, Craig T. Russell, Ignacio Arganda-Carreras, AI4Life Consortium, Guillaume Jacquemet, Ricardo Henriques, and Estibaliz Gómez-de-Mariscal. DL4MicEverywhere: Deep learning for microscopy made flexible, shareable, and reproducible. bioRxiv, 2023. doi: 10.1101/2023.11.19.567606.