OPINION – SUBJECT COLLECTION: IMAGING

Harnessing artificial intelligence to reduce phototoxicity in live imaging

Estibaliz Gómez-de-Mariscal1,*, Mario Del Rosario1,*, Joanna W. Pylvänäinen2, Guillaume Jacquemet2,3,4,5 and Ricardo Henriques1,6,‡

1Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal. 2Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland. 3Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland. 4Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland. 5InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland. 6UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK.
*These authors contributed equally to this work
‡Author for correspondence (rjhenriques@igc.gulbenkian.pt)
E.G., 0000-0003-2082-3277; M.D., 0000-0002-0430-1463; G.J., 0000-0002-9286-920X; R.H., 0000-0002-2043-5234

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

ABSTRACT
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results – particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in the role of AI is needed – AI should be used to extract rich insights from gentle imaging rather than to recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.

KEY WORDS: Photodamage, Phototoxicity, Live microscopy, Artificial intelligence, Deep learning, Data-driven microscopy, Fluorescence microscopy, Live-cell super-resolution microscopy

Introduction
The ability to comprehend biological events is inherently linked to the capacity for non-invasive observation. Fluorescence microscopy has been instrumental in facilitating these analyses across a range of scales (Heimstädt, 1911; Lehmann, 1913; Reichert, 1911). Over the past two decades, technological advancements, such as light-sheet microscopy (Dodt et al., 2007; Huisken et al., 2004; Reynaud et al., 2008; Verveer et al., 2007), structured illumination microscopy (SIM) (Gustafsson, 2000; Heintzmann and Huser, 2017) and single-molecule localisation microscopy (SMLM) (Betzig et al., 2006; Hess et al., 2006; Lelek et al., 2021), have revolutionised fluorescence light microscopy, enabling us to characterise biological events from molecular interactions up to larger living organisms.

Advanced microscopy imaging generally requires high levels of fluorescence excitation light, which results in phototoxicity or photodamage. These terms refer to the detrimental impacts of light, especially when employing photosensitising agents or high-intensity illumination, and represent a key challenge for live microscopy imaging (see https://focalplane.biologists.com/2021/05/14/phototoxicity-the-good-the-bad-and-the-quantified/ and Reiche et al., 2022; Tinevez et al., 2012; Wäldchen et al., 2015). Although toxicity is only an issue for living systems, photodamage also occurs in non-living materials and thus, for simplicity, both terms are used here interchangeably. Sample illumination might also result in photobleaching, a process characterised by an irreversible loss of fluorescent signal attributed to the destruction of the fluorophore. This is one manifestation of light damage, among other possible effects.
Phototoxicity severely influences experimental outcomes by altering the biological processes under observation, skewing findings and impeding consistency (Alghamdi et al., 2021). Therefore, during live-cell microscopy, it is crucial to carefully consider these factors to prolong the duration of imaging and achieve dependable research outcomes (Icha et al., 2017; Kiepas et al., 2020; Laissue et al., 2017; Mubaid and Brown, 2017; Tosheva et al., 2020). The biological validity of live-cell imaging experiments requires a precise balance between acquiring high-quality data that can be analysed and maintaining the health of the specimen (depicted in Fig. 1).

Major advancements have been made in both hardware and software technologies aiming to reduce light damage to the sample. Importantly, super-resolution techniques, such as stimulated emission depletion (STED) microscopy, achieve nanoscale spatial resolution by eliminating the diffraction barrier, at the cost of damaging the sample due to the high illumination intensity required (Hell and Wichmann, 1994). Reversible saturable optical fluorescence transition (RESOLFT) microscopy overcomes the main limitation of STED – the high degree of photobleaching and photodamage of the sample – as it requires much lower light intensities, comparable to those used in confocal microscopy (Hofmann et al., 2005; Ratz et al., 2015) (Table 1). Hardware innovations, such as lattice light sheet (LLS) microscopy (Chen et al., 2014) and Airyscan microscopy (Huff, 2015), are notable examples of acquisition approaches that are gentler on sample health while still accomplishing high resolution (Table 1). Additionally, computational advancements, such as fluctuation-based super-resolution microscopy, offer promising solutions to photodamage (Dertinger et al., 2009; Gustafsson et al., 2016; Laine et al., 2023). A recent study has shown that a two-colour illumination scheme combining near-infrared illumination with fluorescence excitation has the capacity to limit the phototoxicity caused by light-induced interactions with fluorescent proteins (Ludvikova et al., 2023). These technological breakthroughs have the potential to optimise observation accuracy while mitigating photodamage.
In parallel, artificial intelligence (AI), specifically deep learning, can significantly improve imaging information and analysis in low-illumination scenarios by considerably enhancing image quality and quantification (Belthangady and Royer, 2019; Melanthota et al., 2022; Tian et al., 2021). This has inspired the search for integrated solutions by the microscopy community (Bouchard et al., 2023; Ebrahimi et al., 2023; McAleer et al., 2021; https://www.microscope.healthcare.nikon.com/en_EU/resources/application-notes/reduction-of-phototoxicity-of-fluorescent-images; Wagner et al., 2021). The fusion of advanced optical hardware with computational models and AI heralds new breakthroughs in overcoming the sample damage induced by traditional live fluorescence microscopy methodologies, marking the advent of AI-enhanced smart microscopy (Fig. 1).

In this Opinion piece, we first examine the mechanisms of phototoxicity and strategies for its quantification. Next, we delve into how deep learning can enhance microscopy image analysis while supporting more sample-friendly imaging setups. Finally, we explore smart microscopes that integrate deep learning to balance sample health and data quality in real-time acquisitions (Fig. 1). Throughout, we aim to make the case that, although computational advances are powerful, we must ensure that biological relevance remains the central focus. As AI continues to enhance imaging capabilities, we must not lose sight of the overarching goal – to uncover biological truths with minimal or no disturbance. Rather than blindly pushing the physical limits of microscopy, future AI-enabled technologies should be designed to extract maximal information through minimal invasiveness. Universal standard metrics of photodamage would aid this pursuit, enabling quantitative assessments of imaging protocols. We argue that embracing this balanced perspective is crucial for developing microscopes that truly observe life with minimal perturbation. Our aim is to emphasize that striking the right equilibrium between sample health and data quality will allow AI to fully realise the promise of gentle yet highly informative live-cell fluorescence microscopy.

Phototoxicity quantification
Fluorescence microscopy uses fluorescent reporters to visualize cell components and activities (Heimstädt, 1911; Lehmann, 1913). However, exciting fluorophores with light inevitably promotes the generation of reactive oxygen species (ROS) through interactions with ambient oxygen. At physiological levels, ROS participate in signalling and are present in regular cellular processes. However, excessive ROS cause oxidative stress and perturb the biological processes under observation – an effect termed phototoxicity or photodamage when caused by light (Icha et al., 2017; Laissue, 2021; Reiche et al., 2022). The primary ROS-related molecules include hydroxyl radicals, hydrogen peroxide, nitric oxide and singlet oxygen, which readily oxidize biomolecules, such as lipids, proteins and DNA (Eichler et al., 2005; Hockberger et al., 1999). Higher-intensity UV and blue excitation light can also directly damage DNA by producing thymine dimers (Zhang et al., 2022b). Additionally, fluorophores photobleach via ROS generation upon light exposure (Demchenko, 2020). Although interrelated, photobleaching and photodamage are distinct and can occur independently (Ludvikova et al., 2023).
At the cellular level, accumulating oxidative stress disrupts redox homeostasis and normal physiology (Icha et al., 2017; Tosheva et al., 2020). Effects span mitochondrial fragmentation, cytoskeletal derangements, stalled proliferation and loss of motility (Alam et al., 2022; McDonald et al., 2012; Zhang et al., 2022b). In whole organisms, this manifests as tissue degeneration, developmental defects and apoptosis (Laissue et al., 2017).

Given the varying light energy requirements of current microscopy modalities, many preventive strategies exist to reduce the effects of phototoxicity, such as limiting light irradiation by reducing the number of acquisition points or the light dose (Kiepas et al., 2020; Mubaid and Brown, 2017; Reynaud et al., 2008), using sensitive light detectors, such as an array of 32 GaAsP PMT detectors or highly sensitive sCMOS cameras (Huff, 2015; Saxena et al., 2015), and performing bioluminescence-based assays that reduce the amount of light required (Suzuki et al., 2016). Other strategies focus on controlling oxidative stress effects in biological samples by supplementing antioxidants (Harada et al., 2022 preprint; Kesari et al., 2020) or chemically increasing the oxidative stress resistance of the sample itself (Kunkel et al., 2018). Unfortunately, the degree of photodamage elicited varies based on multiple factors, including sample traits, illumination parameters and imaging modality (Table 1) (Laissue et al., 2017; Reiche et al., 2022; Tinevez et al., 2012). For example, actively dividing cells tolerate photodamage better than post-mitotic neurons (Stevenson et al., 2006). Additionally, the imaging modality is particularly critical for sample health, as techniques that yield a higher signal-to-noise ratio (SNR), such as super-resolution methods [STED, SMLM, SIM, total internal reflection fluorescence (TIRF) and confocal microscopy], generally require higher light energy than low-SNR or standard-resolution methods (LLS or wide-field microscopy), creating a greater negative impact on the sample (Table 1) (Betzig et al., 2006; Blom and Brismar, 2014; Dertinger et al., 2009; Dodt et al., 2007; Gustafsson, 2000; Gustafsson et al., 2016; Heintzmann and Huser, 2017; Hess et al., 2006; Huff, 2015; Klar and Hell, 1999; Lelek et al., 2021). For this reason, designing a strategy to prevent photodamage remains challenging.

Fig. 1. Integrating deep learning with live-cell microscopy. The delicate balance between sample health and the information obtained by imaging requires a compromise between both elements. Deep learning-augmented microscopy aims to reduce this compromise, striving to obtain equal information from our sample with less impact on its health.

Table 1. Light irradiation across microscopy modalities

Microscopy modality   Irradiation range (W/cm2)   Irradiation average (W/cm2)   References
STED                  1000–20,000                 10,000                        Wildanger et al., 2008
SMLM or RESOLFT       1000–10,000                 5000                          Chen et al., 2018; Grotjohann et al., 2011
Confocal              100–5000                    1000                          Icha et al., 2017
SRRF                  50–1000                     100                           Culley et al., 2018
TIRF; SIM             5–100                       10                            Kwakwa et al., 2016; Li et al., 2015
LLS; wide-field       0.5–100                     5                             Icha et al., 2017; Schermelleh et al., 2019
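To make the numbers in Table 1 tangible, the back-of-the-envelope sketch below compares the cumulative light dose delivered by different modalities over a time lapse. The irradiance averages come from Table 1; the exposure time and frame count are arbitrary illustrative assumptions, not recommended settings.

```python
# Back-of-the-envelope light dose comparison across modalities.
# Irradiance averages are taken from Table 1; exposure time and frame
# count are hypothetical illustrative values only.

IRRADIANCE_W_PER_CM2 = {  # averages from Table 1
    "STED": 10_000,
    "SMLM/RESOLFT": 5_000,
    "Confocal": 1_000,
    "SRRF": 100,
    "TIRF/SIM": 10,
    "LLS/wide-field": 5,
}

def cumulative_dose(irradiance_w_cm2, exposure_s, n_frames):
    """Total light dose in J/cm^2: irradiance x exposure time, summed over frames."""
    return irradiance_w_cm2 * exposure_s * n_frames

# Hypothetical time lapse: 100 ms exposure per frame, 600 frames
# (e.g. one frame every 10 s for 100 min).
for modality, irradiance in IRRADIANCE_W_PER_CM2.items():
    dose = cumulative_dose(irradiance, exposure_s=0.1, n_frames=600)
    print(f"{modality:>15}: {dose:>12,.0f} J/cm^2")
```

Even with identical acquisition settings, the modalities in Table 1 differ by more than three orders of magnitude in cumulative dose, which is why the choice of modality dominates the phototoxicity budget.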
Notably, there are no universal live-cell imaging metrics that relate light exposure to the consequent damage to the sample and that can be used to assess and optimise imaging systems. Quantifying photodamage makes it possible to tune acquisition parameters to reduce fluorescence illumination and establish a workable compromise between sample health and, accordingly, image information (Fig. 1). Excitingly, a quantitative measurement of phototoxicity could be used to train an intelligent virtual component that automatically decides on less aggressive image acquisition parameters and triggers these accordingly in the microscope. Importantly, such universal metrics would also support the reproducibility of biological readouts and improve result robustness. By contrast, without these universal metrics, it is challenging to fully leverage the capacity to image biological systems, since we are not assessing potential damage that arises from the imaging itself. This impedes the assessment of experimental conditions to achieve maximum spatial and temporal resolution while preserving cell viability (Box 1).

Numerous known markers exist to identify and characterise sample damage based on the previously mentioned phototoxicity hallmarks (Alghamdi et al., 2021; Laissue et al., 2017). However, most of these phototoxicity markers require fluorescence or luminescence excitation to report back information. Thus, incorporating them might compromise fluorescent channels usually reserved for observing conditions of interest (e.g. markers for DNA oxidative damage) and, when paired with live-cell experiments, could increase the phototoxicity risk to the specimen due to the interaction of light with oxygen radicals. Yet, quantification-based screenings of phototoxicity are less commonly employed than the observation and experience of researchers to assess cell health (Laissue et al., 2017; Tosheva et al., 2020; Wäldchen et al., 2015).

Although there are some label-free attempts to provide quantifiable metrics for the assessment of imaging setups and support improving sample viability, they often simplify the impact of fluorescence excitation light to a binary classification of viable/healthy or non-viable/dead (Icha et al., 2017; Richmond et al., 2017 preprint; Tinevez et al., 2012; Wäldchen et al., 2015). By considering the decline of cell health and its recovery as valid photodamage stages for an image-based classifier, the assessment of phototoxicity could be made more flexible. Here, a gradient model that considers the accumulation of discrete minor effects would more accurately depict the spectrum of effects documented in the existing literature. For example, the heart rate during zebrafish embryo development was recently used as a gradual quantitative measure of phototoxicity to optimise a multiphoton light-sheet microscopy acquisition setup (Maioli et al., 2020). The ability of deep learning to extract meaningful and general features from big data has enabled image-based cell profiling, phenotyping and even encoding of metastatic potential. One could, for example, imagine using equivalent techniques to identify, encode and model photodamage based on imaged cell morphology or monitored cell behaviour (Caicedo et al., 2017; Chandrasekaran et al., 2021; Doron et al., 2023 preprint; Wu et al., 2020).
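As a thought experiment, the gradient model argued for above could be prototyped as a continuous score built from routine label-free readouts. The sketch below is purely hypothetical: the feature names, baseline values and weights are illustrative placeholders, not validated phototoxicity markers.

```python
# Hypothetical continuous photodamage score built from label-free,
# per-cell measurements. All feature names, baselines and weights are
# illustrative placeholders and would need empirical calibration.

BASELINE = {"motility_um_per_min": 1.2,   # typical unperturbed values
            "roundness": 0.45,
            "division_rate_per_h": 0.04}

WEIGHTS = {"motility_um_per_min": -1.0,   # slowing down -> more damage
           "roundness": +1.0,             # rounding up / blebbing -> more damage
           "division_rate_per_h": -1.0}   # stalled proliferation -> more damage

def photodamage_score(features, baseline=BASELINE, weights=WEIGHTS):
    """Graded score: 0 = indistinguishable from baseline, higher = more damage.
    Each feature contributes its signed relative deviation from baseline."""
    score = 0.0
    for name, value in features.items():
        relative_deviation = (value - baseline[name]) / baseline[name]
        score += weights[name] * relative_deviation
    return max(0.0, score)  # clamp: healthier-than-baseline reads as 0

# A moderately stressed cell accumulates several small deviations (~1.8):
print(photodamage_score({"motility_um_per_min": 0.6,
                         "roundness": 0.7,
                         "division_rate_per_h": 0.01}))
```

The point of such a formulation is that small, sub-lethal deviations accumulate into a graded readout, rather than collapsing everything into a viable/non-viable binary.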
Despite these advances in image analysis, general metrics based on cell physiological cues to assess photodamage in live-cell imaging across different biological samples and imaging setups are missing. As advanced image analysis tools become increasingly available, a future strategy could incorporate specific phototoxicity assessment within automated image acquisition and analysis workflows. To monitor sample health, we would suggest imaging approaches that are less aggressive but still sufficiently accurate in terms of resolution and SNR, such as holotomography microscopy. These types of automated observations, paired with standardised experimental guidelines for identifying and quantifying phototoxic events, represent a promising solution. We, therefore, argue that adopting such methods should be prioritised by scientists aiming to create robust imaging strategies for visualising biological phenomena.

Deep learning for microscopy to the rescue
The recent advancements in deep learning have laid a solid foundation for the growing field of deep learning-augmented microscopy (Pylvänäinen et al., 2023), which holds great promise due to the flexibility it introduces for imaging experiments (Belthangady and Royer, 2019; Meijering, 2020; Melanthota et al., 2022; Moen et al., 2019; Tian et al., 2021). Among the existing techniques for microscopy image processing, many offer possibilities to reduce phototoxicity (Fig. 2). Previous discussions (Tian et al., 2021) divide such techniques between strategies that aim either to surmount the physical limitations intrinsic to live fluorescence microscopy imaging (i.e. acquisition speed or illumination) or to enhance the content of qualitatively inferior but more sample-friendly image data (Box 2). The former includes techniques, such as denoising, restoration or temporal interpolation, which allow reduced light exposure by using lower laser powers or lower acquisition frame rates. The latter, referred to by the original authors as 'augmentation of microscopy data contrast', includes techniques such as virtual super-resolution (Chen et al., 2021; Jin et al., 2020; Qiao et al., 2021, 2022; Wang et al., 2019; Zhang et al., 2022a).

Box 1. The relationship between fluorescence excitation light, image information and phototoxicity
Modern microscopy methods aim to minimise the required illumination by targeting specific information for visualisation. However, acquired image quality depends on several factors, including the SNR, contrast and spatiotemporal resolution. Each microscopy technique has inherent limitations that constrain optimising these properties. This necessitates balancing trade-offs between them, described as the 'microscope pyramid of frustration' (Scherf and Huisken, 2015; Weigert et al., 2018). For super-resolution microscopy, this trade-off space was recently characterised (Jacquemet et al., 2020; Tosheva et al., 2020). The balance can be tuned to experimental needs by adjusting light exposure and acquisition speed – both common levers to enable gentler imaging. As such, methods such as deep learning that computationally enhance image quality from minimally invasive acquisitions are particularly valuable. They alleviate the trade-off between image information and phototoxicity constraints (Fig. 1). Indeed, the growing capacity of deep learning to refine image-based information is attracting interest from the microscopy community.
It is becoming a popular strategy to enable reduced-phototoxicity imaging setups (Ebrahimi et al., 2023; Scherf and Huisken, 2015).

Recent advances in denoising and restoration using deep learning have shown promising capabilities to support live-imaging setups with reduced phototoxicity. For example, these methods can virtually remove noise, enhance SNR and improve fluorescence channel contrast in images acquired under low-illumination conditions (Krull et al., 2019, 2020; Weigert et al., 2018; Zhang et al., 2022a) (Fig. 2C). Other techniques can computationally reconstruct isotropic 3D volumetric information from sparse optical sectioning data (Chen et al., 2021; Guo et al., 2020; Li et al., 2023, 2022b; McAleer et al., 2021; Park et al., 2022). Such capabilities allow microscopists to use gentle imaging protocols with reduced fluorescence excitation or sparse Z-stack sampling, while still recovering high-quality image data computationally after acquisition. Specifically, deep learning models can be trained on paired datasets from low- and high-illumination imaging of the same samples (Box 2). The models then learn to enhance contrast and virtually recover the lost information when applied to new low-exposure test data. This strategy to reduce phototoxicity is also being adopted by commercial solutions such as Enhance.ai, currently part of the Nikon NIS-Elements imaging software (https://www.microscope.healthcare.nikon.com/products/software/nis-elements). These approaches are inherently gentler on live specimens because they support imaging setups with reduced phototoxic excitation levels. Similarly, intelligent temporal interpolation techniques, such as the content-aware frame interpolator (CAFI) or DBlink, allow acquisition frame rates to be slowed down, while accurately reconstructing the missing timepoints later using deep learning (Priessner et al., 2021 preprint; Saguy et al., 2023) (Fig. 2D).
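To make the training logic behind such interpolators concrete, here is a minimal PyTorch sketch of the generic idea: a small network learns to predict an intermediate frame from its two neighbours, trained on triplets sampled from a movie acquired once at full frame rate. This illustrates the principle only; the architecture, shapes and hyperparameters are placeholders and do not reproduce CAFI or DBlink.

```python
import torch
import torch.nn as nn

# Minimal sketch of learned temporal interpolation: a small CNN is
# trained to predict frame t+1 from frames t and t+2.

class MidFrameNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(          # two flanking frames in, one frame out
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, prev_frame, next_frame):
        return self.net(torch.cat([prev_frame, next_frame], dim=1))

model = MidFrameNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Placeholder training data: a fast-frame-rate movie (B, T, 1, H, W),
# acquired once, from which triplets (t, t+1, t+2) are sampled.
movie = torch.rand(4, 8, 1, 64, 64)

for epoch in range(3):
    for t in range(movie.shape[1] - 2):
        prev_f, mid_f, next_f = movie[:, t], movie[:, t + 1], movie[:, t + 2]
        prediction = model(prev_f, next_f)
        loss = loss_fn(prediction, mid_f)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At inference, the movie is acquired at half the frame rate and the
# trained model fills in the missing intermediate time points.
```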
As discussed above, reducing the number of illumination timepoints can substantially decrease cumulative photodamage. Longer intervals between acquisitions might also enable biological recovery processes to repair photodamage, further supporting longer-term live imaging. However, one should be cautious when using these techniques for quantifications other than segmentation and tracking, such as intensity-based quantifications (discussed further below).

Another innovative approach to enable reduced-illumination imaging setups is exploiting cross-modal style transfer methodologies. In brief, these methods involve training a deep learning model to computationally convert the style of an image to mimic that of a different imaging modality (Fig. 2F). For example, it has been shown that SIM images can be inferred from input images acquired with wide-field illumination, which reduces the photon dose by a factor of 9 in 2D and 15 in 3D (Qiao et al., 2021). This capability extends to numerous types of fluorescence microscopy modalities, such as confocal to STED (Bouchard et al., 2023; Wang et al., 2019), SIM and super-resolution radial fluctuations (SRRF) microscopy (von Chamier et al., 2021), or wide-field to SMLM (Macke et al., 2021; Nehme et al., 2018; Ouyang et al., 2018). The enhancement of spatial resolution through learning the fine details of a sample is similar in objective to traditional deconvolution. It offers comparable benefits for mitigating phototoxicity, as it allows enhanced-resolution data to be generated from gentler imaging methods (wide-field and/or confocal versus SIM, STED and SMLM) (Table 1).

These techniques have some limitations that are subject to debate. For instance, they might have limited accuracy when predicting fluorescence intensity, which can make it challenging to measure protein stoichiometry.

Fig. 2. The deep learning landscape for gentler live-cell microscopy imaging. (A) Comparison of the light intensities of different light microscopy modalities on a sample according to its size. Some modalities, such as light-sheet microscopy, use a lower amount of light on the sample to increase cell survival. This modality is commonly used to image embryos owing to its reduced phototoxicity, but it provides lower spatial resolution. Other modalities, such as STED, sacrifice sample health to gain spatial resolution, as they require high light intensities. Typically, the microscopy imaging setups designed for nanoscale resolution are used on small specimens, such as cell cultures, and necessitate higher laser powers that are more aggressive for the living matter being imaged. (B) Common approaches to train deep learning methods in the specific context of image denoising (see Box 2). Supervised training requires datasets of paired images, so that given an input noisy image (low quality), the output of the network (inferred image) is compared with the expected ground truth image (high quality) to compute a loss value and train the network. In generative approaches, the images are not paired, so the network learns the distributions of the low-quality and high-quality image datasets and how to translate one into the other. The grey arrowheads pointing at each other represent the cycle in unsupervised generative approaches of translating an image from one distribution into the other and back using a neural network. Self-supervised approaches are used when only a dataset of low-quality images is available.
Here, the input image is transformed to virtually create a paired image that represents a pseudo-ground truth and can be used to train the network. (C–F) Deep learning-augmented microscopy. Deep learning models can be used to enable microscopy acquisitions that use lower fluorescence light intensities or illuminate the sample less often. Thereafter, the images are processed with a model trained for a specific task. (C) Image contrast enhancement with denoising and restoration. High illumination intensities are used to obtain images with a high signal-to-noise ratio (SNR) at the expense of causing photobleaching, among other effects that can be detrimental to the sample. Reducing the fluorescence illumination intensity prevents photobleaching but results in images with a low SNR. By using deep learning, the acquisition duration can be extended by lowering the laser power and acquiring images with a low SNR and decreased photobleaching. An image restoration model can be trained on pairs of images from fixed samples, which facilitates creating perfectly aligned pairs of low- and high-SNR images. After training and evaluation, the model can be used to process the more gently acquired low-SNR time-lapse movies and enhance contrast, recovering the image quality of high-illumination setups. GT, ground truth. (D) Temporal interpolation. Here, the temporal resolution is improved by training a model to predict the intermediate time points between two given frames. (E) Spatial resolution enhancement. A super-resolution model is trained to translate images from one modality (e.g. confocal) into another (e.g. SIM or SRRF). Depending on the availability of paired images for training, one should choose between a generative or a supervised learning approach. (F) Structural and molecular information enhancement. The phototoxic effects of light are not equal across wavelengths. Longer wavelengths, such as red light, are less phototoxic than shorter ones, such as UV light. Given that it is not always possible to use less damaging wavelength options, one can opt to circumvent the partial use of fluorescence illumination by using virtual labelling approaches that can generate labelling of existing structures from bright-field images, autofluorescence or crosstalk between channels. Images in B, E and F were extracted and modified from von Chamier et al. (2021) and those in C and D from Spahn et al. (2022), both published under a CC-BY 4.0 license.

Box 2. Guidelines for annotation and model training to optimize deep learning for microscopy image analysis
Traditionally, deep learning models are trained in a supervised manner (using paired input-output image datasets) or an unsupervised manner (the model learns patterns and insights from unlabelled input images without any explicit guidance) (Fig. 2B). Supervised approaches have demonstrated superior accuracy and specificity to the task and data distribution, but their applicability requires the availability of paired images. Microscopy imaging allows for creating paired image datasets by alternating acquisition setups (e.g. channels) and combining different modalities (e.g. paired wide-field microscopy and SIM) (Qiao et al., 2021; von Chamier et al., 2021), simulating data (Fang et al., 2021; Nehme et al., 2018; Oh and Jeong, 2023; Sage et al., 2019; Saguy et al., 2023) or, recently, by developing correlative approaches, such as correlative light and electron microscopy (CLEM) (de Boer et al., 2015).
However, cases remain in which obtaining paired input and output images to train deep learning models is still a limitation. For example, in live imaging, paired acquisitions can be complicated by sample movement or photobleaching. Alternatively, one could acquire paired images of ex vivo samples – providing perfectly aligned images for training and assessment – and subsequently perform inference on in vivo images (Fig. 2C) (Spahn et al., 2022; Weigert et al., 2018; Xu et al., 2023). Importantly, collecting images from fixed samples supports the faster creation of more extensive and diverse datasets than live imaging. However, there are scenarios in which such paired datasets do not encapsulate the complexity of live experiments, are not experimentally feasible or where cross-modality acquisition devices are inaccessible. Therefore, this limitation, as well as time-consuming data annotation processes, has propelled the exploration of alternative approaches, such as semi- or weakly supervised (Bilodeau et al., 2022), self-supervised (Krull et al., 2019, 2020) or generative techniques (Li et al., 2022a; Wang et al., 2019; Xu et al., 2023) (Fig. 2B).

Despite these limitations, the ability of such techniques to enhance image quality has a direct effect on subsequent tasks, such as localization, tracking and segmentation, thereby improving their accuracy (Belthangady and Royer, 2019; Hutson, 2018; Priessner et al., 2021 preprint; Weigert et al., 2018). It has been suggested that less aggressive live-imaging approaches might cause aesthetic blur and be less appealing. However, such datasets can comprise easier-to-interpret data due to the reduction in the biological artefacts that are induced by photodamage (e.g. apoptosis, stressed cells or specimen shrinkage during illumination) and have the benefit of preserving close-to-physiological conditions (Weigert et al., 2018).

By exploiting the capability of data-driven methods, virtual (or artificial) labelling approaches have emerged (Fig. 2F). Virtual labelling uses deep learning to computationally predict fluorescence labelling patterns and signals directly from an image representing another fluorescence channel or from transmitted-light images, without actual fluorescent tags. By inferring some of the biological structures through deep learning methods, virtual labelling allows end users to eliminate the most harmful illumination wavelengths from their experiments. For example, cell nuclei (e.g. Hoechst staining excited with a ∼475 nm laser) can be virtually inferred from actin (e.g. Lifeact staining excited with a ∼561 nm laser), which results in a halved light dose compared with an imaging setup that illuminates both channels (von Chamier et al., 2021). Likewise, artificial labelling can be employed for spectral unmixing, which offers several key advantages, including illumination channel reduction and acceleration of image acquisition (Fig. 2F) (Jiang et al., 2023 preprint; McRae et al., 2019; Xue et al., 2022). One could also eliminate the need for sample exposure to excitation light by estimating specific fluorescence information (e.g. nucleoli, cell membrane, nuclear envelope, mitochondria or neuron-specific tubulin) from bright-field input images (Christiansen et al., 2018; Ounkomol et al., 2018; von Chamier et al., 2021).
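The sketch below illustrates the supervised training logic common to such virtual labelling methods: regressing a fluorescence channel from a transmitted-light input on paired images. The tiny network, data shapes and hyperparameters are placeholders; published systems (e.g. Christiansen et al., 2018; Ounkomol et al., 2018) use considerably larger U-Net-style architectures.

```python
import torch
import torch.nn as nn

# Illustrative virtual-labelling setup: regress a fluorescence channel
# (e.g. a nuclear stain) from a transmitted-light image. This toy CNN
# only sketches the training loop, not a production architecture.

class VirtualStain(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),  # normalised intensity
        )

    def forward(self, brightfield):
        return self.net(brightfield)

model = VirtualStain()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder paired data: bright-field inputs and a fluorescence
# ground truth acquired once, ideally on fixed samples (see Box 2).
brightfield = torch.rand(8, 1, 128, 128)
fluorescence_gt = torch.rand(8, 1, 128, 128)

for step in range(100):
    prediction = model(brightfield)
    loss = nn.functional.mse_loss(prediction, fluorescence_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the fluorescence channel is inferred from the
# transmitted-light channel alone, removing one excitation wavelength.
```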
It is worth noting that this last technique – estimating fluorescence information from bright-field inputs – is also categorized as a cross-modality style transfer approach. All these approaches enable the acquisition of images with less explicit information that, after virtual labelling, can be quantitatively processed as if the fluorescence information had been acquired. Among these benefits, the reduction of illumination channels is pivotal in enabling more sample-friendly setups, and indeed, virtual labelling is often suitable as an intermediary step for further quantification, such as segmentation or tracking (Hollandi et al., 2020; von Chamier et al., 2021). Of note, most of the approaches cited here can now be widely adopted thanks to software developments that enable the training, deployment and sharing of deep learning models in a user-friendly manner (Gómez-de-Mariscal et al., 2021; Ouyang et al., 2022 preprint; Spahn et al., 2022; von Chamier et al., 2021). We expect that, with the growth of these resources along with more easily accessible high-performance computational power, deep learning-augmented microscopy will be more widely harnessed in image-driven life-sciences research.

Gentle smart microscopes
The integration of AI components directly into the fluorescence microscopy acquisition sequence shows great promise for minimizing photodamage in real time and enabling accurate observations of biological dynamics. Analogous to autonomous vehicles or intelligent industrial robots, microscopes can incorporate AI capabilities to make real-time decisions by analysing the observed image data and integrating them into an intelligent feedback loop. This loop is responsible for analysing the data being observed in real time and updating the imaging parameters (e.g. time-lapse frequency or illumination intensity) based on visual cues; it would balance sample health against image quality to optimize data collection (Figs 1 and 3) (Scherf and Huisken, 2015). For example, in tracking the membrane dynamics of individual cells, the microscope could use a low frame rate to gently image a fluorescent membrane marker, while simultaneously tracking cells via a transmitted-light channel at a higher frame rate. This system could identify fast dynamics or ambiguous situations (e.g. two cells moving close together) and balance trade-offs on when and where to increase imaging speed or acquire extra channels, such as nuclear stains, to properly identify each cell. Incorporating quantitative phototoxicity reporter data on cell resilience would further optimize these decisions, as losing a cell track might be preferable to aggressively re-imaging the entire sample. More broadly, developing such smart microscopes is tied to balancing the combination of features (spatiotemporal resolution, SNR, field-of-view size, fluorescent channels, etc.) that extract the most relevant information against factors that preserve sample health. Likewise, the availability of quantitative metrics for sample health and image information quality should stimulate the design of AI systems that, after training, would automatically make these decisions driven by the observed data in a smart fashion.

There are already a number of conceptualised and proven approaches towards such gentle smart microscopy. Of those, event-driven approaches automatically identify specific objects or incidents in images acquired in a less phototoxic setup, which triggers their acquisition in real time (Alvelid et al., 2022; André et al., 2023; Chiron et al., 2022; Fox et al., 2022; Mahecic et al., 2022) (Fig. 3).
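The control logic of such an event-driven, health-aware loop can be summarised in a few lines. The following is a schematic sketch only; every function is a hypothetical placeholder standing in for hardware control, a trained event detector and the kind of quantitative photodamage reporter advocated above. None corresponds to an existing API.

```python
import random

# Schematic acquisition loop for an event-driven, gentle microscope
# (cf. Fig. 3E). All functions below are hypothetical placeholders.

def acquire(intensity):
    """Placeholder for camera/illumination control; returns a dummy frame."""
    return {"illumination": intensity}

def detect_event(frame):
    """Placeholder event detector (e.g. a trained classifier)."""
    return random.random() < 0.1      # pretend ~10% of frames contain an event

def estimate_health(frame):
    """Placeholder label-free sample-health score (1.0 = fully healthy)."""
    return random.uniform(0.2, 1.0)

LOW, HIGH = 0.05, 1.0                 # relative illumination intensities
HEALTH_FLOOR = 0.4                    # never escalate below this health score

def acquisition_loop(n_steps):
    for _ in range(n_steps):
        frame = acquire(LOW)          # default: gentle monitoring frame
        if detect_event(frame) and estimate_health(frame) > HEALTH_FLOOR:
            # Key event detected and the sample can tolerate it:
            # acquire a short high-quality, high-illumination frame.
            yield acquire(HIGH)
        else:
            # Keep the gentle frame; restoration or temporal interpolation
            # can upgrade it computationally after acquisition.
            yield frame

frames = list(acquisition_loop(50))
```

The essential design choice is that high illumination is the exception that must be justified twice: once by the detection of a relevant event and once by the estimated resilience of the sample.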
Although these adaptive approaches reduce the induced phototoxicity by increasing the illumination of the sample only when needed, in most cases they are equipped with deep learning models that are trained to recognise predefined objects or elements in the images. Given the complexity of biological setups, these predefined elements might not always be present, limiting or biasing the observation of novel physiological processes. Alternative approaches propose the integration of image resolution enhancement in the image acquisition loop to obtain faster and gentler setups. For instance, a deep learning model that is trained and validated in the acquisition loop to enhance the volumetric reconstruction of the sample, providing an adaptive light field microscopy (LFM) setup, has been presented (Wagner et al., 2021). In the context of super-resolution imaging, evaluating the quality of virtually inferred STED images from confocal microscopy images has been proposed, so that the uncertainty in the observed sample can be determined and a decision made on whether a new STED image should be acquired (Bouchard et al., 2023). All these works pose new paradigms in the realm of smart microscopy.

Despite sample health preservation being both a strong motivation and a major limitation in live-cell imaging, none of the currently existing solutions can directly analyse, estimate and integrate information on sample health into the acquisition loop. Robust photodamage reporters that provide quantitative assessments of sample health without requiring additional fluorescence channels can, therefore, directly contribute to more reproducible biological readouts. This could involve exploring modalities such as transmitted-light microscopy or label-free techniques. Moreover, quantitative reporters could support the design of automated workflows that analyse sample health in real time during image acquisition, rather than only evaluating image quality (Fig. 3). This would allow the detection of early signs of photodamage and the adaptive determination of optimal imaging conditions. In other words, it would open the door to data-driven, sample-oriented live microscopy.

Pursuing such technical innovations while deepening our understanding of the mechanisms that give rise to photodamage will enable microscopists to unlock the full potential of smart imaging. With photodamage-aware AI and automated tools, the goal of observing undisturbed physiological processes can be realised. This will profoundly enhance the capacity of fluorescence microscopy to uncover ground truths in biology.

Challenges and future outlook
In addition to determining the optimal deep learning approaches for various image-processing tasks, the success of AI-enhanced live microscopy depends on its ability to reliably extract quantifiable physiological information from the acquired image data. Thus, more rigorous validation methodologies and standardized quantitative strategies are still required to ensure both the biological fidelity of computationally restored images and the integrity of recovered signal intensities from techniques that artificially generate or enhance images (i.e. virtual microscopy imaging) (Belthangady and Royer, 2019; Laine et al., 2021; Lambert and Waters, 2023). For example, a better understanding of how intensity-based quantifications should be performed on virtually enhanced images is very much needed.
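As one example of what such a quantitative check could look like, the sketch below compares per-object mean intensities between a restored image and its high-SNR ground truth on held-out pairs. The images and segmentation used here are random placeholders; in practice, the restored image would come from the trained model and the labels from a real segmentation.

```python
import numpy as np

# Sanity check for intensity-based quantification on virtually enhanced
# images: compare per-cell mean intensities in the restored image
# against the high-SNR ground truth on held-out paired data.

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256))                       # high-SNR reference
restored = ground_truth + rng.normal(0, 0.05, (256, 256))   # stand-in model output
labels = rng.integers(0, 20, (256, 256))                    # stand-in segmentation (0 = background)

def per_object_means(image, labels):
    """Mean intensity of each labelled object (labels 1..max)."""
    return np.array([image[labels == i].mean()
                     for i in range(1, labels.max() + 1)])

means_gt = per_object_means(ground_truth, labels)
means_restored = per_object_means(restored, labels)

pearson = np.corrcoef(means_gt, means_restored)[0, 1]
median_rel_err = np.median(np.abs(means_restored - means_gt) / means_gt)
print(f"per-cell intensity correlation: {pearson:.3f}, "
      f"median relative error: {median_rel_err:.3%}")
```

A high correlation alone is not sufficient; a systematic bias in the relative error would still invalidate stoichiometry measurements, which is why both statistics are reported.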
Fig. 3. Prospective imaging with AI-enhanced live-cell microscopy. Shown here are examples of different acquisition frequencies and illumination intensities in a live-cell microscopy imaging experiment that is used to observe a particular key event. The gradient bar represents the estimated sample health damage during the acquisition. The length of the acquisition steps represents high (long blue bars) and low (short green bars) illumination intensities. (A) Uniform fast time-frequency with high illumination intensities allows for high spatial and temporal resolution at the expense of drastically damaging the sample and observing a key event under unhealthy conditions. (B) Uniform fast time-frequency with low illumination intensities allows for a gentle acquisition with high temporal resolution but with low SNR or suboptimal spatial resolution. Using the deep learning approaches shown in Fig. 2, the image quality could be improved at the risk of generating structural artefacts. (C) Uniform slow time-frequency with high illumination intensities allows for high spatial resolution in a gentler manner, but critical information about the event of interest might be missed. A deep learning-based temporal interpolator could partly recover the information obtained from the setup in A. (D) Non-uniform event-driven acquisition in which the high-intensity illumination is triggered by the automatic identification of specific hallmarks in the field of view. This approach allows for a gentler adaptive sampling with good structural and temporal resolution, but can be biased towards the data used to train or codify event identification. (E) A combined optimisation of the above approaches that balances the health state of the imaged sample and the quality of the information at specific time points. The system uses image contrast enhancement or temporal interpolation to improve image quality during non-event acquisition, allowing a gentler acquisition. When an event of interest is anticipated, it automatically speeds up the acquisition and decides on appropriate illumination intensities. This preserves the information needed about the event of interest without drastically increasing the induced phototoxicity on the sample.

Biological image data exhibit considerable variability from factors such as sample physiology, protocols, instrumentation and even individual researchers. Therefore, establishing accurate 'ground truth' data is critical (Laine et al., 2021; Maška et al., 2023), given that deep learning model training depends heavily on input data quality.
This encompasses factors ranging from the number of training images to their relevance for the intended analytical task. For example, defining the ideal sampling frequency to enable precise cell tracking requires determining the right balance between data acquisition and model performance. Although larger training datasets are thought to enhance model accuracy, strategies to effectively combine diverse datasets while retaining the specificity of individual experimental conditions remain to be developed. Publicly available annotated datasets are expanding (Caicedo et al., 2019; Ouyang et al., 2019; Maška et al., 2023), along with pre-trained models that facilitate transfer learning and fine-tuning, such as the BioImage Model Zoo (Ouyang et al., 2022 preprint) and MONAI (Cardoso et al., 2022). However, best practices for assembling suitable training data and executing productive transfer learning must still be established, considering criteria such as data quality, image traits and analytical goals. Given that live-cell images are highly redundant, such optimization could maximize information extraction while minimizing photodamage during acquisition.

Unsupervised deep learning approaches learn and match data distributions even in highly heterogeneous or complex scenarios without the need for human descriptions or annotations. Thus, advancing generative models and unsupervised and/or self-supervised approaches that can effectively learn from unpaired data alone can provide flexibility when paired datasets are difficult to obtain experimentally (Box 2) and so contribute to unbiased observations. Moreover, such methods could be exploited to identify events that deviate from the general distribution, i.e. to discover new biological patterns (Pinkard and Waller, 2022).

Life scientists have extensive expertise in determining optimal parameters, including sampling frequencies, resolution and fields of view, for microscopy experiments. Although these hand-tuned parameters might sometimes be suboptimal for subsequent computational analysis and quantification, they currently provide our best reference for what constitutes a 'high-quality' image and for evaluating when obtained quantifications are accurate. One promising direction is to incorporate user knowledge and experience more directly into the image processing loop to help guide model performance towards more biologically relevant outputs, specific to each experiment. Recent advances, such as creating analytical representations of sparse or raw user inputs to generate priors, offer routes to achieve this. Priors are probability distributions that encode assumptions about the data to be analysed – for example, indicating the location of the object to segment with a model. These priors constrain the solutions by reducing the space of possibilities. This general approach has already been proposed for segmenting natural images, as in the segment anything model (SAM) (Kirillov et al., 2023). Overall, incorporating techniques to integrate human feedback as priors into the deep learning pipeline (i.e. the scientist-in-the-loop) represents an important step towards bringing AI-enhanced microscopy closer to matching human experience and intuition.

Conclusion
Fluorescence microscopy has become an indispensable tool for gaining unparalleled insights into biomolecular dynamics in cell biology.
However, phototoxicity remains a major impediment that necessitates both deeper mechanistic understanding and new imaging techniques to mitigate its effects. Although emerging synergies between microscopy hardware innovation and computational imaging show promise, standardised methodologies to comprehensively assess photodamage are still lacking. Recent advances in deep learning have made progress by enhancing information extraction from low-light or accelerated acquisitions, thereby reducing sample phototoxicity. However, more robust validation strategies are still required to ensure biological fidelity.

To ensure the success of deep learning-enhanced microscopy, it is crucial to validate it through quantifiable image properties and sample physiology metrics. It is important to benchmark key image characteristics, such as SNR, resolution limits and molecular content accuracy, against phototoxicity levels. At the same time, it is essential to establish sensitive biological measures that can detect even the slightest deviations from expected cellular behaviour caused by light exposure. Ideally, photodamage assessments should provide actionable and quantitative feedback on imaging protocols, enabling microscopists to optimize the balance between data quality and sample health.

There is a significant opportunity to create universal metrics for photodamage that account for the incremental effects of light on living samples. Non-invasive techniques, such as label-free transmitted-light imaging, can be used to monitor gradual changes in morphology, metabolism or motility. Additionally, identifying molecular biomarkers of photostress that are accessible through gentle imaging could have a profound impact. It is crucial to have a quantitative damage-reporting system that can detect early warnings, rather than only overt cytotoxicity, and allow for real-time optimization during live acquisition. By incorporating such quantitative damage assessments into intelligent automated analysis workflows, microscopes can dynamically optimise imaging conditions for each specimen.

Realising this will require converging advances across several domains: (1) improving biological knowledge of photodamage mechanisms, (2) advancing microscope hardware designs, (3) creating new computational imaging techniques, such as deep learning, and (4) accurately interpreting model outputs. A remaining challenge is that deep learning model training requires extensive paired datasets that sufficiently encapsulate the inherent biological variability. Unsupervised learning alternatives provide flexibility but might compromise accuracy compared with supervised techniques. Incorporating biological expertise through techniques such as, firstly, priors, which are probability distributions that encode assumptions to constrain solutions, and, secondly, prompts, which provide contextual guidance for generative models, appears promising for guiding model training. Additionally, the field needs empirically driven strategies to optimise model training and validation protocols.

As AI continues to improve imaging capabilities, it is important to remember that the ultimate goal is not just to recover data, but to uncover biological truths. To achieve this, we must prioritise minimising photodamage over pushing technical limits and over relying on computational fixes. Simply relying on technology is not enough; instead, the focus should be on maximising information with minimum invasiveness.
We must approach innovation with the mindset that it should serve to observe life with minimal perturbation. Therefore, the principles of gentle acquisition and relevant observation of living systems should be the driving force behind future innovations.

Competing interests
The authors declare no competing or financial interests.

Funding
Our work in this area is supported by the Gulbenkian Foundation (Fundação Calouste Gulbenkian), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 101001332 to R.H.) and the European Union through the Horizon Europe program (AI4LIFE project with grant agreement 101057970-AI4LIFE, and RT-SuperES project with grant agreement 101099654-RT-SuperES to R.H.). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. Our work was also supported by a European Molecular Biology Organization (EMBO) Installation Grant (EMBO-2020-IG-4734 to R.H.), an EMBO Postdoctoral Fellowship (EMBO ALTF 174-2022 to E.G.M.), the Chan Zuckerberg Initiative Visual Proteomics Grant (vpi-0000000044, doi:10.37921/743590vtudfp, to R.H.) and the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundations (Chan Zuckerberg Initiative Napari Plugin Foundations Grant Cycle 2, NP2-0000000085, granted to R.H.). R.H. also acknowledges the support of the LS4FUTURE Associated Laboratory (LA/P/0087/2020). This study was supported by the Academy of Finland (338537 to G.J.), the Sigrid Juselius Foundation (to G.J.), the Cancer Society of Finland (Syöpäjärjestöt; to G.J.) and the Solutions for Health strategic funding to Åbo Akademi University (to G.J.). This research was supported by the InFLAMES Flagship Programme of the Academy of Finland (decision number: 337531). Open Access funding provided by University College London. Deposited in PMC for immediate release.

References
Alam, S. R., Wallrabe, H., Christopher, K. G., Siller, K. H. and Periasamy, A. (2022). Characterization of mitochondrial dysfunction due to laser damage by 2-photon FLIM microscopy. Sci. Rep. 12, 11938. doi:10.1038/s41598-022-15639-z
Alghamdi, R. A., Exposito-Rodriguez, M., Mullineaux, P. M., Brooke, G. N. and Laissue, P. P. (2021). Assessing phototoxicity in a mammalian cell line: how low levels of blue light affect motility in PC3 cells. Front. Cell Dev. Biol. 9, 738786.
Alvelid, J., Damenti, M., Sgattoni, C. and Testa, I. (2022). Event-triggered STED imaging. Nat. Methods 19, 1268-1275. doi:10.1038/s41592-022-01588-y
André, O., Ahnlide, J. K., Norlin, N., Swaminathan, V. and Nordenfelt, P. (2023). Data-driven microscopy allows for automated context-specific acquisition of high-fidelity image data. Cell Reports Methods 3, 100419. doi:10.1016/j.crmeth.2023.100419
Belthangady, C. and Royer, L. A. (2019). Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215-1225. doi:10.1038/s41592-019-0458-z
Betzig, E., Patterson, G. H., Sougrat, R., Lindwasser, O. W., Olenych, S., Bonifacino, J. S., Davidson, M. W., Lippincott-Schwartz, J. and Hess, H. F. (2006). Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642-1645. doi:10.1126/science.1127344
Bilodeau, A., Delmas, C. V. L., Parent, M., Koninck, P. D., Durand, A. and Lavoie-Cardinal, F. (2022). Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations. Nat. Mach. Intell. 4, 455-466. doi:10.1038/s42256-022-00472-w
Blom, H. and Brismar, H. (2014). STED microscopy: increased resolution for medical research? J. Intern. Med. 276, 560-578. doi:10.1111/joim.12278
Bouchard, C., Wiesner, T., Deschênes, A., Bilodeau, A., Turcotte, B., Gagné, C. and Lavoie-Cardinal, F. (2023). Resolution enhancement with a task-assisted GAN to guide optical nanoscopy image analysis and acquisition. Nat. Mach. Intell. 5, 830-844. doi:10.1038/s42256-023-00689-3
Caicedo, J. C., Cooper, S., Heigwer, F., Warchal, S., Qiu, P., Molnar, C., Vasilevich, A. S., Barry, J. D., Bansal, H. S., Kraus, O. et al. (2017). Data-analysis strategies for image-based cell profiling. Nat. Methods 14, 849-863. doi:10.1038/nmeth.4397
Caicedo, J. C., Goodman, A., Karhohs, K. W., Cimini, B. A., Ackerman, J., Haghighi, M., Heng, C., Becker, T., Doan, M., McQuin, C. et al. (2019). Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat. Methods 16, 1247-1253. doi:10.1038/s41592-019-0612-7
Cardoso, M. J., Li, W., Brown, R., Ma, N., Kerfoot, E., Wang, Y., Murrey, B., Myronenko, A., Zhao, C., Yang, D. et al. (2022). MONAI: an open-source framework for deep learning in healthcare.
Chandrasekaran, S. N., Ceulemans, H., Boyd, J. D. and Carpenter, A. E. (2021). Image-based profiling for drug discovery: due for a machine-learning upgrade? Nat. Rev. Drug Discov. 20, 145-159. doi:10.1038/s41573-020-00117-w
Chen, B.-C., Legant, W. R., Wang, K., Shao, L., Milkie, D. E., Davidson, M. W., Janetopoulos, C., Wu, X. S., Hammer, J. A., Liu, Z. et al. (2014). Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998. doi:10.1126/science.1257998
Chen, S.-Y., Bestvater, F., Schaufler, W., Heintzmann, R. and Cremer, C. (2018). Patterned illumination single molecule localization microscopy (piSMLM): user defined blinking regions of interest. Opt. Express 26, 30009-30020. doi:10.1364/OE.26.030009
Chen, J., Sasaki, H., Lai, H., Su, Y., Liu, J., Wu, Y., Zhovmer, A., Combs, C. A., Rey-Suarez, I., Chang, H.-Y. et al. (2021). Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678-687. doi:10.1038/s41592-021-01155-x
Chiron, L., Bec, M. L., Cordier, C., Pouzet, S., Milunov, D., Banderas, A., Meglio, J.-M. D., Sorre, B. and Hersen, P. (2022). CyberSco.Py: an open-source software for event-based, conditional microscopy. Sci. Rep. 12, 11579. doi:10.1038/s41598-022-15207-5
Christiansen, E. M., Yang, S. J., Ando, D. M., Javaherian, A., Skibinski, G., Lipnick, S., Mount, E., O'Neil, A., Shah, K., Lee, A. K. et al. (2018). In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792-803.e19. doi:10.1016/j.cell.2018.03.040
Culley, S., Albrecht, D., Jacobs, C., Pereira, P. M., Leterrier, C., Mercer, J. and Henriques, R. (2018). Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263-266. doi:10.1038/nmeth.4605
de Boer, P., Hoogenboom, J. P. and Giepmans, B. N. G. (2015). Correlated light and electron microscopy: ultrastructure lights up! Nat. Methods 12, 503-513. doi:10.1038/nmeth.3400
Demchenko, A. P. (2020). Photobleaching of organic fluorophores: quantitative characterization, mechanisms, protection. Methods Appl. Fluoresc. 8, 022001. doi:10.1088/2050-6120/ab7365
Dertinger, T., Colyer, R., Iyer, G., Weiss, S. and Enderlein, J. (2009). Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI). Proc. Natl Acad. Sci. USA 106, 22287-22292. doi:10.1073/pnas.0907866106
Dodt, H.-U., Leischner, U., Schierloh, A., Jährling, N., Mauch, C. P., Deininger, K., Deussing, J. M., Eder, M., Zieglgänsberger, W. and Becker, K. (2007). Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain. Nat. Methods 4, 331-336. doi:10.1038/nmeth1036
Doron, M., Moutakanni, T., Chen, Z. S., Moshkov, N., Caron, M., Touvron, H., Bojanowski, P., Pernice, W. M. and Caicedo, J. C. (2023). Unbiased single-cell morphology with self-supervised vision transformers. bioRxiv. doi:10.1101/2023.06.16.545359
Ebrahimi, V., Stephan, T., Kim, J., Carravilla, P., Eggeling, C., Jakobs, S. and Han, K. Y. (2023). Deep learning enables fast, gentle STED microscopy. Commun. Biol. 6, 674. doi:10.1038/s42003-023-05054-z
Eichler, M., Lavi, R., Shainberg, A. and Lubart, R. (2005). Flavins are source of visible-light-induced free radical formation in cells. Lasers Surg. Med. 37, 314-319. doi:10.1002/lsm.20239
Fang, L., Monroe, F., Novak, S. W., Kirk, L., Schiavon, C. R., Yu, S. B., Zhang, T., Wu, M., Kastner, K., Latif, A. A. et al. (2021). Deep learning-based point-scanning super-resolution imaging. Nat. Methods 18, 406-416. doi:10.1038/s41592-021-01080-z
Fox, Z. R., Fletcher, S., Fraisse, A., Aditya, C., Sosa-Carrillo, S., Petit, J., Gilles, S., Bertaux, F., Ruess, J. and Batt, G. (2022). Enabling reactive microscopy with MicroMator. Nat. Commun. 13, 2199. doi:10.1038/s41467-022-29888-z
Gómez-de-Mariscal, E., García-López-de-Haro, C., Ouyang, W., Donati, L., Lundberg, E., Unser, M., Muñoz-Barrutia, A. and Sage, D. (2021). DeepImageJ: a user-friendly environment to run deep learning models in ImageJ. Nat. Methods 18, 1192-1195. doi:10.1038/s41592-021-01262-9
Grotjohann, T., Testa, I., Leutenegger, M., Bock, H., Urban, N. T., Lavoie-Cardinal, F., Willig, K. I., Eggeling, C., Jakobs, S. and Hell, S. W. (2011). Diffraction-unlimited all-optical imaging and writing with a photochromic GFP. Nature 478, 204-208. doi:10.1038/nature10497
Guo, M., Li, Y., Su, Y., Lambert, T., Nogare, D. D., Moyle, M. W., Duncan, L. H., Ikegami, R., Santella, A., Rey-Suarez, I. et al. (2020). Rapid image deconvolution and multiview fusion for optical microscopy. Nat. Biotechnol. 38, 1337-1346. doi:10.1038/s41587-020-0560-x
Gustafsson, M. G. L. (2000). Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. Short communication. J. Microsc. 198, 82-87. doi:10.1046/j.1365-2818.2000.00710.x
Gustafsson, N., Culley, S., Ashdown, G., Owen, D. M., Pereira, P. M. and Henriques, R. (2016). Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations. Nat. Commun. 7, 12471. doi:10.1038/ncomms12471
Harada, T., Hata, S., Fukuyama, M., Chinen, T. and Kitagawa, D. (2022). An antioxidant screen identifies ascorbic acid for prevention of light-induced mitotic prolongation in live cell imaging. Commun. Biol. 6, 1107. doi:10.1101/2022.06.20.496814
Heimstädt, O. (1911). Das Fluoreszenzmikroskop. Z. Wiss. Mikrosk. 28, 330-337.
Heintzmann, R. and Huser, T. (2017). Super-resolution structured illumination microscopy. Chem. Rev. 117, 13890-13908. doi:10.1021/acs.chemrev.7b00218
Hell, S. W. and Wichmann, J. (1994). Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett. 19, 780. doi:10.1364/OL.19.000780
Hess, S. T., Girirajan, T. P. K. and Mason, M. D. (2006). Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys. J. 91, 4258-4272. doi:10.1529/biophysj.106.091116
Hockberger, P. E., Skimina, T. A., Centonze, V. E., Lavin, C., Chu, S., Dadras, S., Reddy, J. K. and White, J. G. (1999). Activation of flavin-containing oxidases underlies light-induced production of H2O2 in mammalian cells. Proc. Natl Acad. Sci. USA 96, 6255-6260. doi:10.1073/pnas.96.11.6255
Hofmann, M., Eggeling, C., Jakobs, S. and Hell, S. W. (2005). Breaking the diffraction barrier in fluorescence microscopy at low light intensities by using reversibly photoswitchable proteins. Proc. Natl Acad. Sci. USA 102, 17565-17569. doi:10.1073/pnas.0506010102
Hollandi, R., Szkalisity, A., Toth, T., Tasnadi, E., Molnar, C., Mathe, B., Grexa, I., Molnar, J., Balind, A., Gorbe, M. et al. (2020). nucleAIzer: a parameter-free deep learning framework for nucleus segmentation using image style transfer. Cell Syst. 10, 453-458.e6. doi:10.1016/j.cels.2020.04.003
Huff, J. (2015). The Airyscan detector from Zeiss: confocal imaging with improved signal-to-noise ratio and super-resolution. Nat. Methods 12, i-ii. doi:10.1038/nmeth.f.388
Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J. and Stelzer, E. H. K. (2004). Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007-1009. doi:10.1126/science.1100035
Hutson, M. (2018). Artificial intelligence faces reproducibility crisis. Science 359, 725-726. doi:10.1126/science.359.6377.725
Icha, J., Weber, M., Waters, J. C. and Norden, C. (2017). Phototoxicity in live fluorescence microscopy, and how to avoid it. BioEssays 39, 1700003. doi:10.1002/bies.201700003
Jacquemet, G., Carisey, A. F., Hamidi, H., Henriques, R. and Leterrier, C. (2020). The cell biologist’s guide to super-resolution microscopy. J. Cell Sci. 133, jcs240713. doi:10.1242/jcs.240713
Jiang, Y., Sha, H., Liu, S., Qin, P. and Zhang, Y. (2023). AutoUnmix: an autoencoder-based spectral unmixing method for multi-color fluorescence microscopy imaging. bioRxiv 2023.05.30.542836. doi:10.1101/2023.05.30.542836
Jin, L., Liu, B., Zhao, F., Hahn, S., Dong, B., Song, R., Elston, T. C., Xu, Y. and Hahn, K. M. (2020). Deep learning enables structured illumination microscopy with low light levels and enhanced speed. Nat. Commun. 11, 1934. doi:10.1038/s41467-020-15784-x
Kesari, K. K., Dhasmana, A., Shandilya, S., Prabhakar, N., Shaukat, A., Dou, J., Rosenholm, J. M., Vuorinen, T. and Ruokolainen, J. (2020). Plant-derived natural biomolecule picein attenuates menadione induced oxidative stress on neuroblastoma cell mitochondria. Antioxidants 9, 552. doi:10.3390/antiox9060552
Kiepas, A., Voorand, E., Mubaid, F., Siegel, P. M. and Brown, C. M. (2020). Optimizing live-cell fluorescence imaging conditions to minimize phototoxicity. J. Cell Sci. 133, jcs242834. doi:10.1242/jcs.242834
Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y. et al. (2023). Segment Anything.
Klar, T. A. and Hell, S. W. (1999). Subdiffraction resolution in far-field fluorescence microscopy. Opt. Lett. 24, 954-956. doi:10.1364/OL.24.000954
Krull, A., Buchholz, T.-O. and Jug, F. (2019). Noise2Void - learning denoising from single noisy images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Krull, A., Vičar, T., Prakash, M., Lalit, M. and Jug, F. (2020). Probabilistic Noise2Void: unsupervised content-aware denoising. Front. Comput. Sci. 2, 5. doi:10.3389/fcomp.2020.00005
Kunkel, M., Schildknecht, S., Boldt, K., Zeyffert, L., Schleheck, D., Leist, M. and Polarz, S. (2018). Increasing the resistance of living cells against oxidative stress by nonnatural surfactants as membrane guards. ACS Appl. Mater. Interfaces 10, 23638-23646. doi:10.1021/acsami.8b07032
Kwakwa, K., Savell, A., Davies, T., Munro, I., Parrinello, S., Purbhoo, M. A., Dunsby, C., Neil, M. A. A. and French, P. M. W. (2016). easySTORM: a robust, lower-cost approach to localisation and TIRF microscopy. J. Biophotonics 9, 948-957. doi:10.1002/jbio.201500324
Laine, R. F., Arganda-Carreras, I., Henriques, R. and Jacquemet, G. (2021). Avoiding a replication crisis in deep-learning-based bioimage analysis. Nat. Methods 18, 1136-1144. doi:10.1038/s41592-021-01284-3
Laine, R. F., Heil, H. S., Coelho, S., Nixon-Abell, J., Jimenez, A., Wiesner, T., Martínez, D., Galgani, T., Régnier, L., Stubb, A. et al. (2023). High-fidelity 3D live-cell nanoscopy through data-driven enhanced super-resolution radial fluctuation. Nat. Methods 20, 1949-1956. doi:10.1038/s41592-023-02057-w
Laissue, P. P., Alghamdi, R. A., Tomancak, P., Reynaud, E. G. and Shroff, H. (2017). Assessing phototoxicity in live fluorescence imaging. Nat. Methods 14, 657-661. doi:10.1038/nmeth.4344
Lambert, T. and Waters, J. (2023). Towards effective adoption of novel image analysis methods. Nat. Methods 20, 971-972. doi:10.1038/s41592-023-01910-2
Lehmann, H. (1913). Das Lumineszenz-Mikroskop: seine Grundlagen und seine Anwendungen.
Lelek, M., Gyparaki, M. T., Beliu, G., Schueder, F., Griffié, J., Manley, S., Jungmann, R., Sauer, M., Lakadamyali, M. and Zimmer, C. (2021). Single-molecule localization microscopy. Nat. Rev. Methods Primers 1, 39. doi:10.1038/s43586-021-00038-x
Li, D., Shao, L., Chen, B.-C., Zhang, X., Zhang, M., Moses, B., Milkie, D. E., Beach, J. R., Hammer, J. A., Pasham, M. et al. (2015). Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics. Science 349, aab3500. doi:10.1126/science.aab3500
Li, H., Matsunaga, D., Matsui, T. S., Aosaki, H., Kinoshita, G., Inoue, K., Doostmohammadi, A. and Deguchi, S. (2022a). Wrinkle force microscopy: a machine learning based approach to predict cell mechanics from images. Commun. Biol. 5, 361. doi:10.1038/s42003-022-03288-x
Li, Y., Su, Y., Guo, M., Han, X., Liu, J., Vishwasrao, H. D., Li, X., Christensen, R., Sengupta, T., Moyle, M. W. et al. (2022b). Incorporating the image formation process into deep learning improves network performance. Nat. Methods 19, 1427-1437. doi:10.1038/s41592-022-01652-7
Li, X., Wu, Y., Su, Y., Rey-Suarez, I., Matthaeus, C., Updegrove, T. B., Wei, Z., Zhang, L., Sasaki, H., Li, Y. et al. (2023). Three-dimensional structured illumination microscopy with enhanced axial resolution. Nat. Biotechnol. 41, 1307-1319. doi:10.1038/s41587-022-01651-1
Ludvikova, L., Simon, E., Deygas, M., Panier, T., Plamont, M.-A., Ollion, J., Tebo, A., Piel, M., Jullien, L., Robert, L. et al. (2023). Near-infrared co-illumination of fluorescent proteins reduces photobleaching and phototoxicity. Nat. Biotechnol. doi:10.1038/s41587-023-01893-7
Macke, J. H., Ries, J., Turaga, S. C., Speiser, A., Müller, L.-R., Hoess, P., Matti, U., Obara, C. J., Legant, W. R. and Kreshuk, A. (2021). Deep learning enables fast and dense single-molecule localization with high accuracy. Nat. Methods 18, 1082-1090. doi:10.1038/s41592-021-01236-x
Mahecic, D., Stepp, W. L., Zhang, C., Griffié, J., Weigert, M. and Manley, S. (2022). Event-driven acquisition for content-enriched microscopy. Nat. Methods 19, 1262-1267. doi:10.1038/s41592-022-01589-x
Maioli, V., Boniface, A., Mahou, P., Ortas, J. F., Abdeladim, L., Beaurepaire, E. and Supatto, W. (2020). Fast in vivo multiphoton light-sheet microscopy with optimal pulse frequency. Biomed. Opt. Express 11, 6012-6026. doi:10.1364/BOE.400113
Maška, M., Ulman, V., Delgado-Rodriguez, P., Gómez-de-Mariscal, E., Nečasová, T., Peña, F. A. G., Ren, T. I., Meyerowitz, E. M., Scherr, T., Löffler, K. et al. (2023). The cell tracking challenge: 10 years of objective benchmarking. Nat. Methods 20, 1010-1020. doi:10.1038/s41592-023-01879-y
McAleer, S., Fast, A., Xue, Y., Seiler, M. J., Tang, W. C., Balu, M., Baldi, P. and Browne, A. W. (2021). Deep learning-assisted multiphoton microscopy to reduce light exposure and expedite imaging in tissues with high and low light sensitivity. Transl. Vis. Sci. Technol. 10, 30. doi:10.1167/tvst.10.12.30
McDonald, A., Harris, J., MacMillan, D., Dempster, J. and McConnell, G. (2012). Light-induced Ca2+ transients observed in widefield epi-fluorescence microscopy of excitable cells. Biomed. Opt. Express 3, 1266-1273. doi:10.1364/BOE.3.001266
McRae, T. D., Oleksyn, D., Miller, J. and Gao, Y. R. (2019). Robust blind spectral unmixing for fluorescence microscopy using unsupervised learning. PLoS ONE 14, e0225410. doi:10.1371/journal.pone.0225410
Meijering, E. (2020). A bird’s-eye view of deep learning in bioimage analysis. Comput. Struct. Biotechnol. J. 18, 2312-2325. doi:10.1016/j.csbj.2020.08.003
Melanthota, S. K., Gopal, D., Chakrabarti, S., Kashyap, A. A., Radhakrishnan, R. and Mazumder, N. (2022). Deep learning-based image processing in optical microscopy. Biophys. Rev. 14, 463-481. doi:10.1007/s12551-022-00949-3
Moen, E., Bannon, D., Kudo, T., Graf, W., Covert, M. and Valen, D. V. (2019). Deep learning for cellular image analysis. Nat. Methods 16, 1233-1246. doi:10.1038/s41592-019-0403-1
Mubaid, F. and Brown, C. M. (2017). Less is more: longer exposure times with low light intensity is less photo-toxic. Microscopy Today 25, 26-35. doi:10.1017/S1551929517000980
Nehme, E., Weiss, L. E., Michaeli, T. and Shechtman, Y. (2018). Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458. doi:10.1364/OPTICA.5.000458
Oh, H.-J. and Jeong, W.-K. (2023). DiffMix: diffusion model-based data synthesis for nuclei segmentation and classification in imbalanced pathology image datasets.
Ounkomol, C., Seshamani, S., Maleckar, M. M., Collman, F. and Johnson, G. R. (2018). Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat. Methods 15, 917-920. doi:10.1038/s41592-018-0111-2
Ouyang, W., Aristov, A., Lelek, M., Hao, X. and Zimmer, C. (2018). Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460-468. doi:10.1038/nbt.4106
Ouyang, W., Winsnes, C. F., Hjelmare, M., Cesnik, A. J., Åkesson, L., Xu, H., Sullivan, D. P., Dai, S., Lan, J., Jinmo, P. et al. (2019). Analysis of the Human Protein Atlas Image Classification competition. Nat. Methods 16, 1254-1261. doi:10.1038/s41592-019-0658-6
Ouyang, W., Beuttenmueller, F., Gómez-de-Mariscal, E., Pape, C., Burke, T., García-López-de-Haro, C., Russell, C., Moya-Sans, L., De-La-Torre-Gutiérrez, C., Schmidt, D. et al. (2022). BioImage Model Zoo: a community-driven resource for accessible deep learning in bioimage analysis. bioRxiv 2022.06.07.495102. doi:10.1101/2022.06.07.495102
Park, H., Na, M., Kim, B., Park, S., Kim, K. H., Chang, S. and Ye, J. C. (2022). Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy. Nat. Commun. 13, 3297. doi:10.1038/s41467-022-30949-6
Pinkard, H. and Waller, L. (2022). Microscopes are coming for your job. Nat. Methods 19, 1175-1176. doi:10.1038/s41592-022-01566-4
Priessner, M., Gaboriau, D. C. A., Sheridan, A., Lenn, T., Chubb, J. R., Manor, U., Vilar, R. and Laine, R. F. (2021). Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. bioRxiv. doi:10.1101/2021.11.02.466664
Pylvänäinen, J. W., Gómez-de-Mariscal, E., Henriques, R. and Jacquemet, G. (2023). Live-cell imaging in the deep learning era. Curr. Opin. Cell Biol. 85, 102271. doi:10.1016/j.ceb.2023.102271
Qiao, C., Li, D., Guo, Y., Liu, C., Jiang, T., Dai, Q. and Li, D. (2021). Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194-202. doi:10.1038/s41592-020-01048-5
Qiao, C., Li, D., Liu, Y., Zhang, S., Liu, K., Liu, C., Guo, Y., Jiang, T., Fang, C., Li, N. et al. (2022). Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes. Nat. Biotechnol. doi:10.1038/s41587-022-01471-3
Ratz, M., Testa, I., Hell, S. W. and Jakobs, S. (2015). CRISPR/Cas9-mediated endogenous protein tagging for RESOLFT super-resolution microscopy of living human cells. Sci. Rep. 5, 9592. doi:10.1038/srep09592
Reiche, M. A., Aaron, J. S., Boehm, U., DeSantis, M. C., Hobson, C. M., Khuon, S., Lee, R. M. and Chew, T.-L. (2022). When light meets biology – how the specimen affects quantitative microscopy. J. Cell Sci. 135, jcs259656. doi:10.1242/jcs.259656
Reichert, K. (1911). Das Fluoreszenzmikroskop. Physik. Zeits. 12, 1010.
Reynaud, E. G., Kržič, U., Greger, K. and Stelzer, E. H. K. (2008). Light sheet-based fluorescence microscopy: more dimensions, more photons, and less photodamage. HFSP J. 2, 266-275. doi:10.2976/1.2974980
Richmond, D., Jost, A. P.-T., Lambert, T., Waters, J. and Elliott, H. (2017). DeadNet: identifying phototoxicity from label-free microscopy images of cells using deep ConvNets. doi:10.48550/arXiv.1701.06109
Sage, D., Pham, T.-A., Babcock, H., Lukes, T., Pengo, T., Chao, J., Velmurugan, R., Herbert, A., Agrawal, A., Colabrese, S. et al. (2019). Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software. Nat. Methods 16, 387-395. doi:10.1038/s41592-019-0364-4
Saguy, A., Alalouf, O., Opatovski, N., Jang, S., Heilemann, M. and Shechtman, Y. (2023). DBlink: dynamic localization microscopy in super spatiotemporal resolution via deep learning. Nat. Methods 20, 1939-1948. doi:10.1038/s41592-023-01966-0
Saxena, M., Eluru, G. and Gorthi, S. S. (2015). Structured illumination microscopy. Adv. Opt. Photon. 7, 241-275. doi:10.1364/AOP.7.000241
Scherf, N. and Huisken, J. (2015). The smart and gentle microscope. Nat. Biotechnol. 33, 815-818. doi:10.1038/nbt.3310
Schermelleh, L., Ferrand, A., Huser, T., Eggeling, C., Sauer, M., Biehlmaier, O. and Drummen, G. P. C. (2019). Super-resolution microscopy demystified. Nat. Cell Biol. 21, 72-84. doi:10.1038/s41556-018-0251-8
Spahn, C., Gómez-de-Mariscal, E., Laine, R. F., Pereira, P. M., von Chamier, L., Conduit, M., Pinho, M. G., Jacquemet, G., Holden, S., Heilemann, M. et al. (2022). DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun. Biol. 5, 688. doi:10.1038/s42003-022-03634-z
Stevenson, D. J., Lake, T. K., Agate, B., Garcés-Chávez, V., Dholakia, K. and Gunn-Moore, F. (2006). Optically guided neuronal growth at near infrared wavelengths. Opt. Express 14, 9786-9793. doi:10.1364/OE.14.009786
Suzuki, H., May-Maw-Thet, Hatta-Ohashi, Y., Akiyoshi, R. and Hayashi, T. (2016). Bioluminescence microscopy: design and applications. In Luminescence - An Outlook on the Phenomena and Their Applications. IntechOpen. doi:10.5772/65048
Tian, L., Hunt, B., Bell, M. A. L., Yi, J., Smith, J. T., Ochoa, M., Intes, X. and Durr, N. J. (2021). Deep learning in biomedical optics. Lasers Surg. Med. 53, 748-775. doi:10.1002/lsm.23414
Tinevez, J.-Y., Dragavon, J., Baba-Aissa, L., Roux, P., Perret, E., Canivet, A., Galy, V. and Shorte, S. (2012). A quantitative method for measuring phototoxicity of a live cell imaging microscope. In Methods in Enzymology, pp. 291-309. Elsevier.
Tosheva, K. L., Yuan, Y., Pereira, P. M., Culley, S. and Henriques, R. (2020). Between life and death: strategies to reduce phototoxicity in super-resolution microscopy. J. Phys. D Appl. Phys. 53, 163001. doi:10.1088/1361-6463/ab6b95
Verveer, P. J., Swoger, J., Pampaloni, F., Greger, K., Marcello, M. and Stelzer, E. H. K. (2007). High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy. Nat. Methods 4, 311-313. doi:10.1038/nmeth1017
von Chamier, L., Laine, R. F., Jukkala, J., Spahn, C., Krentzel, D., Nehme, E., Lerche, M., Hernández-Pérez, S., Mattila, P. K., Karinou, E. et al. (2021). Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat. Commun. 12, 2276. doi:10.1038/s41467-021-22518-0
Wagner, N., Beuttenmueller, F., Norlin, N., Gierten, J., Boffi, J. C., Wittbrodt, J., Weigert, M., Hufnagel, L., Prevedel, R. and Kreshuk, A. (2021). Deep learning-enhanced light-field imaging with continuous validation. Nat. Methods 18, 557-563. doi:10.1038/s41592-021-01136-0
Wäldchen, S., Lehmann, J., Klein, T., van de Linde, S. and Sauer, M. (2015). Light-induced cell damage in live-cell super-resolution microscopy. Sci. Rep. 5, 15348. doi:10.1038/srep15348
Wang, H., Rivenson, Y., Jin, Y., Wei, Z., Gao, R., Günaydın, H., Bentolila, L. A., Kural, C. and Ozcan, A. (2019). Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat. Methods 16, 103-110. doi:10.1038/s41592-018-0239-0
Weigert, M., Schmidt, U., Boothe, T., Müller, A., Dibrov, A., Jain, A., Wilhelm, B., Schmidt, D., Broaddus, C., Culley, S. et al. (2018). Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090-1097. doi:10.1038/s41592-018-0216-7
Wildanger, D., Rittweger, E., Kastrup, L. and Hell, S. W. (2008). STED microscopy with a supercontinuum laser source. Opt. Express 16, 9614-9621. doi:10.1364/OE.16.009614
Wu, P.-H., Gilkes, D. M., Phillip, J. M., Narkar, A., Cheng, T. W.-T., Marchand, J., Lee, M.-H., Li, R. and Wirtz, D. (2020). Single-cell morphology encodes metastatic potential. Sci. Adv. 6, eaaw6938. doi:10.1126/sciadv.aaw6938
Xu, Y. K. T., Graves, A. R., Coste, G. I., Huganir, R. L., Bergles, D. E., Charles, A. S. and Sulam, J. (2023). Cross-modality supervised image restoration enables nanoscale tracking of synaptic plasticity in living mice. Nat. Methods 20, 935-944. doi:10.1038/s41592-023-01871-6
Xue, M.-Q., Zhu, X.-L., Wang, G. and Xu, Y.-Y. (2022). DULoc: quantitatively unmixing protein subcellular location patterns in immunofluorescence images based on deep learning features. Bioinformatics 38, 827-833. doi:10.1093/bioinformatics/btab730
Zhang, Q., Chen, J., Li, J., Bo, E., Jiang, H., Lu, X., Zhong, L. and Tian, J. (2022a). Deep learning-based single-shot structured illumination microscopy. Opt. Lasers Eng. 155, 107066. doi:10.1016/j.optlaseng.2022.107066
Zhang, X., Dorlhiac, G., Landry, M. P. and Streets, A. (2022b). Phototoxic effects of nonlinear optical microscopy on cell cycle, oxidative states, and gene expression. Sci. Rep. 12, 18796. doi:10.1038/s41598-022-23054-7