Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time consuming, labor intensive, and expensive. Here, we introduce an alternative deep-learning–powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this is a robust and fast-converging approach to generate virtually stained images from bright-field images and, in subsequent downstream analyses, to quantify the properties of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell–derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually stained images to extract quantitative measures about these cell structures. Generating virtually stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell. To make this deep-learning–powered approach readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific virtual-staining and cell-profiling applications.

However, fluorescence cell imaging has significant drawbacks. First, it requires a fluorescence microscope equipped with appropriate filters that match the spectral profiles of the dyes. Besides the complexity of the optical setup, usually only one dye is imaged at each specific wavelength, limiting the combination of dyes and cell structures that can be imaged in a single experiment. Second, the staining of the cell structures is typically achieved by adding chemical fluorescence dyes to a cell sample, which is an invasive (due to the required culture media exchange and dye uptake [4]) and sometimes even toxic process. Third, phototoxicity and photobleaching can also occur while acquiring the fluorescence images, which results in a tradeoff between data quality, time scales available for live-cell imaging (duration and speed), and cell health. Furthermore, for some dyes a cell-permeable form enters a cell and then reacts to form a stable and impermeable reaction product that is transferred to daughter cells; as a consequence, the dye intensity dilutes at every cell division and is eventually lost [5]. Fourth, fluorescence staining techniques are often expensive, time consuming, and labor intensive, as they may require long protocol optimizations (e.g., dye concentration, incubation, and washing times must be optimized for each cell type and dye) [6]. Also, care must be taken when choosing multiple dye partners to avoid spectral bleed-through. All these drawbacks aggravate, or hinder completely, the collection of reliable, long-term longitudinal data on the same population, such as when studying cell behavior or drug uptake over time. Therefore, there is an interest in extracting the same information using cheaper, noninvasive methods [7].
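To give a concrete sense of the cGAN virtual-staining idea described above, here is a minimal PyTorch sketch of a pix2pix-style training objective: a generator maps a bright-field image to virtual stain channels, and a discriminator scores (bright-field, stain) pairs. The layer sizes, channel counts, and `l1_weight` value are illustrative placeholders, not the architecture or hyperparameters used by the authors' software package.

```python
import torch
import torch.nn as nn

# Toy generator: 1 bright-field channel in, 3 virtual "stain" channels out
# (e.g., lipid droplets, cytoplasm, nuclei). Sizes are illustrative only.
generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)

# Conditional discriminator: judges the bright-field/stain pair jointly,
# producing patch-wise real/fake logits.
discriminator = nn.Sequential(
    nn.Conv2d(1 + 3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(bright_field, real_stain, l1_weight=100.0):
    """Adversarial term plus L1 reconstruction term (pix2pix-style)."""
    fake_stain = generator(bright_field)
    logits = discriminator(torch.cat([bright_field, fake_stain], dim=1))
    adv = bce(logits, torch.ones_like(logits))  # try to fool the critic
    return adv + l1_weight * l1(fake_stain, real_stain)

# Dummy batch: 2 bright-field crops and matching fluorescence targets.
bf = torch.rand(2, 1, 64, 64)
stain = torch.rand(2, 3, 64, 64)
loss = generator_loss(bf, stain)
```

In practice this loss would be minimized with respect to the generator parameters while the discriminator is trained on the opposing objective, alternating the two updates each batch.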