Lee, D. D. & Seung, H. S. Learning the parts of objects by non-negative matrix factorization. Nature 401, 788–791 (1999).
Intentionally, this simple model does not include any properties of optics, sophisticated models of light-matter interactions, or spatial components (focus, sample surface, etc.). The aim of the method is to describe an observed object in the best possible way, with minimal assumptions about its nature or features.
where C is the number of channels, w is the number of discrete wavelengths, and \(G_{mn}\) is a measure of discontinuity between pixels m and n. The \({\mathbb {N}}_m\) is the set of pixels whose Euclidean distance to pixel m is less than or equal to \({\mathcal {T}}_{ED}\). We used \({\mathcal {T}}_{ED}=1\), but a larger neighbourhood may improve the convergence speed. The integral in the first part of Eq. (4) is solved numerically; we used the Simpson integration method15 with a discretization of \(||\lambda _i||= 48\).
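For illustration only, a minimal numerical sketch of such a spectral integral in Python, assuming SciPy (1.6 or later for scipy.integrate.simpson) and the 48-point wavelength grid mentioned above; the grid bounds are illustrative, not taken from the paper:

```python
# A minimal sketch, not the authors' code: Simpson integration of a
# spectral integrand over a 48-point discretized wavelength grid.
import numpy as np
from scipy.integrate import simpson

wavelengths = np.linspace(400.0, 750.0, 48)   # ||lambda_i|| = 48 points (bounds assumed)

def spectral_integral(integrand):
    """Numerically integrate a discretized spectral integrand over lambda."""
    return simpson(integrand, x=wavelengths)
```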
An L929 (mouse fibroblast, Sigma-Aldrich, cat. no. 85011425) cell line was grown at low optical density overnight at 37 °C, 5% \({\text {CO}}_2\), and 90% RH. The nutrient solution consisted of DMEM (87.7%) with high glucose (\({>}1 \, {\hbox {g L}}^{-1}\)), fetal bovine serum (10%), antibiotics and antimycotics (1%), L-glutamine (1%), and gentamicin (0.3%; all purchased from Biowest, Nuaillé, France).
The sample has to obey three assumptions: localized gradients, reasonable flatness, and linear response. If these assumptions hold, the obtained results will be in agreement with the physical properties of the medium. Most relatively flat biological samples (e.g., a single layer of cells) fulfil all these criteria. In order to demonstrate the capacity of the method, we applied it to images of unstained live L929 mouse fibroblasts recorded using a video-enhanced bright-field wide-field light microscope in time lapse and with through-focusing. For determination of the best focal position in the z-stack, we used the graylevel local variance23. The effective light spectrum, i.e., the light source spectrum multiplied by the camera filter transparency curves, is shown in Fig. 4b. The original raw image is shown in Fig. 4a and looks greenish due to the prevalence of green in the incoming light spectrum.
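For illustration, a minimal sketch of best-focus selection by graylevel local variance (in the spirit of ref. 23), assuming NumPy/SciPy and a z-stack stored as a 3D array; the window size is an assumption, not the authors' setting:

```python
# A minimal sketch, not the authors' code: pick the z-slice whose
# graylevel local variance (a standard focus measure) is maximal.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=3):
    """Mean of the local graylevel variance over the image."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.mean(mean_sq - mean ** 2)      # E[x^2] - E[x]^2, averaged

def best_focus_index(z_stack):
    """Index of the slice with the highest focus score."""
    return int(np.argmax([local_variance(sl) for sl in z_stack]))
```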
Dahlberg, P. D. et al. A simple approach to spectrally resolved fluorescence and bright field microscopy over select regions of interest. Rev. Sci. Instrum. 87, 113704 (2016).
Investigation of cell structure is hardly imaginable without bright-field microscopy. Numerous modifications, such as depth-wise scanning or videoenhancement, keep this method state-of-the-art. This raises the question of what maximal information can be extracted from ordinary (but well acquired) bright-field images in a model-free way. Here we introduce a method for a physically correct extraction of features for each pixel, where these features resemble a transparency spectrum. The method is compatible with existing ordinary bright-field microscopes and requires only mathematically sophisticated data processing. Unsupervised clustering of the spectra yields reasonable semantic segmentation of unstained living cells without any a priori information about their structures. Despite the lack of reference data (to prove strictly that the proposed feature vectors coincide with transparency), we believe that this method is the right approach to an intracellular (semi)quantitative and qualitative chemical analysis.
The obtained quasi-spectra should not be considered object features but rather features of the imaging process. Due to the model-free nature of the method, the obtained classes reflect the observed data, not the internal structures of the objects. We think that the convenient bridge between the observed, phenomenological spectra and the structure is machine learning, since it benefits from enormously good statistics (\(10^5\)–\(10^6\) samples per image) and compensates for the influence of complicated shapes.
Lonhus, K., Rychtáriková, R., Platonova, G. & Štys, D. Supplementary to Quasi-spectral characterization of intracellular regions in bright-field light microscopy images. Dryad Dataset. https://doi.org/10.5061/dryad.w0vt4b8p6.
The medium is characterized, at each point of its projection onto the camera sensor plane, by an unknown transparency spectrum \(T(x, y, \lambda )\).
Hair, J., Black, W., Babin, B. & Anderson, R. Multivariate Data Analysis (Cengage Learning EMEA, 2018).
For most biologically relevant objects, multi-photon interactions can be neglected13. Thus, a linear response model can be used for the description of the measurement process. The model consists of four entities (Fig. 1), which are physically characterized as follows:
The gradient calculation differs between the first and subsequent iterations. In the first iteration, there is no valid spectral guess, and the gradients and the edge detection are calculated for the original image. The edge detection algorithm used requires a single-channel (grayscale) image; the input image, however, is RGB. We used principal component analysis (PCA)19,20 and retained only the first principal component in order to obtain the most informative grayscale representation of the data.
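A minimal sketch of this PCA-based grayscale reduction, assuming scikit-learn; the function name is illustrative:

```python
# A minimal sketch, not the authors' code: project an RGB image onto
# its first principal component to get a maximally informative grayscale.
import numpy as np
from sklearn.decomposition import PCA

def pca_grayscale(rgb):
    """Reduce an (H, W, 3) RGB image to one channel via the first PC."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    first_pc = PCA(n_components=1).fit_transform(pixels)  # max-variance axis
    return first_pc.reshape(h, w)
```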
For the numerical optimization of Eq. (4), the covariance matrix adaptation evolution strategy (CMA-ES)21 proved to be a suitably robust global optimization method22. Due to the mean-field nature of the second part of Eq. (4), the method is iterative, usually taking 20–40 iterations to converge. In each iteration step and for each pixel, the minimization is conducted until a predefined value of the loss function is achieved. Different schedules of tolerance changes can be applied; we used the simplest one, a linear decrease. The algorithm flow chart is presented in Fig. 2.
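As a sketch of this optimization step, assuming the independent pycma package (pip install cma), not the authors' implementation; the loss function, bounds, and schedule values are placeholders:

```python
# A minimal sketch, not the authors' code: per-pixel CMA-ES minimization
# of a loss over a discretized transparency spectrum, with a tolerance
# that can be lowered linearly across the outer mean-field iterations.
import numpy as np
import cma

def fit_pixel_spectrum(loss, n_wavelengths=48, tol=1e-3):
    """Minimize `loss` over a bounded spectrum until `tol` is reached."""
    x0 = 0.5 * np.ones(n_wavelengths)       # initial guess: half-transparent
    es = cma.CMAEvolutionStrategy(x0, 0.2, {
        'bounds': [0.0, 1.0],               # transparency lies in [0, 1]
        'tolfun': tol,                      # stop at the scheduled tolerance
        'verbose': -9,
    })
    es.optimize(loss)
    return es.result.xbest

# Linear tolerance schedule over, e.g., 30 mean-field iterations:
# tolerances = np.linspace(1e-1, 1e-3, 30)
```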
Wachman, E. S. et al. Simultaneous imaging of cellular morphology and multiple biomarkers using an acousto-optic tunable filter–based bright field microscope. J. Biomed. Opt. 19, 056006 (2014).
The proposed method appears to be quite robust to parametrization inaccuracies and errors. We used the quantum efficiency curves supplied by the vendor, measured the spectrum of the light reaching the sample, and obtained practically feasible results. The method can be applied to any bright-field microscope set-up. The only condition is access to the camera's primary signal immediately after the analog-to-digital conversion, before any thresholding, white balancing, gamma correction, or other visual enhancement is applied.
Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 (eds Navab, N. et al.) 234–241 (Springer International Publishing, Cham, 2015).
The proposed method of calibration is universal, applicable to any camera producing raw data, and is not based on any assumption about the nature of the image or the underlying acquisition processes. The algorithm itself is a post-processing technique and requires calibration images and data from a spectrometer. All results described below were obtained after this image correction. The calibration and correction routines are implemented as a native application and are freely available.
For classical bright-field microscopy, most approaches rely on trained (or fitted) models based on a set of reference images with known properties8. The most mature methods rely on principal component analysis9 or sparse spatial features10. Some of these techniques aim not at full spectral reconstruction but rather at a more effective colour resolution (which has been very useful in distinguishing fluorescence peaks)11. The main disadvantage of such methods is their global approach, which is feasible only for homogeneous images. Most “local” methods involve artificial neural networks12 and can work well if they are trained with a reference dataset that is similar to the observed system. Data of this kind almost never occur in microscopy due to the greater variability of objects in the microworld (because, e.g., the known objects are artificial, the investigated system is living, or the in-focus position can be ambiguous). This gives a cutting edge to physically inspired methods, which make no assumptions about the type of observed object and require no special equipment beyond a classical bright-field microscope.
Evaluation of the asset of the proposed method of quasi-spectral reconstruction (Fig. 5a–e) for clusterization against the raw data is quite tricky, because we have no ground truth. Nevertheless, there are numerous methods of quality estimation for unsupervised learning24. Such methods are usually used for determination of the optimal number of clusters in datasets. Our aim is slightly different: to compare the accuracy of the clusterization for two datasets with different dimensionality. This naturally leads to the choice of the cosine metric, since this metric is normalized and not affected by magnitude to the extent that the Euclidean metric is. Another fact that can be utilized is that each single image provides \(10^5\)–\(10^6\) points. This enables us to use a distribution-based method for estimation of the clustering accuracy. One of the most general methods of this family is the gap statistic25, which is reported to perform well and robustly even on noisy data if a sufficient number of samples is present24. As the clusterization method itself, we used k-means with 10 clusters and the cosine metric. Figure 5g shows the gap criteria for time-lapse raw images and their relevant spectral counterparts. The proposed method leads to both better and more stable clustering. We also investigated different dimensionality reduction techniques (namely PCA19, factor analysis26, and NMF27), which can be applied before the clustering, but these techniques did not bring any improvement in cluster quality. Despite that, these techniques can be used, e.g., for digital staining and highlighting details in objects, see Fig. 5f.
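For illustration, a rough sketch of the gap-statistic comparison with cosine k-means, assuming scikit-learn; cosine geometry is emulated by L2-normalizing the feature vectors, and the uniform reference distribution is one common choice25, not necessarily the authors' exact protocol:

```python
# A minimal sketch, not the authors' code: gap statistic for k-means
# clustering under an (approximate) cosine geometry.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def within_dispersion(X, k):
    """Total within-cluster sum of squared deviations for k-means."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    return sum(np.sum((X[labels == c] - X[labels == c].mean(0)) ** 2)
               for c in range(k))

def gap_statistic(X, k=10, n_refs=10):
    """Gap = E[log W_ref] - log W_data; larger means better clustering."""
    Xn = normalize(X)                        # unit vectors -> cosine geometry
    log_w = np.log(within_dispersion(Xn, k))
    lo, hi = Xn.min(0), Xn.max(0)
    ref_logs = [np.log(within_dispersion(
        normalize(np.random.uniform(lo, hi, Xn.shape)), k))
        for _ in range(n_refs)]              # uniform reference datasets
    return np.mean(ref_logs) - log_w
```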
Hansen, N. & Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 9, 159–195 (2001).
Lugagne, J.-B. et al. Identification of individual cells from z-stacks of bright-field microscopy images. Sci. Rep. 8, 11455 (2018).
The primary aim of the method is to characterize individual cell parts physically (by a colour spectrum) in the best possible way and, consequently, to identify them as different cell regions. Currently, the standard approach for the recognition of organelles is fluorescent (or other dye) staining. In unstained cells, the identity of an organelle is guessed from its shape and position. Our approach promises to identify the organelles according to their spectra. However, in order to obtain the same spectra for cells of different samples, full reproducibility of the whole experiment (e.g., the optical properties of the Petri dish, the thickness and colour of the cultivation medium) has to be ensured.
The most convenient, classical way of obtaining such a spectrum is to modify the measuring device (microscope). This is mostly done using single scanning interferometers2, matrices of them3, matrices of colour filter arrays4, or other adjustable media5,6. Such technical arrangements can be further successfully coupled with machine learning methods as well7. Purely instrumental methods are certainly the most correct but require sophisticated equipment and are not fully compatible with typical bright-field techniques like depth-wise z-scanning. Due to both hardware and algorithms, this makes these methods a separate group rather than a subtype of the bright-field methods.
(a) Light spectra of grayscale layers measured by a fibre spectrophotometer, (b) declared spectra of RGB camera filters, (c) calculated spectra of incoming light reaching the blue camera channel. The integral under the curve (c) was used as a calibration value for the construction of the calibration curve. (d) Calibration curves for selected blue camera pixels lying in the same column (pixel indices are depicted).
Parmar, M., Lansel, S. & Wandell, B. A. Spatio-spectral reconstruction of the multispectral datacube using sparse recovery. In 2008 15th IEEE International Conference on Image Processing (IEEE, 2008).
The camera sensor is described (in a purely phenomenological way) by the exposure time \(t_e\) and the energy load curve \(I_c = f(E)\), where \(I_c\) is the pixel sensor output (intensity) and E is the energy absorbed by the pixel sensor during the exposure time. We assume that the image is not saturated and, thus, that f(E) can be approximated linearly.
Mekaroonkamon, T. & Wongsa, S. A comparative investigation of the robustness of unsupervised clustering techniques for rotating machine fault diagnosis with poorly-separated data. In 2016 Eighth International Conference on Advanced Computational Intelligence (ICACI) (IEEE, 2016).
This novel method of extraction of quasi-spectra aims at a very challenging problem, which cannot be solved precisely even in theory: some information is irrecoverably lost. The method arises from very general assumptions about the measurement system. It does not rely on any light-media interaction model or physical properties of the system, which makes it quite universal. The obtained spectra are applicable in practice for visualization and automatic segmentation tasks. We intentionally did not consider questions of the voxel spectrum, the Z-stack spectral behaviour, and the meaning of compromised focus in order to keep the method and its application simple. We pose the described method as an ultimate information-squeezing tool: a nearly model-free way to compress colour and spatial information into a representation of physically relevant features. We believe that, in the future, the method will find its use in robust, mainly qualitative, (bio)chemical analysis.
Additional information can be squeezed from the physical meaning of the observed image: neighbouring pixels are not fully independent. The observed object usually has no purely vertical parts (which is quite typical for cell-like structures), and other Z-axis-related changes are slow14. If this holds, the image can be treated as a continuous projection of the object's surface (in the optical meaning) onto the camera sensor. In this case, neighbouring pixels correspond to neighbouring points in the object.
For biological samples that show almost no strong gradients of structural change, each pixel demarcates its own region of the projected image. Formally, this criterion can be expressed as
In addition, let us assume that the object's volume can be divided into subvolumes in such a way that the transparency spectra inside a subvolume are spatially continuous (in the L2 sense). This assumption is quite weak, because, in the limit, it can always be satisfied by subvolumes whose size equals the voxel size.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
An important issue that we have not investigated yet is the influence of sample thickness. The question remains what the identity of the spectrum is if the sample has a non-zero thickness. In Rychtáriková et al.1, we showed that the position of the effective focus differs even with the usage of a fully apochromatic lens. This is the biggest complication in interpreting the spectrum. In the case of a relatively thick and homogeneous organelle, it can be assumed that, in the centre of focus, the contributions from geometrically different levels are similar. The full answer to this question would be given by a complete 3D analysis, which has to be theoretically based on completely new algorithms and is currently beyond our computing capacity. At this point, however, we dare to claim that the thickness of the sample affects mainly the integrals under the spectra, not the shapes of the spectra themselves. The usage of the cosine metric, which is, in effect, the angle between feature vectors and is insensitive to magnitude, helps to mitigate this problem.
The flow chart of the method. The magenta lines denote the routes for the 1st iteration. The red and blue lines show the direct and indirect feedback between iterations, respectively.
The method essentially requires only three specific inputs: an image, the incoming light spectrum, and the camera filter profiles. The camera filter profiles are usually supplied with the camera or can be measured directly using an adjustable monochromatic light source. The incoming light spectrum is less straightforward, because the light emitted by the source is altered by the light path. A convenient way is to replace the objective inlet by a cosine corrector with a spectrometer and measure the incident light spectrum. This implies that, in the case of any substantial change in the optical path (e.g., an objective replacement), the incoming light spectrum has to be remeasured. In practice, it is no problem to measure a set of spectra corresponding to different objectives, iris settings, etc.
Pech-Pacheco, J., Cristobal, G., Chamorro-Martinez, J. & Fernandez-Valdivia, J. Diatom autofocusing in brightfield microscopy: a comparative study. In Proceedings 15th International Conference on Pattern Recognition. ICPR-2000 (IEEE Comput. Soc., 2002).
where \(D_{k}\) is the central gradient at pixel k, \({\mathcal {T}}_b\) is a bias parameter (we used \({\mathcal {T}}_b = 0.9\)), and \({\mathbb {B}}_{mn}\) is the set of points forming the line between pixels m and n, calculated using the Bresenham algorithm16. The \({\mathcal {E}}_k\) indicates whether pixel k is classified as an edge. For this, we used the Canny edge detection algorithm17 applied to a gradient matrix smoothed by a 2D Gaussian filter with a standard deviation18 of 0.5.
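A minimal sketch of these two ingredients, assuming scikit-image and SciPy; the parameter values follow the text, everything else is illustrative:

```python
# A minimal sketch, not the authors' code: Canny edges of the smoothed
# gradient matrix, plus the Bresenham pixel set B_mn between two pixels.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import canny
from skimage.draw import line  # Bresenham line algorithm

def edge_mask(gradient_matrix, sigma=0.5):
    """E_k: Canny edge mask of the Gaussian-smoothed gradient matrix."""
    smoothed = gaussian_filter(gradient_matrix.astype(float), sigma=sigma)
    return canny(smoothed)

def pixels_between(m, n):
    """B_mn: pixels on the Bresenham line between pixels m and n."""
    rr, cc = line(m[0], m[1], n[0], n[1])
    return list(zip(rr, cc))
```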
Hansen, N. The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation (eds Lozano, J. et al.) 75–102 (Springer, Berlin Heidelberg, 2007).
Tibshirani, R., Walther, G. & Hastie, T. Estimating the number of clusters in a data set via the gap statistic. J. R. Stat. Soc. B 63, 411–423 (2001).
As clearly seen in Fig. 4d and e, the method shows a non-trivial convergence behaviour of the variation coefficient (with a local maximum at iteration 2 and a local minimum at iteration 4) and of the cost. This behaviour of the iterative process is not related to changes in the schedule of tolerances, which decreases linearly until iteration 10 and is then kept constant; the iteration process is stopped when the value of the change reaches 0.01. We have not investigated the reason for this course deeply, but it is definitely repeatable for all the tested measurements (e.g., Fig. S3b,c). A natural way of visually verifying an image of transparency spectra is artificial illumination. We used the spectrum of a black body at T = 5800 K according to the Planck law (Figs. 4c, S3a). The transformed image is quite similar to the raw data, which supports the validity of the method. To obtain such an image, we multiplied each pixel's transparency spectrum by the illumination spectrum and by the CIE standard matching curves. The integrals of the corresponding curves gave the coordinates in the CIE 1931 colour space.
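A minimal sketch of this artificial re-illumination, assuming SciPy and per-pixel transparency spectra stored as an H × W × w array; the CIE 1931 colour-matching functions are assumed to be available, sampled on the same wavelength grid:

```python
# A minimal sketch, not the authors' code: render transparency spectra
# under black-body illumination (Planck law, T = 5800 K) and integrate
# against the CIE 1931 colour-matching functions to get XYZ coordinates.
import numpy as np
from scipy.integrate import simpson

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelengths_nm, T=5800.0):
    """Black-body spectral radiance (arbitrary scale) via the Planck law."""
    lam = wavelengths_nm * 1e-9
    return (2 * H_PLANCK * C_LIGHT**2 / lam**5
            / (np.exp(H_PLANCK * C_LIGHT / (lam * K_B * T)) - 1))

def reilluminate(T_spectra, wavelengths_nm, cie_xyz):
    """T_spectra: (H, W, w); cie_xyz: (w, 3) matching functions."""
    L = planck(wavelengths_nm)                               # illumination
    weighted = T_spectra[..., None] * (L[:, None] * cie_xyz) # (H, W, w, 3)
    xyz = simpson(weighted, x=wavelengths_nm, axis=-2)       # integrate over lambda
    return xyz / xyz.max()                                   # CIE 1931 XYZ
```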
The cells were captured using a custom-made inverted high-resolution bright-field wide-field light microscope enabling observation of sub-microscopic objects (ICS FFPW, Nové Hrady, Czech Republic)1. The optical path starts with two Luminus CSM-360 light emitting diodes driven by a current of up to 5000 mA (in the described experiments, the current was 4500 mA; according to the LED producer, the forward voltage was 13.25 V, which gives a power of 59.625 W), which illuminate the sample with a series of light flashes (0.2261 s light, 0.0969 s dark) in a gentle mode and enable videoenhancement29. The microscope optical system was further equipped with an infrared 775 nm short-pass and an ultraviolet 450 nm long-pass filter (Edmund Optics). After passing through the sample, light reached a Nikon objective (in the case of the live cells, a CFI Plan Achromat \(40\times\), N.A. 0.65, W.D. 0.56 mm; in the case of the fixed cells, an LWD \(40\times\), Ph1 ADL, \(\infty /1.2\), N.A. 0.55, W.D. 2.1 mm). A Mitutoyo tube lens (\(5\times\)) and a projective lens (\(2\times\)) magnified and projected the image onto a JAI camera with a 12-bpc colour Kodak KAI-16000 digital camera chip of \(4872 \times 3248\) resolution (camera gain 0, offset 300, and exposure 293.6 ms). At this total magnification, the size of the object area projected onto one camera pixel is 36 nm. The process of capturing the primary signal was controlled by custom-made control software. The z-scan was performed automatically by programmable mechanics with a step size of 100 nm.
The second part is computational. For each pixel, we constructed a piece-wise function S(M), where \(S_i\) is the integral of the spectrum measured by the fibre spectrometer for the i-th filter and \(M_i\) is the corresponding intensity value in the image. Between these points, the function S(M) is linearly interpolated, see Fig. 3d. For the colour camera that we used, the algorithm is slightly different. Most RGB cameras are equipped with a Bayer filter, which effectively discriminates three sorts of pixels. Each sort has a different dependence of the quantum efficiency on the wavelength, see Fig. 3b. These dependencies are usually supplied by the camera producer. In this case, the recorded spectrum should be multiplied by the corresponding efficiency curve prior to the integration. The result of the multiplication is shown in Fig. 3c.
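For illustration, a minimal sketch of building one pixel's calibration curve S(M) by linear interpolation, assuming NumPy; the array layout is an assumption, not the authors' calibration software:

```python
# A minimal sketch, not the authors' code: piece-wise linear S(M) for
# one pixel, built from per-filter spectrometer measurements.
import numpy as np

def calibration_curve(spectra, efficiency, wavelengths, intensities):
    """spectra:     (n_filters, w) spectrometer readings per gray filter
       efficiency:  (w,) quantum-efficiency curve of this pixel's sort
       intensities: (n_filters,) raw intensity M_i recorded by this pixel"""
    S = np.trapz(spectra * efficiency, wavelengths, axis=1)  # S_i per filter
    order = np.argsort(intensities)                          # interp needs sorted M
    M_i, S_i = intensities[order], S[order]
    return lambda M: np.interp(M, M_i, S_i)                  # linear S(M)
```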
Wang, Y., Yang, B., Feng, S., Pessino, V. & Huang, B. Multicolor fluorescent imaging by space-constrained computational spectral imaging. Opt. Express 27, 5393 (2019).
A live L929 cell in time lapse (a–c) after k-means clusterization, \(k=10\). The corresponding mean spectra of the classes for images (a,c) are shown in (d,e). These spectra are very similar, despite the different images. The gap criteria for the raw data and the relevant spectral counterparts are presented in (g). Dimensionality reduction techniques, e.g., PCA, can be used for better visualization and digital staining (f).
Lindner, M., Shotan, Z. & Garini, Y. Rapid microscopy measurement of very large spectral images. Opt. Express 24, 9511 (2016).
K.L. is the main author of the paper and of the novel algorithm, G.P. and R.R. are responsible for the sample preparation, microscopy data acquisition, and image calibration, R.R. contributed substantially to the text of the paper, and D.Š. is the inventor of the videoenhanced bright-field wide-field microscope. D.Š. and R.R. led the research. All authors read the paper and approved its final version.
It is worth mentioning that, for some real-life biological samples, the measurement model can be violated. We implicitly assume that the light intensity reaching the camera chip is always lower than that produced by the light source; the transparency coefficient is bounded by the range [0, 1]. Indeed, this is not always true, because the sample can contain light-condensing objects (most of these objects are bubbles or vacuoles) which act as micro-lenses. This does not break the method in general but, due to the inability to fulfil Eq. (2), the local optimization gives an abnormally high cost. Such objects should be eliminated from subsequent analysis because their quasi-spectra are unreliable. After excluding these dubious regions (which occupy only a very small part of the image, provided they are present at all), the rest of the image can be analysed in an ordinary way.
University of South Bohemia in České Budějovice, Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Kompetenzzentrum MechanoBiologie in Regenerativer Medizin, Institute of Complex Systems, Zámek 136, 373 33, Nové Hrady, Czech Republic
Maloney, L. T. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A 3, 1673 (1986).
Alvarez-Gila, A., van de Weijer, J. & Garrote, E. Adversarial networks for spatial context-aware spectral image reconstruction from RGB. In 2017 IEEE International Conference on Computer Vision Workshops (ICCVW) (IEEE, 2017).
In order to obtain reasonable local spectra, we must ensure that the camera sensor pixels have homogeneous responses. From the hardware point of view, they are printed as semiconductor structures and cannot be changed. Therefore, we introduced a spectral calibration in the form of a post-processing routine designed to obtain equal responses from all camera pixels.
In order to extract a transparency profile from the proposed model, one has to solve an inverse problem for a system of three integral equations (in the case of a 3-channel, RGB, camera). This cannot be solved directly, since the model is heavily underdetermined. (In this text, the terms “transparency” and “spectrum” mean “quasi-transparency” and “quasi-spectrum”, since this method determines only those properties of a microscopy image which resemble transparency spectra, not the transparency itself.)
This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic (project CENAKVA, LM2018099) and by the European Regional Development Fund within the projects Kompetenzzentrum MechanoBiologie (ATCZ133) and Image Headstart (ATCZ215) in the Interreg V-A Austria–Czech Republic programme. The work was further financed by the GAJU 013/2019/Z project.
Lonhus, K., Rychtáriková, R., Platonova, G. et al. Quasi-spectral characterization of intracellular regions in bright-field light microscopy images. Sci Rep 10, 18346 (2020). https://doi.org/10.1038/s41598-020-75441-7
Zhu, S., Gao, L., Zhang, Y., Lin, J. & Jin, P. Complete plenoptic imaging using a single detector. Opt. Express 26, 26495 (2018).
For visualization, very bright pixels, which correspond to light-focusing structures in the sample (mostly bubbles that act as micro-lenses) and violate the assumptions of the model of the proposed quasi-spectral method, were detected (above the 99th percentile of intensities) and treated as saturated. After their elimination, the remaining intensities were rescaled to the original range.
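A minimal sketch of this bright-pixel suppression, assuming NumPy; the 99th-percentile threshold follows the text, the rest is illustrative:

```python
# A minimal sketch, not the authors' code: clip bright micro-lens-like
# outliers at the 99th percentile and rescale to the original range.
import numpy as np

def suppress_bright_pixels(img):
    """Treat light-focusing outliers as saturated, then rescale."""
    limit = np.percentile(img, 99)          # threshold from the text
    clipped = np.minimum(img, limit)        # treat outliers as saturated
    return clipped * (img.max() / limit)    # rescale to the original range
```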
where \(\vec {u}\) is a random vector and \(q, \epsilon\) are small numbers. This equation closely resembles the Lyapunov stability criterion. The \(\epsilon\) reflects the neighbourhood size, and q is related to the degree of discontinuity. The criterion can be violated if \(\vec {u}\) crosses a border between objects, but not inside a single object.
The acquired image data were corrected by a simultaneous calibration of the microscope optical path and the camera chip, as described in Suppl. Material 1. In this way, we obtained images that are maximally informative about the spectral properties of the observed cells.
Elboher, E. & Werman, M. Efficient and accurate Gaussian image filtering using running sums. In 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA) (IEEE, 2012).
Lichtscheidl, I. K. & Foissner, I. Video microscopy of dynamic plant cell organelles: principles of the technique and practical application. J. Microsc. 181, 117–128 (1998).
where \(I_c\) is the image intensity at a given pixel. All observable, biologically relevant processes are slow compared with the camera exposure time (usually a few ms); therefore, the outer integral can be eliminated. More importantly, let the variable f, which reflects the dependence between the spectral energy and the sensor response, be 1. The multiplication inside the internal integral is commutative, which allows us to introduce an effective incoming light \(L_c(\lambda ) = S(\lambda )\cdot F_c(\lambda )\). All these mathematical treatments give the reduced equation for the measurement process as
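Following the definitions above, the reduced equation plausibly takes the form \(I_c = t_e \int T(\lambda )\, L_c(\lambda )\, \mathrm{d}\lambda\) (a reconstruction from the surrounding text, as the displayed equation is not reproduced here). A minimal numerical sketch of this forward model, assuming NumPy:

```python
# A minimal sketch, not the authors' code: the reduced per-pixel
# measurement model, I_c = t_e * integral of T * L_c over lambda,
# where L_c = S * F_c is the effective incoming light.
import numpy as np

def channel_intensity(T, S, F_c, wavelengths, t_e=1.0):
    """T: transparency spectrum; S: source spectrum; F_c: channel filter."""
    L_c = S * F_c                            # effective incoming light
    return t_e * np.trapz(T * L_c, wavelengths)
```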
In order to verify the benefits of the clusterization of the obtained spectra using k-means against the direct image clusterization, simple phantom experiments on microphotographs of oil–air and egg protein–air interfaces, respectively, were conducted. These phantom experiments showed that the spectral clusterization resulted in both a higher cluster accuracy and a lower variation. Moreover, in order to prove the capacity of the method, we applied supervised segmentation, namely a classical semantic segmentation network, U-Net28. It is a symmetric encoder–decoder convolutional network with skip connections, designed for pixel-wise segmentation of medical data. One of the strongest advantages of this network is the very low amount of data needed for successful learning (only a few images can be sufficient for this purpose). We employed 6 images for the network training and 1 image for the method validation. To avoid overfitting in the training phase, aggressive dropout (0.5, after each convolution layer) and intensive image augmentation (detailed in Suppl. Material 1) were applied. We compared the performance of the U-Net network for the original raw images, contrast-enhanced images, and spectral images (Fig. 4f). The segmentation of the spectral images showed a significantly (\(>10\%\)) increased accuracy, an intersection over union (IoU) of 0.9, and a faster convergence (8 epochs vs. 40 epochs for the contrast-enhanced images). The results were stable to changes in the training and test sets (even when using a single validation image or a set of augmented images derived from the validation image, as mentioned above).
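For reference, a minimal sketch of the intersection-over-union score reported above, assuming binary NumPy masks:

```python
# A minimal sketch, not the authors' code: IoU for binary segmentation
# masks, IoU = |pred AND target| / |pred OR target|.
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                           # both masks empty: perfect match
    return np.logical_and(pred, target).sum() / union
```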
Pearson, K. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 2, 559–572 (1901).
Indeed, classical bright-field microscopy measures the properties of incoming light affected by a sample. If multi-photon processes are negligible and intensities are reasonable, a linear response model can be used. A medium observed in such a model can then be fully characterized by a transparency spectrum \(T(\vec {r})\) defined for each pixel. Such a spectrum can give ultimate information about the medium and significantly boost subsequent machine learning methods.
Bright-field microscopy in videoenhancement mode shows unprecedented success as a method of living-object investigation, since it is cheap and non-intrusive in sample preparation and, in its innovative set-up1, has excellent spatial and temporal resolution, which opens many possibilities for automation. Classical image-processing techniques such as feature extraction or convolutional neural networks do not work well here due to the huge variability of microworld data. This calls for image pre-processing techniques that utilize all available information to supply rich, physically relevant feature vectors to subsequent methods of analysis.
The method of quasi-spectra extraction was applied to a raw image of a live cell from a bright-field wide-field light microscope (a) combined with the effective light spectra (b). The cost (d) and the variation coefficient (e) demonstrate a quite non-monotonic behaviour. This implies a self-organization of the model. After the reconstruction of the transparency spectra, the image can be viewed under arbitrary illumination, such as that of an absolute black body at T = 5800 K (c). Comparison of the quality of U-Net supervised segmentation for the original (raw), contrast-enhanced, and quasi-spectral images (f) shows the advantages of the proposed quasi-spectral approach.
Heist, S. et al. 5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light. Opt. Express 26, 23366 (2018).
Threshold for the selection of the neighbourhood of pixel m, i.e., the Euclidean distance between pixels m and n; \({\mathcal {T}}_{ED} = 1\)
Rychtáriková, R. et al. Super-resolved 3-d imaging of live cells organelles’ from bright-field photon transmission micrographs. Ultramicroscopy 179, 1–14 (2017).
The first part of the calibration is experimental and aimed at measuring each pixel's sensitivity. We took a photograph of the background through a set of gray layers of varying transparency covering a 2-mm thick glass (type Step ND Filter NDL-10S-4). After that, we replaced the microscope objective by the fibre of a spectrophotometer (Ocean Optics USB 4000 VIS-NIR-ES) to record the spectra corresponding to each of the filters, see Fig. 3a.
The software for quasi-spectral characterization of images, the relevant Matlab codes, the software for image calibration, the U-Net segmentation package, and testing images are available in the supplementary materials at the Dryad Data Depository30.
Cell fixation was conducted in a tissue dish. The nutrient medium was sucked out and the cells were rinsed with PBS. Then, the cells were treated with glutaraldehyde (3%) for 5 min in order to fix them in a gentle mode (without any substantial modification of cell morphology), followed by washing in phosphate buffer (\(0.2 \, {\hbox {mol L}}^{-1}\), pH 7.2) two times, each time for 5 min. The cell fixation was finished by dewatering the sample in a concentration gradient of ethanol (50%, 60%, and 70%), with each concentration in contact with the sample for 5 min.
In the subsequent iterations, a spectral guess exists and, instead of the gradient, we used the cross-correlation with zero lag: \(D_{k} = T_{k-1}(\lambda )\star T_{k+1}(\lambda )\). The vertical and horizontal gradients were merged by the Euclidean norm.
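A minimal sketch of this zero-lag cross-correlation and gradient merging, assuming the neighbouring spectral guesses are NumPy vectors; the function names are illustrative:

```python
# A minimal sketch, not the authors' code: zero-lag cross-correlation of
# neighbouring spectral guesses, and Euclidean merging of the two axes.
import numpy as np

def zero_lag_xcorr(t_prev, t_next):
    """D_k = T_{k-1}(lambda) star T_{k+1}(lambda) at zero lag."""
    return float(np.dot(t_prev, t_next))

def merged_gradient(d_vertical, d_horizontal):
    """Merge the vertical and horizontal components by the Euclidean norm."""
    return np.hypot(d_vertical, d_horizontal)
```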