Geek Box 5.5: Transport of Intensity Equation (TIE)

Teague derived the TIE in 1983 starting from Helmholtz's equation (cf. Eq. (5.8)) under the approximation of a slowly varying field along the z-axis:

$$-k \frac{\partial I(x,y)}{\partial z} = I(x,y)\,\nabla_\perp^2 \phi(x,y) + \nabla_\perp I(x,y) \cdot \nabla_\perp \phi(x,y), \quad (5.12)$$

where I(x, y) is the at-focus intensity image (related to the complex amplitude in Eq. (5.8) by I = |U|²), and ∇⊥ is the gradient operator in the lateral directions, i.e., in the xy plane. The symbol φ denotes the phase difference (cf. Eq. (5.10)); φ is used instead of φdiff because the phase appears only in differential terms in the TIE. In other words, the phase in the TIE is defined up to an additive constant, which makes no difference between φ and φdiff. This equation can be simplified further if we assume an ideal phase object, i.e., I(x, y) = constant = I0:

$$-k \frac{\partial I(x,y)}{\partial z} = I_0 \, \nabla_\perp^2 \phi(x,y). \quad (5.13)$$

The axial derivative on the left-hand side of Eq. (5.12) or Eq. (5.13) can be measured: first, acquire a bright field image at focus, I0; then defocus the microscope by a distance Δz and acquire another image I(Δz). The finite-difference approximation of the derivative is then $\frac{I(\Delta z) - I_0}{\Delta z}$. After estimating the axial derivative, the only remaining unknown in the TIE is the phase. Therefore, the TIE can be solved for φ, yielding a quantitative phase map.
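This measurement recipe translates directly into a few lines of code. The sketch below recovers φ from one in-focus and one defocused frame by inverting the Laplacian in Eq. (5.13) in Fourier space; it is a minimal illustration under assumptions made here (nearly uniform intensity I0, periodic boundary conditions, a small regularization constant against the low-frequency instability), and the function and parameter names are hypothetical.

```python
import numpy as np

def tie_phase_uniform(i_focus, i_defocus, dz, wavelength, pixel_size, reg=1e-3):
    """Recover a quantitative phase map from the simplified TIE (Eq. 5.13).

    Assumes an ideal phase object (I(x, y) ~ constant = I0), periodic
    boundary conditions, and a single defocused frame for the
    finite-difference axial derivative."""
    k = 2.0 * np.pi / wavelength                      # wavenumber
    i0 = i_focus.mean()                               # I0 for an ideal phase object
    didz = (i_defocus - i_focus) / dz                 # finite-difference axial derivative
    rhs = -k * didz / i0                              # Laplacian of the phase (Eq. 5.13)

    # Invert the lateral Laplacian in Fourier space: FT{lap(phi)} = -(2*pi*f)^2 FT{phi}
    ny, nx = rhs.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fy)
    freq_sq = (2.0 * np.pi) ** 2 * (fxx ** 2 + fyy ** 2)

    rhs_hat = np.fft.fft2(rhs)
    phi_hat = -rhs_hat / (freq_sq + reg)              # regularized inverse Laplacian
    phi_hat[0, 0] = 0.0                               # additive constant is undetermined
    return np.real(np.fft.ifft2(phi_hat))
```

The zeroed zero-frequency component reflects the fact that the TIE determines the phase only up to an additive constant, and the regularization of near-zero spatial frequencies is one simple way to tame the low-frequency bias mentioned elsewhere in this section.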


Consider an object with height h standing at a distance d in front of a converging lens with a focal length f < d. Naturally, the lens creates an image of this object. The question then arises as to how we can determine the height of the image h′ and its distance d′ to the lens. From a geometrical optics perspective, the image formation process can be described using three simple rules (cf. Figure 5.1): 1) An incident light ray which passes through the optical center O does not suffer any refraction. 2) An incident light ray parallel to the optical axis is refracted through the image focal point F′. 3) An incident light ray which passes through the object focal point F is refracted parallel to the optical axis. As shown in Figure 5.1, the three rays intersect at a point positioned at distance d′ from the lens; two rays are already sufficient to construct this intersection point geometrically. The image acquired at d′ is defined as an in-focus image. An image acquired at a longer or a shorter distance than d′, on the other hand, is called a defocused image. In this context, an image of a point source (such as T in Figure 5.1) is infinitely small at focus (abstracted as a point T′ in Figure 5.1), but it is larger than a point in defocused images.
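The construction can be carried out numerically by intersecting two of the three rays; the little helper below and its example numbers are purely illustrative.

```python
def image_by_ray_construction(d, h, f):
    """Locate the image of an object tip at (-d, h) formed by a thin
    converging lens at x = 0 with focal length f, by intersecting the
    rule-1 and rule-2 rays of the geometric construction (Figure 5.1)."""
    # Rule 1: ray through the optical center O = (0, 0), slope -h/d.
    # Rule 2: ray arriving parallel to the axis at (0, h), refracted
    #         through the image focal point F' = (f, 0), slope -h/f.
    # Intersection of  y = -(h/d) x  and  y = h - (h/f) x :
    x = 1.0 / (1.0 / f - 1.0 / d)     # image distance d'
    y = -(h / d) * x                  # image height h' (negative: inverted)
    return x, y

d_prime, h_prime = image_by_ray_construction(d=30.0, h=5.0, f=10.0)
print(d_prime, h_prime)   # 15.0, -2.5
```

For d = 30, h = 5, and f = 10 the two rays meet at d′ = 15 with h′ = −2.5, i.e., a real, inverted, demagnified image, in agreement with the thin lens relations derived in this section.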

The principle of the compound microscope models the magnification mechanism. Additionally, depending on how the sample is illuminated and which kind of information is carried by the light rays, light microscopes can be further classified into subcategories: bright field, fluorescence, phase contrast, quantitative phase, and others. In the following sections, more details will be given about each of the aforementioned microscopic modalities.


In fact, the phase shift introduced by a phase object can be given as follows:

$$\phi_{\text{diff}}(x,y) = k \int_{z_1(x,y)}^{z_2(x,y)} \Delta n(x,y,z)\, dz, \quad (5.10)$$

where k is the wavenumber of the incident light, Δn(x, y, z) is the difference in refractive index between the object and the surrounding medium, and z1 and z2 are the start and end coordinates of the light path through the object. If the object has a homogeneous refractive index, Eq. (5.10) reduces to:

$$\phi_{\text{diff,hom}}(x,y) = k \cdot \Delta n \cdot q(x,y), \quad (5.11)$$

where q(x, y) is the object thickness at (x, y). The product of the refractive index with the geometric length of the light path is usually termed the optical path length. In addition, the difference of two optical path lengths is called the optical path difference. Therefore, the numerical value of the phase is interpreted as the optical path difference between the object and the surrounding medium. The constant k is typically ignored.
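As a quick worked example of Eq. (5.11), consider a homogeneous cell-like object in an aqueous medium; the thickness, refractive indices, and wavelength below are illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative values: a homogeneous "cell" of thickness q in a watery medium.
wavelength = 550e-9            # m, green light
q = 5e-6                       # m, object thickness at (x, y)
n_object, n_medium = 1.36, 1.33

k = 2.0 * np.pi / wavelength   # wavenumber of the incident light
delta_n = n_object - n_medium
phi_diff = k * delta_n * q     # Eq. (5.11), phase shift in radians

opd = delta_n * q              # optical path difference (k ignored), in meters
print(f"phase shift    : {phi_diff:.2f} rad")
print(f"path difference: {opd*1e9:.0f} nm")
```

An optical path difference of 150 nm thus corresponds to a phase shift of roughly 1.7 rad at 550 nm, which is far from negligible even though the object is transparent.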



Different stains for histology and cytology. Top row: Hematoxylin-eosin, Azan, Multi-cytokeratin (AE1, AE3). Bottom row: Grocott, May-Grünwald-Giemsa, Turnbull Blue. Images courtesy of FU Berlin, Germany.



Phase contrast (cf. Section 5.5) is convenient for qualitative unstained imaging of transparent specimens. However, it is not suitable for obtaining quantitative phase values for two reasons: Firstly, phase information is perturbed by artifacts, called phase halos, in image regions which surround phase objects (cf. Figure 5.9(c)). Secondly, Zernike’s approach which links an observed intensity value to the corresponding phase value is valid only for very small phase shifts.

If we consider light as a wave with amplitude A and wavelength λ, we observe that amplitude objects reduce the wave amplitude by absorption. Phase objects cause a phase shift due to differences in the refractive index inside and outside the object.

In clinical routine, cells in suspension are investigated only infrequently. Instead, the most common investigation techniques for bright field microscopy are cytology, where cells and their inner structure are investigated, and histology, where the embedding of cells into the surrounding tissue architecture is described. For both techniques, staining of the sample plays an important role (see Geek Box 5.1).



In modern microscopes, the objective lens is characterized by its magnification and numerical aperture. The magnification was defined above in Eq. (5.4). The numerical aperture quantifies the capability of a lens to gather light. It is defined as follows:

$$\mathrm{NA} = n \sin\theta, \quad (5.5)$$

where n is the refractive index of the medium between the objective lens and the specimen (nair ≈ 1) and θ is the half angle of the maximum light cone which the lens can collect (cf. Figure 5.5). Since the image formed by the objective lens is real, it can be captured by a physical detector. For instance, it can be recorded by a CCD chip, and hence, the magnified view can be saved as a digital image which can be further processed by a digital computer.

As mentioned earlier, in bright field microscopy, light absorption is responsible for image formation. Objects which absorb light are called amplitude objects since they affect the light amplitude. Transparent objects, on the other hand, hardly alter the amplitude of light. They do, however, delay the light wave, introducing a phase shift, and thus they are given the name phase objects. We demonstrate this effect visually in Figure 5.10 and introduce the underlying math in Geek Box 5.3.


Fluorescence microscopes deliver images of high contrast when compared to bright field images. In addition, because fluorescence can be incited by specific biological or physical processes, scientists have found many applications of fluorescence microscopy in materials science and cellular biology. To give just one example, a widely-used technique for cell viability detection (cf. Figure 5.8) is based on imaging of a fluorescent dye called propidium iodide (PI). Viable cells are usually selectively permeable, i.e., they do not allow molecules to freely cross the cellular membrane. When a cell dies, this exclusion property is lost, allowing PI to leak through the cellular membrane toward the cell interior. PI then binds to RNA and DNA inside the penetrated cell, which drastically enhances the fluorescence. Therefore, dead cells can be easily distinguished from the non-stained viable cells.


Recently, a novel method of fluorescence microscopy imaging has gained attention in research: In Confocal Laser Endomicroscopy (CLE), a fiber bundle carrying laser light in the cyan color spectrum is inserted into cavities of the human body, usually through the accessory channel of a normal endoscope. With high magnification ratios, it is used for structural tissue analysis in vivo, i.e., in the living patient. Due to the confocal construction, a single focal plane at a defined depth can be visualized as a sharp image, since the image is not tainted by scattered light. Prior to the examination, a fluorescent contrast agent is given to the patient intravenously; it enriches in the intercellular space and thus makes it possible to outline cellular structures.

The diameter measurements given here are for a blood cell, a typical bacterium, an influenza virus, a DNA molecule, and a uranium atom.



Image formation in a compound microscope. Symbols Fo, F′o, Fe, and F′e represent the objective object focal point, objective image focal point, eyepiece object focal point, and eyepiece image focal point, respectively.

If you look through a magnifying glass at an object located within the focal length of the lens, you see a magnified upright virtual image of the object. Conceptually, this is a simple microscope. The compound microscope (cf. Figure 5.4) extends this basic principle by using at least two converging lenses. The lens which is closer to the specimen is called objective lens. It creates a real magnified inverted image Go of the specimen. This requires that the specimen distance to the objective do is in the range fo < do < 2fo, where fo is the focal length of the objective. The second lens is called eyepiece as it is the component through which a user of the microscope observes the sample. The distance of Go to the eyepiece de is, by construction, less than the focal length of the eyepiece (de < fe). Consequently, the eyepiece lens creates a magnified virtual image Ge of Go. Since the image of the first lens is an object for the second one, the total magnification is the product of the two lens magnifications.
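To see the product rule in numbers, one can apply the thin lens equation (Eq. (5.3)) once per lens; the focal lengths, specimen distance, and lens separation below are illustrative values chosen here so that fo < do < 2fo and de < fe, not values from the text.

```python
def thin_lens_image(d, f):
    """Image distance d' from the thin lens equation 1/d + 1/d' = 1/f.
    d' < 0 indicates a virtual image on the object side of the lens."""
    return 1.0 / (1.0 / f - 1.0 / d)

# Illustrative compound microscope: objective f_o, eyepiece f_e, and an
# objective-to-eyepiece separation chosen so that d_e < f_e (virtual image).
f_o, f_e = 5.0, 25.0          # mm
d_o = 6.0                     # specimen distance, with f_o < d_o < 2 f_o
separation = 50.0             # mm between the two lenses

d_o_img = thin_lens_image(d_o, f_o)          # real intermediate image G_o
m_objective = -d_o_img / d_o                 # Eq. (5.4)

d_e = separation - d_o_img                   # distance of G_o to the eyepiece
d_e_img = thin_lens_image(d_e, f_e)          # negative: virtual image G_e
m_eyepiece = -d_e_img / d_e

print(m_objective, m_eyepiece, m_objective * m_eyepiece)   # -5.0, 5.0, -25.0
```

With these numbers the objective magnifies by −5, the eyepiece by +5, and the total magnification is −25: a magnified, inverted, virtual final image.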


Or other close approximations of it depending on the considered upper-limit of numerical aperture and definition of resolving power.


In the past few years, so-called superresolution microscopy has become an active research trend. Today, based on this technology, there are microscopes which achieve a resolving power of about 10 nm. While this number is inferior to electron microscopy resolution, the breakthrough lies in the fact that this is achieved using visible light. As stated earlier in this text (cf. Section 5.7), the attainable resolution using visible light is limited to 200 nm. May we then conclude that the theory which led to the diffraction limit in light microscopy is flawed? In fact, superresolution microscopy is based on alternately turning fluorescent molecules in a specimen on and off. Two adjacent fluorescent molecules with a distance of less than 200 nm will not be resolved as two points in a superresolution microscope when both of them are turned on simultaneously. However, they will be resolved as two points if only one of them is activated at a specific time and, in addition, there is a mechanism to control this activation process. Superresolution microscopy techniques differ in the way in which this on/off switching is implemented. Major technologies in this field today include: stimulated emission depletion (STED), reversible saturable optical fluorescence transitions (RESOLFT), and stochastic optical reconstruction microscopy (STORM).


Let us consider a converging lens with d > f (cf. Figure 5.1). From the similar triangles TOB and T′OB′, one can directly write:

$$\frac{h'}{h} = \frac{d'}{d} \quad (5.1)$$

The same applies for triangles TFB and FOL:

$$\frac{h'}{h} = \frac{f}{d-f} \quad (5.2)$$

Combining Eq. (5.1) and Eq. (5.2) yields:

$$\frac{f}{d-f} = \frac{d'}{d} \;\Rightarrow\; fd = d'd - d'f \;\Rightarrow\; fd + d'f = d'd$$

Dividing by fdd′ yields the thin lens equation:

$$\frac{1}{d} + \frac{1}{d'} = \frac{1}{f} \quad (5.3)$$

Eq. (5.3) was derived in this text for real images in a converging lens. Nevertheless, it can also be used for virtual images and/or diverging lenses under the following sign conventions: 1) d′ is negative when the image is on the object side of the lens (similar to the case in Figure 5.2), otherwise it is positive. 2) f is negative for diverging lenses. Moreover, if we add a third sign convention stating that h′ is positive for upright images and negative otherwise, then Eq. (5.1) and Eq. (5.2) can be generalized to the following form:

$$M = \frac{h'}{h} = -\frac{f}{d-f} = -\frac{d'}{d} \quad (5.4)$$

Based on the above-mentioned sign conventions, the magnification M is positive for upright images and negative for upside-down images. This generalization, i.e., Eq. (5.3) and Eq. (5.4), can be proved correct by applying the three rules of geometric image formation and employing triangle similarity for each specific setup. Moreover, based on Eq. (5.4), the following conclusions can be drawn: 1) The image of an object in a converging lens is magnified (|M| > 1) when d < 2f, has the same size as the object when d = 2f, and is demagnified (|M| < 1) when d > 2f. 2) The image of an object in a diverging lens (f < 0) is demagnified.
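A compact way to check Eq. (5.3), Eq. (5.4), and the sign conventions is to evaluate them for a few cases; the helper below and its example values are purely illustrative.

```python
def thin_lens(d, f):
    """Thin lens equation (Eq. 5.3) with the sign conventions of the text:
    returns (d_prime, M), where d_prime < 0 marks a virtual image on the
    object side and M = -d_prime / d (Eq. 5.4) is negative for inverted
    images. A focal length f < 0 denotes a diverging lens."""
    d_prime = 1.0 / (1.0 / f - 1.0 / d)
    magnification = -d_prime / d
    return d_prime, magnification

# Converging lens, f = 10:
print(thin_lens(30.0, 10.0))   # d > 2f     : real, inverted, demagnified  (15.0, -0.5)
print(thin_lens(15.0, 10.0))   # f < d < 2f : real, inverted, magnified    (30.0, -2.0)
print(thin_lens(5.0, 10.0))    # d < f      : virtual, upright, magnified  (-10.0, 2.0)
# Diverging lens, f = -10:
print(thin_lens(5.0, -10.0))   # virtual, upright, demagnified (-3.33, 0.67)
```

The four calls reproduce the conclusions above: real magnified or demagnified images for a converging lens depending on d versus 2f, a virtual upright image for d < f, and a demagnified virtual image for a diverging lens.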

In the previous section, phase was employed to obtain more contrast for transparent specimens. At this point, we may ask the following question: what does the numerical phase value tell us about the physical properties of a specimen? As discussed in Geek Box 5.4, we only observe the difference of the phases of two waves and are unable to observe an absolute value.

X-ray and UV radiation, being a part of the electromagnetic spectrum, belong to invisible light. The term light microscopy is, however, restricted to visible light in this text.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.






While a bright field microscope utilizes light absorption of a sample, a fluorescence microscope makes use of another natural phenomenon called, unsurprisingly, fluorescence. Some special materials, when illuminated with light having a specific wavelength, emit light with another wavelength. As shown in Figure 5.6(b), an excitation filter is required to select a part of the electromagnetic spectrum for exciting the fluorescent materials in the specimen. Another filter is then utilized to separate the emitted light from that used in the excitation process.

Typically, the density and thickness of a specimen are space-variant. Consequently, specimen points absorb light differently, i.e., the energy of light after passing through the specimen is likewise space-variant. Figure 5.6(a) schematically shows how this fact can be utilized in a microscopic setup. The condenser shown in the figure plays the role of concentrating light coming from a light source at the specimen. The specimen information is encoded in the intensity of the light wave which reaches the objective. The background, i.e., the part of the scene which does not contain dense objects, tends to be bright in the resulting image. This observer impression gave the technique its name. A bright field setup is the number-one choice whenever minimization of expenditure or implementation difficulty is the main concern. An example of a bright field image of cells is shown in Figure 5.7.

The resolving power of a microscopic system is defined as the minimum distance between two point sources in the object space for which they are still discernible as two points in the image plane. Intuitively, the two points are distinguishable as long as the sum of the two corresponding Airy patterns contains two distinct peaks. However, the condition under which the two peaks are considered distinct can be defined in several ways. This led to different, but similar, definitions of the resolving power. According to Rayleigh, it is given by the radius of the Airy disk, $d_{\min} = d_{\text{Airy}}$ (cf. Figure 5.12(b)). A slightly different definition, known as the Abbe criterion, is given as $d_{\min} = \frac{0.5\,\lambda}{\mathrm{NA}}$.



Typical light detectors such as CCD chips or retina in our eyes can recognize amplitude variations but they are insensitive to phase distortion. In the 1930s, the Dutch physicist Frits Zernike came up with a brilliant trick for converting the invisible phase shift to a visible amplitude change using an optical filter. His contribution is the basis for a long-established technique in laboratories today known as phase contrast. Figure 5.9(a) shows a bright field image of a sample dominated by amplitude objects. In this particular example, they are cells in suspension. Figure 5.9(b) also shows a bright field image, but of a sample dominated by phase objects. The sample contains ultra-thin adherent cells. In Figure 5.9(c), the same specimen of Figure 5.9(b) is shown, but under a phase contrast microscope. A considerable improvement in contrast and information content can be clearly seen in the phase contrast image.



So far, we could geometrically construct the image of an object in a diverging or a converging lens. At this point, we may ask whether there are closed-form equations which relate the object height h to the image height h′, or the object-lens distance d to the image-lens distance d′.

Figure 5.3 shows the result of applying the rules of image formation in a diverging lens when d < f. It should be noted, however, that: Contrary to the case of converging lenses, when applying these rules to diverging lenses, the image focus F′ is on the side of the incident light rays and the object focus F is on the other side of the lens. Similar to the case described in Figure 5.2, the image is upright and virtual. However, in contrast to Figure 5.2, it is demagnified. We obtain this result with a diverging lens when d > f as well.


Informally speaking, at a point in space r = (x, y, z), we can imagine the light activity as a particle dancing in time according to $e^{i\omega t}$, where t is time and $\omega = 2\pi\xi$ is the angular frequency which determines the light color. In general, this dance is amplitude-scaled and phase-shifted differently at each point in space. Consequently, the wave/particle function ψ(r, t) can be modeled as follows:

$$\psi(\mathbf{r},t) = A(\mathbf{r})\, e^{i(\omega t + \phi(\mathbf{r}))} = A(\mathbf{r})\, e^{i\phi(\mathbf{r})}\, e^{i\omega t} = U(\mathbf{r})\, e^{i\omega t}. \quad (5.6)$$

The term U(r) encodes both the amplitude change A(r) and the phase shift φ(r) as a complex number, and is thus called the complex amplitude of the wave. Eq. (5.6) is not sufficient to describe a wave unless ψ fulfills the celebrated wave equation:

$$\frac{\partial^2 \psi}{\partial t^2} = c^2 \nabla^2 \psi, \quad (5.7)$$

where c is the speed of light in the propagation medium, and $\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$ is the spatial Laplacian. Assuming that ψ can be factorized as ψ(r, t) = ψr(r) ψt(t) (which is the case in Eq. (5.6)), one can derive the time-independent wave equation, also known as Helmholtz's equation:

$$\nabla^2 U(\mathbf{r}) + k^2 U(\mathbf{r}) = 0, \quad (5.8)$$

where k is defined as $k = \frac{\omega}{c}$ and called the wavenumber. An important class of solutions of Helmholtz's equation is given by the following complex amplitude:

$$U_\ell(\mathbf{r}) = A_\ell\, e^{i \mathbf{k}^\top \mathbf{r}}. \quad (5.9)$$

In this solution, the amplitude is constant everywhere with a real value $A_\ell$, whereas the phase depends linearly on position, $\phi_\ell = \mathbf{k}^\top \mathbf{r} = x k_x + y k_y + z k_z$. In order for Eq. (5.9) to satisfy Helmholtz's equation, k must fulfill $k_x^2 + k_y^2 + k_z^2 = k^2$. This fact can be verified by setting U(r) = Uℓ(r) in Eq. (5.8). Moreover, the locus of points in space for which Uℓ(r) = constant is a plane with normal vector k. Therefore, waves described by Eq. (5.9) are called plane waves.
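The plane-wave condition can also be checked symbolically by substituting Eq. (5.9) into Eq. (5.8); the short sympy script below is an illustrative verification (the symbol names are chosen here), not part of the original text.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
kx, ky, kz = sp.symbols('k_x k_y k_z', real=True)
A = sp.symbols('A_l', real=True)

# Plane wave, Eq. (5.9): U(r) = A_l * exp(i k^T r)
U = A * sp.exp(sp.I * (kx * x + ky * y + kz * z))

# Left-hand side of Helmholtz's equation (5.8), with k^2 = kx^2 + ky^2 + kz^2
laplacian = sp.diff(U, x, 2) + sp.diff(U, y, 2) + sp.diff(U, z, 2)
k_squared = kx**2 + ky**2 + kz**2

print(sp.simplify(laplacian + k_squared * U))   # 0 -> Eq. (5.8) is satisfied
```

Here k² is substituted by kx² + ky² + kz², so the expression simplifies to exactly zero; any other choice of k² would leave a non-vanishing residual, which is the plane-wave condition stated above.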


In order to enhance microscopic resolution, one needs to employ light of shorter wavelength and/or an objective of higher numerical aperture. Using shorter wavelengths will be considered in the next section. The numerical aperture, as revealed by Eq. (5.5), is theoretically upper-limited by unity when air (nair ≈ 1) is the medium between the specimen and the objective. In order to go beyond this limit, microscope manufacturers designed objectives which can function when a medium of higher refractive index such as water (nwater ≈ 1.33) or oil (noil ≈ 1.51) is embedded between the specimen and the objective. This led to the development of water immersion objectives and oil immersion objectives.


Diffraction barrier: Due to diffraction, the image of a point source is an Airy pattern. The resolving power dmin of a microscope is thus limited by the width of this pattern.


Quantitative phase microscopy is an umbrella term for a set of techniques by which it is possible to obtain reliable quantitative phase information. Geek Box 5.5 discusses one of the methods to determine quantitative phase in detail: the transport of intensity equation (TIE). Due to the quantitative nature of TIE results, it can be utilized to compute specimen physical descriptors which are difficult to obtain using phase contrast. For instance, it can in principle be used for estimating cell thickness and volume in biological cell cultures. In general, the TIE seems to be attractive when compared to phase contrast for at least two reasons: 1) It is possible to obtain high-contrast phase images using a bright field microscope which is cheap and easy to implement compared to a phase contrast microscope. 2) TIE yields quantitative rather than qualitative phase information. However, every new technique comes with its own problems, and TIE is by no means an exception to this rule. In fact, estimating the axial derivative is very sensitive to the selection of defocus distance Δz. In addition, a TIE solution is prone to be perturbed by a low-frequency bias field which needs to be corrected.


There are at least two shortcomings of fluorescence imaging: Firstly, staining may cause some undesired effects on the sample under study. For instance, it was shown that the dyes used in cell viability detection affect cell stiffness. Secondly, what we see under fluorescence microscopy is the activity of fluorescent dyes which, in general, does not reveal structural information. Moreover, these fluorescent dyes do not always cover the entire imaged object. These two factors lead to incomplete shape information. Confocal laser endomicroscopy also employs fluorescent dyes, yet in a different setup which is discussed in Geek Box 5.2.

If we set the wavelength in Eq. (5.14) to the wavelength at the center of the visible spectrum, λvisible ≈ 550 nm, and the numerical aperture to the theoretical upper bound of oil-immersion numerical apertures, $\mathrm{NA}_{\text{best}} = 1.51$, we obtain a Rayleigh resolution of $d_{\min}^{\text{best}} = 222\,\text{nm} \approx 0.2\,\mu\text{m}$. This value2 is often cited as the resolution limit of optical microscopy. Two distinct points in object space with a distance less than 0.2 μm will be imaged as a sum of two Airy patterns in which only one distinct peak can be recognized. Increasing the magnification will increase the size of this sum of Airy patterns at the image plane, but the enlarged image remains a single-peak pattern. In other words, beyond a certain limit, increasing the magnification does not resolve new details. This phenomenon is known as empty magnification.
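The quoted figure is easy to reproduce; the snippet below evaluates Eq. (5.5) and Eq. (5.14) and contrasts the Rayleigh and Abbe criteria for a few illustrative media and collection angles.

```python
import numpy as np

def numerical_aperture(n, theta_deg):
    """NA = n sin(theta), Eq. (5.5)."""
    return n * np.sin(np.radians(theta_deg))

def rayleigh_limit(wavelength_nm, na):
    """Rayleigh criterion: d_min = d_Airy = 0.61 * lambda / NA, Eq. (5.14)."""
    return 0.61 * wavelength_nm / na

def abbe_limit(wavelength_nm, na):
    """Abbe criterion: d_min = 0.5 * lambda / NA."""
    return 0.5 * wavelength_nm / na

wavelength = 550.0                               # nm, center of the visible spectrum
na_best = 1.51                                   # upper bound for oil immersion
print(rayleigh_limit(wavelength, na_best))       # ~222 nm, the usual "0.2 um" limit
print(abbe_limit(wavelength, na_best))           # ~182 nm

# Effect of the immersion medium for the same collection half angle (e.g. 70 deg):
for n in (1.0, 1.33, 1.51):                      # air, water, oil
    na = numerical_aperture(n, 70.0)
    print(n, round(na, 2), round(rayleigh_limit(wavelength, na), 1))
```

The loop illustrates the point made above about immersion objectives: for a fixed collection angle, the higher-index medium raises the numerical aperture and therefore lowers the achievable dmin.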


In Figure 5.1, a point source creates a point image at focus. This is, however, a result of geometrical optics, which does not take the wave nature of light into account. From a wave-optics perspective, light exhibits the properties of waves, and hence, it undergoes diffraction upon encountering a barrier or a slit. In microscopy, this slit is the finite-sized aperture of the objective. Due to the diffraction process, the image of a point source is a pattern known, after Sir George Airy, as the Airy pattern. As shown in Figure 5.12(a), it is composed of a central spot, known as the Airy disk (in 2-D), surrounded by multiple diffraction rings. The radius of an Airy pattern, when the image is in its best focus, is:

$$d_{\text{Airy}} = \frac{0.61\,\lambda}{\mathrm{NA}}, \quad (5.14)$$

where $\lambda = \frac{2\pi}{k}$ is the wavelength of the incident light. It is noteworthy that dAiry in Eq. (5.14) is given in object-space units. Therefore, in the image plane, the radius of the Airy disk is M · dAiry, where M is the magnification.



A microscopic image of a cell culture: The image was acquired using a Nikon Eclipse TE2000U microscope with a bright field objective of magnification 10× and NA = 0.3.

Geek Box 5.1: Stains for Histology and Cytology

To highlight cellular structures, sections from tissue biopsies and also cytology slides are often dyed or stained. The most common form of stain in histology is a mixture of two substances called hematoxylin and eosin, where the hematoxylin color stains cell nuclei blue and cytoplasm and other cellular structures are dyed in magenta by eosin. Dyes are furthermore used to assess the amount of certain substances, e.g. copper or iron, or biologic structures adhering to certain biomarkers. Besides a main color, often a secondary (or even third) color with strongly different spectral shape is used to dye other cellular compartments and enhance the contrast, a process called counterstaining. In order to prepare a sample, it usually undergoes the process of fixation with formaldehyde and embedding in paraffin wax. The fixation stops a great part of the biologic processes and ensures a proper quality of the slide and a slow degradation process. Embedding in a block of wax is a precondition to cutting thin slices of constant thickness, which are then placed on a microscope slide and covered with a coverslip.

CLE generates video sequences at rates of up to 12 Hz [15] and is clinically used for diagnosis within the gastro-intestinal tract [13]. But its application is not limited there: In the field of neurosurgery, it was shown that a discrimination of brain tumors can be performed on CLE images [9], and it was also successfully used for diagnosis of tumors in the mouth and the upper airways [21, 10].


Earlier in this text, it was mentioned that ideal phase objects are invisible in bright field microscopy. In fact, as demonstrated in Figure 5.11, the aforementioned statement is correct only under the condition that the image is acquired at focus. This phenomenon, i.e., the possibility to visualize phase objects in bright field microscopy, can be interpreted in the light of the TIE. The contrast obtained by defocusing is numerically represented by the left-hand side of Eq. (5.13). The right-hand side reveals that this contrast is, in fact, phase information. The employment of defocusing to visualize transparent samples in a bright field setup is sometimes called defocusing microscopy.

The numerical aperture is determined by θ, the half angle of the maximum light cone, and n, the refractive index of the medium between lens and specimen.

We perceive the physical world around us using our eyes, but only down to a certain limit. Objects with a diameter smaller than 75 μm cannot be recognized by the naked eye, and for this reason, they remained undiscovered for most of human history. Entities which belong to this category include cells (diameter of 10 μm), bacteria (1 μm), viruses (100 nm), molecules (2 nm), and atoms (0.3 nm)1. In fact, the importance of these micro/nano entities in almost every aspect of our life cannot be sufficiently appreciated. Microscopes are the tools which enable us to extend our vision to the micro-world and, despite the prefix micro- in the name, to the nano-world, too. This chapter takes the reader through the basic principles of the most widely-used light microscopy techniques, their advantages, and their inherent limitations. Further microscope types such as scanning tunneling microscopes or atomic force microscopes are beyond the focus of this text. In contrast to the previous chapter, a pinhole projection model is no longer sufficient to explain microscopy. Therefore, we introduce the thin lens model, as it provides explanations for at least two functionalities: light-gathering and magnification.

Figure 5.2 shows the result of applying the rules of image formation, i. e., the three rules mentioned above, on the case when the object is within the focal length (d < f). As can be seen in the figure, the rays do not converge. However, the ray extensions intersect at a point T′, called virtual image, from which the rays appear to diverge. In contrast, the images formed when d > f are called real as they are real convergence points of light rays. Virtual images formed by a converging lens are upright while the real images are upside-down. Another important difference is that virtual images cannot be projected on a screen, a camera chip, or any other surface. Nevertheless, they can be perceived by the human eye because the eye behaves as a converging lens which recollects the diverged light rays on the retina.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Illustration of quantitative phase microscopy using the TIE. The figures show a cell culture of adherent ultra-thin L929 cells.

One obvious way of increasing microscopic resolution is using a wavelength which is shorter than the wavelength of visible light. For instance, it is possible to employ ultraviolet (UV) radiation (wavelength in the range 300 – 100 nm), soft X-rays (10 – 1 nm), hard X-rays (below 1 nm)3, or electron beams (wavelengths below 5 pm are achievable). Each wavelength range allows us to explore a part of the nano-world, but also imposes a new type of challenge for both microscope manufacturers and users. At UV wavelengths, glass strongly absorbs the radiation, and thus, in UV microscopy, the lenses are made of UV-transparent materials such as quartz. Moreover, at the wavelengths of X-ray radiation, the refractive index of solid substances is very close to the refractive index of air. Since the light-focusing performed by a visible-light lens is inherently a refraction process, these lenses cannot be used to focus X-ray beams. In fact, in X-ray microscopy, expensive and impractical devices based on diffraction instead of refraction are employed to replace the typical optical lenses.

Electron microscopy utilizes electromagnetic lenses and cathode rays in order to achieve a drastic improvement in resolution compared to light microscopy. Unlike ultraviolet and X-ray radiation, cathode rays, being electron beams of measurable mass and negative charge, do not belong to electromagnetic radiation. Therefore, the photon-wave duality, and hence the concept of a wavelength, are not directly applicable. One of the major contributions which led to the development of electron microscopy is the theory of Louis de Broglie, who stated in his PhD thesis that the particle-wave duality is also valid for matter. According to de Broglie, the wavelength of an electron of mass me and speed ce is given by:

$$\lambda_e = \frac{\rho}{m_e \cdot c_e}, \quad (5.15)$$

where ρ is the Planck constant. As an alternative to refraction in optical lenses, electromagnetic lenses exploit the deflection of electron beams by magnetic fields to focus the beams. In an electron microscope, similar to a cathode-ray tube, an electron beam is emitted into vacuum by heating the cathode and then accelerated by applying a voltage between the cathode and the anode. The speed of the electrons, and hence the wavelength (cf. Eq. (5.15)), can be controlled by varying the voltage.

The first electron microscopes were, from a schematic point of view, very similar to bright field microscopes. The acquired image is based on the specimen's absorption of electrons transmitted through the sample, and hence, they were given the name transmission electron microscopes. A resolution as high as 0.2 nm is achieved by transmission electron microscopes. A major limitation of this scheme, however, is that only very thin samples can be imaged. Scanning electron microscopy was developed to cope with this difficulty. To do so, a primary electron beam is focused by an electromagnetic lens on a very small part of the specimen. This primary beam incites the emission of a secondary electron beam. The intensity of this secondary beam is recorded. Afterwards, the primary beam is moved to another part of the specimen, and the same process is applied. This is repeated so that the entire specimen is scanned in a raster pattern, and the final image is obtained from the recorded values of the secondary beam intensities. Scanning electron microscopy can be used to image thick samples, even though it captures only the surface details. In addition, the secondary beam is accompanied by X-ray emission characteristic of the material which emitted it. Therefore, it can be employed to reveal the chemical composition of specimens.

Both scanning and transmission electron microscopes work in a vacuum. Consequently, they can be used only for dead specimens. From this perspective, X-ray and traditional light microscopy are preferred over electron microscopy. Although X-ray and electron microscopes provide a considerable improvement in resolution over light microscopes, they are extremely expensive, require large hardware, and mostly involve complicated sample preparation.
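For a rough feel for Eq. (5.15), the sketch below converts an accelerating voltage into an electron speed with the non-relativistic energy balance eV = me·ce²/2 (an assumption made here; at around 100 kV relativistic corrections already matter) and then evaluates the de Broglie wavelength. The voltage values are illustrative.

```python
import numpy as np

h = 6.626e-34       # Planck constant (rho in Eq. 5.15), J*s
m_e = 9.109e-31     # electron mass, kg
q_e = 1.602e-19     # elementary charge, C

def electron_wavelength(voltage):
    """De Broglie wavelength (Eq. 5.15) of an electron accelerated through
    `voltage` volts, using the non-relativistic balance e*V = m_e*c_e^2 / 2."""
    speed = np.sqrt(2.0 * q_e * voltage / m_e)
    return h / (m_e * speed)

for v in (100.0, 10e3, 100e3):         # illustrative accelerating voltages
    print(f"{v:>8.0f} V : {electron_wavelength(v)*1e12:.2f} pm")
```

At roughly 100 kV this already gives a wavelength of a few picometers, consistent with the "below 5 pm" figure quoted above.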
