There are various types of image sensors, with the most common being CCD (Charge Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor). CCD and CMOS sensors work differently in how they capture and process light, resulting in differences in image quality, such as color reproduction, noise, and dynamic range.
By flipping the sensor so that light enters through the photodetector silicon layer (from the 'back'), light has less distance to travel and undergoes less scattering, resulting in a much higher peak QE of >95%. While back-illumination was achieved earlier with some CCDs and most EMCCDs, it took longer for CMOS due to the complex per-pixel electronics involved and the specific silicon thickness required to capture different wavelengths of light. Either way, the result is a good 15-20% QE increase at peak, and a 10-15% QE increase out to >1000 nm, doubling the sensitivity in these regions. Removing the microlenses also unlocked a new QE region from 200-400 nm, well suited to UV imaging.
Scientific imaging technologies have continued to advance from CCD to EMCCD, sCMOS, and back-illuminated sCMOS, in order to deliver the best speed, sensitivity, resolution, and field of view for your sample and your application. Choosing the most suitable camera technology for your imaging system can improve every aspect of your experiments and allow you to be quantitative in your research. While CCD and EMCCD technologies enjoyed popularity in scientific imaging, over the past few decades sCMOS technology has come to the fore as an ideal solution for imaging in life sciences.
Focusing screens are separate optical components in a camera's viewfinder system that aid in precise manual focusing. While they don't directly impact the performance of image sensors, they can play a role in ensuring accurate focus, which ultimately affects the sharpness and quality of the final image. However, with modern autofocus systems, the significance of focusing screens has diminished.
In conclusion, there are various sensor sizes and types available in digital cameras today. Understanding the differences amongst them can help you choose the right camera for your needs and budget. Regardless of the type and size, all camera sensors share the common goal of capturing and converting light into digital images for us to cherish.
The heart of the camera is the sensor, which generates an image through a series of steps taking photons to electrons to grey levels. For information on how an image is made, see our article of the same name. This article discusses the different camera sensor types and their specifications, including:
Micro Four Thirds (MFT): This sensor measures around 17.3mm x 13.0mm and is mainly used in compact mirrorless cameras. MFT sensors offer a balance between portability and image quality.
Fuji is renowned for its unique X-Trans sensor, a variation of CMOS, found in their interchangeable lens cameras, as explained here. This design aims to deliver better color reproduction and reduce moiré patterns.
EM gain decay, or ageing, is a phenomenon that is not fully understood, but it essentially involves charge building up in the silicon sensor between the EM electrode and the photodetector. This build-up of charge reduces the effect of EM gain, hence the name. The greater the initial signal intensity and the higher the EM gain, the faster the EM gain decays; using an EM gain of 1000x on a large signal would quickly result in decay. The EM gain is therefore not the same each time, leading to a lack of reproducibility between experiments and limiting the usefulness of the camera as a quantitative imaging tool. EMCCDs essentially have limited lifespans and require regular recalibration, constraining how the camera can be used and how much EM gain can be applied in an experiment without damaging the camera. For a camera purchased for daily use in a research lab, it can be disappointing to learn that it will become less and less reliable over time.
To reconstruct a full-color image from the incomplete information gathered by the Bayer filter, a process called demosaicing is used. By estimating the missing color information at each pixel from the red, green, and blue measurements of neighboring pixels, demosaicing algorithms produce a high-quality color image.
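As an illustration, here is a minimal bilinear demosaicing sketch in Python (using numpy and scipy, and assuming an RGGB tile layout); real camera processors use far more sophisticated, edge-aware algorithms:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic (2D float array)."""
    h, w = raw.shape
    # Masks marking which pixels carry each colour in an RGGB tile
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    # Interpolation kernels: green averages its 4 neighbours,
    # red/blue average the nearest same-colour pixels (edges or corners)
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4
    rgb = np.empty((h, w, 3))
    rgb[..., 0] = convolve(raw * r_mask, k_rb)
    rgb[..., 1] = convolve(raw * g_mask, k_g)
    rgb[..., 2] = convolve(raw * b_mask, k_rb)
    return rgb
```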
EMCCDs are also faster than CCDs. In a CCD, electrons are moved across the sensor well below the maximum possible speed, because the faster the electrons are shuttled about, the greater the read noise. Read noise is applied to every signal at readout: if a CCD has a read noise of ±5 electrons and detects a signal of 10 electrons, the value read out could fall anywhere between roughly 5 and 15 electrons. This has a big impact on sensitivity and speed, as CCDs deliberately move electrons slowly to keep read noise down. With an EMCCD, however, the signal can simply be multiplied until read noise has a negligible effect. EMCCDs can therefore move signal around at maximum speed, resulting in very large read noise values of 60-80 electrons, but since signals are often multiplied hundreds of times, the impact of that read noise is small. In this manner, EMCCDs can operate at much higher speeds than CCDs, achieving around 30-100 fps across the full frame. This is only possible thanks to the EM gain of EMCCDs.
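To see how read noise blurs a measurement, here is a small simulation, assuming (as a simplification) that read noise is Gaussian with a 5 e- rms spread:

```python
import numpy as np

rng = np.random.default_rng(0)
signal_e = 10        # true signal, electrons
read_noise_e = 5     # read noise (rms), electrons

# A million reads of the same 10 e- signal through a noisy output node
reads = signal_e + rng.normal(0, read_noise_e, size=1_000_000)
print(reads.mean())  # ~10: correct on average
print(reads.std())   # ~5: but any single read can easily land between 5 and 15
```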
APS-C Sensor: APS-C sensors are smaller than full-frame sensors, measuring approximately 22.2mm x 14.8mm. Cameras with APS-C sensors are more compact and affordable compared to full-frame cameras, making them a popular choice for amateurs and professionals alike. However, they have a crop factor, which means the effective focal length will be different when compared to a full-frame sensor.
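As a quick worked example, the crop factor is simply the ratio of sensor diagonals, so it can be computed directly from the dimensions above (a sketch in Python):

```python
full_frame_diag = (36**2 + 24**2) ** 0.5    # ≈ 43.3 mm
apsc_diag = (22.2**2 + 14.8**2) ** 0.5      # ≈ 26.7 mm
crop = full_frame_diag / apsc_diag          # ≈ 1.6

# A 50 mm lens on this APS-C body frames like an ~80 mm lens on full-frame
print(f"crop factor {crop:.2f}, 50 mm -> {50 * crop:.0f} mm equivalent")
```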
Lastly, Samsung, while no longer producing new camera models, was known for its NX series of mirrorless cameras using APS-C sized sensors. They also manufactured and supplied sensors for various industries, including smartphones, before they exited the digital camera market.
The main issues with CCDs are their lack of speed and sensitivity, making it a challenge to perform low-light imaging or to capture dynamic moving samples.
For the interline-transfer CCD, a portion of each pixel is masked and not exposed to light. Upon exposure, the electron signal is shifted into this masked portion, and then sent to the readout register as normal. Similarly to the frame-transfer sensor, this helps increase the speed, as the exposed area can generate a new image while the original image is processed. However, each pixel in this sensor is smaller (as a portion is masked), and this decreases the sensitivity as fewer photons can be detected by smaller pixels. These sensors often come paired with microlenses to better direct light and improve the QE.
Both CCD and CMOS sensors can be classified by their chroma type (color or monochromatic) and the shutter type (global or rolling shutter). Additionally, they can be described by the resolution, frame rate, pixel size, and sensor format. Finally, most image sensors are made using silicon as the substrate material, which enables the efficient conversion of light into electrical signals.
Advancements in autofocus technology have also played a crucial role in the evolution of image sensors. Modern cameras and smartphones now use advanced algorithms and technologies such as phase-detection and contrast-detection autofocus to quickly and accurately focus on subjects. As a result, capturing sharp images is easier and more efficient than ever before.
In Fig.8 we can see the bias of a split-sensor camera, showing a horizontal line separating the two halves of the sensor, along with other horizontal scrolling lines. This arises because the two sensor halves are never exactly the same, owing to noise and fluctuations. The effect is exacerbated when 100 image frames are averaged, as seen in the lower image: the sensor split is clear, as are vertical columns across the image. This is fixed-pattern column noise, again due to the paired column ADCs of the sensor. Such noise can interfere with signal in low-light conditions.
The first step for a sensor is the conversion of photons of light into electrons (known as photoelectrons). The efficiency of this conversion is known as the quantum efficiency (QE) and is shown as a percentage.
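Since each photon either converts or it doesn't, QE can be modelled as a per-photon probability. A minimal sketch in Python (values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
incident_photons = 100
qe = 0.95   # 95% QE, as for a back-illuminated sensor

# Each photon independently converts to a photoelectron with probability QE
photoelectrons = rng.binomial(incident_photons, qe)
print(photoelectrons)   # ~95 on average
```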
While higher megapixel counts can lead to better image quality, it's essential to also consider the tradeoff of noise. Noise refers to random signals or unwanted visual distortions in a photograph. As pixel density increases, pixels become smaller, and their ability to capture light decreases, sometimes resulting in higher noise levels. Balancing the number of pixels and noise reduction is a critical part of achieving sharp images.
Sony is a market leader in sensor development, manufacturing advanced CMOS sensors for their own cameras and supplying them to other companies. They offer sensors in various sizes, including full-frame, APS-C, and even smaller for compact and smartphone cameras.
1-inch Sensor: Found in high-end compact cameras, these sensors measure 12.8mm x 9.6mm. They provide better image quality than smaller point-and-shoot cameras but are not as large as MFT or APS-C sensors.
While MOS and CMOS technology has existed since before the CCD (since the ~1950s), it was only in 2009 that CMOS cameras became quantitative enough for scientific imaging, which is why CMOS cameras for science are referred to as scientific CMOS, or sCMOS.
CCD pixels are also typically quite small (such as ~4 µm) meaning that while these sensors can achieve a high resolution, they lack sensitivity, as a larger pixel can collect more photons. This limits signal collection and is compounded by the limited QE of front-illuminated CCDs, which often only reaches 75% at maximum.
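The sensitivity penalty scales with pixel area rather than width; a rough comparison with illustrative sizes:

```python
small_px, large_px = 4, 16               # pixel widths in µm
area_ratio = (large_px / small_px) ** 2  # = 16
# A 16 µm pixel presents ~16x the light-collecting area of a 4 µm pixel
```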
In this manner, electrons can be moved anywhere on a sensor, and are typically moved to an area where they can be amplified and converted into a digital signal, in order to be displayed as an image. However, this process occurs differently in each type of camera sensor.
In the world of digital cameras and smartphones, two common types of image sensors are the CCD (Charge-Coupled Device) sensors and the CMOS (Complementary Metal-Oxide-Semiconductor) sensors. While CCD sensors provide high-quality images with low noise, CMOS sensors have become increasingly popular due to their lower power consumption and faster readout speeds.
Essentially, there are very few data readout channels for a CCD, meaning the data processing is slowed. Most CCDs operate at between 1-20 frames per second, as a CCD is a serial device and can only read the electron charge packets one at a time. Imagine a bucket brigade, where electrons can only be passed from area to area one at a time, or a theatre with only one exit but several million seats.
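A back-of-the-envelope calculation shows why serial readout caps the frame rate. The sensor size and clock rate below are hypothetical but typical:

```python
pixels = 1392 * 1040        # a mid-sized CCD sensor (illustrative)
pixel_rate = 20e6           # 20 MHz readout clock (illustrative)

frame_time = pixels / pixel_rate    # every pixel queues through one output node
print(f"{1 / frame_time:.0f} fps")  # ≈ 14 fps, inside the 1-20 fps range above
```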
CMOS sensors, on the other hand, are increasingly popular due to their energy efficiency, faster processing, and lower manufacturing costs. Each pixel on a CMOS sensor includes its own amplification and readout circuits, allowing for faster data output and more parallel processing. This results in a higher frame rate and improved performance in low light conditions.
Although the Bayer filter has been widely adopted, it's crucial to understand that image quality depends on both the sensor and the image processor. A robust demosaicing algorithm and an efficient image processor can help enhance the final image. While there might be some challenges involved, such as noise reduction and accurate color reproduction, the combination of Bayer filters and demosaicing has been proven to deliver great results in digital imaging.
These early sCMOS sensors were front-illuminated and therefore had a limited QE (70-80%), further impacting their sensitivity.
Back-illumination allows for a large increase in camera QE across wavelengths from UV to IR, due to the way that light can access the camera sensor. Figure 9 highlights the differences between a front-illuminated and back-illuminated camera sensor.
Despite the advantages of electron multiplication, it introduces a lot of complexity to the camera and leads to several major downsides. The main technological issues are EM Gain Decay, EM Gain Stability and Excess Noise Factor.
Image sensors are the heart of digital cameras, playing a crucial role in capturing and processing the light that enters a camera and transforming it into a digital image. These essential components have a significant impact on the quality and performance of cameras, with various types and sizes available to cater to different photographic needs and budgets.
In a CCD, after exposure to light and conversion of photons to photoelectrons, the electrons are moved down the sensor row by row until they reach an area that isn't exposed to light, the readout register. Once moved into the readout register, photoelectrons are moved off one by one into the output node. In this node they are amplified into a readable voltage, converted into a digital grey level using the analogue to digital converter (ADC) and sent to the computer via the imaging software.
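The final conversion step can be summarised in two numbers: the system gain (electrons per grey level, or ADU) and a fixed bias offset. A minimal sketch with assumed values:

```python
electrons = 120
gain = 0.5     # system gain, electrons per ADU (illustrative)
offset = 100   # fixed bias offset, keeps the digitised output positive

grey_level = round(electrons / gain) + offset   # 120 / 0.5 + 100 = 340 ADU
```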
Leica, a luxury camera brand, focuses on high-quality materials and craftsmanship. They use larger full-frame sensors in their M series rangefinder cameras and also have a partnership with Panasonic in developing compact and mirrorless cameras with Micro Four Thirds sensors.
Image sensors are the heart of digital cameras, responsible for capturing light and converting it into an electrical signal that can be processed to create a digital image. There are two main types of image sensors: CCD (Charge-Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor).
In recent years, advancements in image sensor technology have paved the way for photographers to achieve stunning image quality without compromising on portability or convenience. As we delve into the world of image sensors, we will explore how different types and sizes can affect a camera's output and discover the role of pixels in determining image quality.
Some early sCMOS cameras, in an effort to run at a higher speed, featured a split sensor, where each half of the sCMOS sensor had its own set of ADCs, allowing the camera to image at speeds up to 100 fps. However, this split caused patterns and artifacts in the camera bias, which would be clearly visible in low-light conditions and would interfere with the signal, as seen in Figure 8.
EMCCDs work in a very similar way to frame-transfer CCDs: electrons move from the image array to the masked array, then on to the readout register. At this point the main difference emerges: the EM gain register. EMCCDs use a process called impact ionisation to knock extra electrons out of the silicon sensor, thereby multiplying the signal. This EM process occurs step by step, and users can choose a value between 1 and 1000 to have their signal multiplied by that factor in the EM gain register. If an EMCCD detects a signal of 5 electrons with EM gain set to 200, the final signal entering the output node will be 1000 electrons. This allows EMCCDs to detect extremely small signals, as they can be multiplied far above the noise floor.
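The multiplication is not a single 200x step but a cascade of stages, each with a small chance of adding an electron. A simplified simulation (the stage count and per-stage probability are assumptions, chosen to give ~200x overall):

```python
import numpy as np

rng = np.random.default_rng(2)
n_stages = 536                     # EM register stages (illustrative)
p = 200 ** (1 / n_stages) - 1      # per-stage ionisation probability -> ~200x total

electrons = 5                      # detected signal
for _ in range(n_stages):
    # each electron may knock out one extra electron by impact ionisation
    electrons += rng.binomial(electrons, p)
print(electrons)   # ~1000 on average (5 e- x 200), but varies run to run
```

The run-to-run spread in the final count is exactly the randomness behind the excess noise factor discussed below.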
This combination of large pixels, back-illumination and electron multiplication makes EMCCDs extremely sensitive, far more so than CCDs.
When it comes to digital cameras, the sensor is the heart of the system. It plays a crucial role in capturing the light coming through the lens and forming an image. There are several sensor sizes and types available in the market, each with its own advantages and disadvantages. In this section, we will present a brief overview of some common sensor sizes and types.
Overall, while CCDs were the first digital cameras, for scientific imaging purposes in the modern day they are lacking in speed, sensitivity and field of view.
In digital photography, the image sensor is the foundation of capturing a high-quality photograph. The sensor is composed of millions of pixels that absorb light particles and transform them into electrical signals. These signals are then processed into an image viewable on a device. As the number of pixels, often measured in megapixels, increases, so does the potential for more detail and higher resolution in the photograph.
In addition, CCDs have a small full-well capacity, meaning the number of electrons that can be stored in each pixel is limited. If a pixel can only store 200 electrons, receiving a signal of >200 electrons leads to saturation, where the pixel becomes full and displays the brightest possible signal, and blooming, where the pixel overflows and the excess signal is smeared down the sensor as the electrons are moved to the readout register.
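Saturation itself is easy to model as a hard ceiling on each pixel (blooming, the spill-over into neighbours, is not captured by this simple clip):

```python
import numpy as np

full_well = 200                         # electrons per pixel, from the example above
signal = np.array([50, 180, 350])       # incident signal, electrons

stored = np.minimum(signal, full_well)  # [50, 180, 200]: the 350 e- pixel saturates
```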
Olympus and Panasonic both utilize the Micro Four Thirds system, featuring smaller sensors with a 2x crop factor compared to full-frame. This system allows for compact and lightweight camera bodies and lenses, catering to photographers and videographers prioritizing portability.
BI sCMOS has a much greater signal-collection ability than FI sCMOS due to the increase in QE and the elimination of patterns/artifacts, giving a clean background. Along with low read noise, BI sCMOS can match and outperform EMCCD in sensitivity, while also offering much higher speed, higher resolution, and a larger field of view.
Sensor size plays a significant role in determining image quality. Larger sensors typically capture more light and offer higher resolutions than smaller ones. This leads to improved image quality with less noise and better dynamic range. However, larger sensors may require more expensive lenses and can result in larger camera bodies. Sensor size and image quality are important factors to consider when choosing a digital camera.
All the sensor types discussed here operate based on the fact that all electrons have a negative charge (the electron symbol being e-). This means that electrons can be attracted using a positive voltage, granting the ability to move electrons around a sensor by applying a voltage to certain areas of the sensor, as seen in Figure 1.
While an EMCCD can multiply signal far above the reach of read noise, these cameras are subject to other sources of noise unique to EMCCDs. The number of photons a camera detects is not the same every second; photons typically fall like rain rather than arriving at the sensor in regimented rows. This disparity between measurements is called photon shot noise. Photon shot noise and other noise sources exist in the signal as soon as it arrives on the sensor, and they are all multiplied up along with the signal, resulting in the excess noise factor. The combination of random photon arrival and random EM multiplication introduces extra error, effectively multiplying all noise sources (predominantly photon shot noise) by a factor of ~1.4x. So while an EMCCD may eliminate read noise, it introduces its own sources of noise, impacting the signal-to-noise ratio and the camera's sensitivity.
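In SNR terms, a ~1.4x (√2) excess noise factor is equivalent to halving the quantum efficiency. A quick worked comparison with illustrative numbers:

```python
import math

signal = 100                      # mean photoelectrons per pixel
shot_noise = math.sqrt(signal)    # photon shot noise: 10 e- rms
enf = 1.4                         # excess noise factor of EM multiplication (~sqrt(2))

snr_ideal = signal / shot_noise           # 10.0 without multiplication noise
snr_emccd = signal / (enf * shot_noise)   # ≈ 7.1 once EM noise is included
```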
CCD sensors have been used in digital cameras for many years and are known for their excellent image quality and low noise levels. Each pixel on a CCD sensor is represented by a capacitive circuit that holds an electrical charge proportional to the amount of light it receives. After capturing the light, the charges are transferred through a series of registers and amplifiers to be converted into digital values.
Finally, the large pixels of an EMCCD lead to these cameras having a lower resolution than CCDs; EMCCDs have a small field of view due to their small sensors; and even today (20 years later) EMCCDs are still the most expensive format of scientific camera.
In conclusion, we see that pixels, megapixels, and their properties play a significant role in determining image quality. Factors like dynamic range and noise should also be considered to ensure you capture the best possible photographs with your camera sensor. Armed with this knowledge, you can make informed decisions on choosing a camera that best suits your photography needs.
This combination of front-illumination, split sensors, patterns/artifacts, and smaller pixels all led to early sCMOS lacking in sensitivity.
When discussing camera sensors, we can look at some of the major camera brands and their respective sensor types. In this section, we'll briefly cover Canon, Nikon, Sony, Fuji, Olympus, Leica, Panasonic, and Samsung.
CCDs were the first digital cameras, available since the 1970s for scientific imaging. CCDs enjoyed active use for a number of decades and were well suited to high-light applications such as cell documentation or imaging fixed samples. However, the technology was lacking in sensitivity and speed, limiting the samples that could be imaged at acceptable levels.
In addition, the EM gain process itself is not stable, and various fluctuations can occur. One example is that EM gain is temperature-dependent: for EMCCDs to have reliable EM gain they typically operate at temperatures from -60 ºC to -80 ºC, meaning they require extensive forced-air or liquid cooling. This all adds to the camera's complexity and cost, especially if a liquid cooling rig needs to be installed with the camera.
Every stage that light has to travel through scatters some of it, meaning the QE of front-illuminated cameras is often limited to 50-80%, even with microlenses designed specifically to focus light onto each pixel. Due to the additional electronics of CMOS sensors (a miniaturized capacitor and amplifier on each pixel), there can be even more scattering.
In 2016 Photometrics released the first back-illuminated sCMOS camera, the Prime 95B. Back-illuminated (BI) sCMOS cameras greatly improve on the sensitivity of early front-illuminated sCMOS while retaining all the other CMOS advantages, such as high speed and a large field of view. With a much higher QE thanks to back-illumination (up to 95%, hence the name Prime 95B), a single non-split sensor, more varied pixel sizes, and a cleaner background, BI sCMOS is an all-in-one imaging solution.
In summary, image sensor technology has come a long way in recent years, with advancements like BSI, Stacked CMOS sensors, and improved autofocus systems making digital cameras and smartphones more capable and versatile than ever before. We are excited to see what the future holds for this essential component of modern photography.
Canon is known for its full-frame and APS-C-sized sensors. A popular example is the Canon EOS 5D, which features a full-frame CMOS sensor. They also utilize advanced technology like Dual Gain Output (DGO) and Stacked CMOS sensors, providing improved low light performance and faster readout speeds as explained in this article.
Quantitative scientific cameras are vital for sensitive, fast imaging of a variety of samples for a variety of applications. Camera technologies have advanced over time, from the earliest cameras to truly modern camera technologies, which can push the envelope of what is possible in scientific imaging and allow us to see the previously unseen.
CMOS technology differs from CCD and EMCCD, the main factor being parallelization: CMOS sensors operate in parallel, allowing for much higher speeds.
Another critical aspect of image quality is dynamic range. This factor represents the ability of a camera sensor to capture a wide range of light intensities from the darkest shadows to the brightest highlights. A larger dynamic range will result in photographs reflecting more natural lighting conditions and stronger color accuracy.
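Dynamic range is usually quoted as the ratio of the largest storable signal (full-well capacity) to the smallest detectable one (read noise). A worked example with illustrative numbers:

```python
import math

full_well = 30_000    # electrons (illustrative)
read_noise = 1.6      # electrons rms (illustrative)

dr = full_well / read_noise
print(f"{dr:,.0f}:1 = {math.log2(dr):.1f} stops = {20 * math.log10(dr):.1f} dB")
# ≈ 18,750:1 = 14.2 stops = 85.5 dB
```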
In recent years, Back-Side Illumination (BSI) technology has significantly improved image sensor performance. BSI sensors allow for increased light sensitivity and reduced noise, especially in low light conditions. This technology can be found in both digital cameras and smartphones.
In extreme cases (such as daylight illumination of a scientific camera), there is a charge overload in the output node, causing the output amplification chain to collapse, resulting in a zero (completely dark) image.
While EMCCDs greatly improved on the speed and sensitivity of CCDs, they brought their own issues and continued to limit the amount of information that could be obtained from the microscope.
The Bayer filter works by utilizing a Bayer pattern, which consists of red, green, and blue (often abbreviated as RGB) color filters. The arrangement of these color filters is: 50% green, 25% red, and 25% blue. This replicates the sensitivity of the human eye, which is more sensitive to green light. The pattern helps the image processor gather color information for each pixel in the sensor, but it results in an incomplete color representation.
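The whole pattern is just a 2x2 tile repeated across the sensor; a tiny sketch, assuming the common RGGB arrangement:

```python
import numpy as np

tile = np.array([["R", "G"],      # one RGGB tile: 2 green sites,
                 ["G", "B"]])     # 1 red and 1 blue per 4 pixels
cfa = np.tile(tile, (2, 3))       # repeat across (a corner of) the sensor
print(cfa)
```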
EMCCDs first emerged onto the scientific imaging scene in 2000 with the Cascade 650 from Photometrics. EMCCDs offered faster and more sensitive imaging than CCDs, useful for low-light imaging or even photon counting.
When choosing a digital camera, it is important to consider the specific strengths and weaknesses of the image sensor being utilized. While CCDs may excel in overall image quality, CMOS sensors can provide faster performance and greater flexibility in demanding lighting situations. Overall, understanding the underlying technology of digital camera sensors can help you make an informed decision about the optimal camera for your needs.
The number of electrons is linearly proportional to the number of photons, allowing the camera to be quantitative. The design seen in Fig.2 is known as a full-frame CCD sensor, but there are other designs known as frame-transfer CCD and interline-transfer CCD that are shown in Fig.3.
Nikon primarily uses CMOS sensors, in both full-frame and APS-C formats. They recently introduced the Z9, which features a revolutionary stacked sensor design focusing on video capabilities, highlighted in this source.
Full-Frame Sensor: Full-frame sensors are the largest commonly used sensors, measuring 36mm x 24mm. They offer the best image quality, low-light performance, and dynamic range. Full-frame sensors are primarily found in high-end DSLRs and mirrorless cameras. With a full-frame sensor, you get a true field of view and incredible details in the captured images.
This order also reflects the chronological order in which these sensor types were introduced; we will go through them one at a time, in a journey through the history of scientific imaging.
EMCCDs achieved this in a number of ways. The cameras are back-illuminated (increasing the QE to ~90%) and have very large pixels (16-24 µm), both of which greatly increase the sensitivity. The most significant addition, however, is the EM in EMCCD: electron multiplication.
In addition, CMOS sensors had a large full-well capacity, meaning they had a large dynamic range and could image dark and bright signals simultaneously, without the saturation or blooming seen with a CCD.
CMOS sensors have also been adopted by the commercial imaging industry, meaning that nearly every smartphone camera, digital camera, or imaging device uses a CMOS sensor. This makes these sensors easier and cheaper to manufacture, allowing sCMOS cameras to feature large sensors and have much larger fields of view than CCD/EMCCD, to the point where some sCMOS cameras can capture all the information from the microscope.
In a CMOS sensor there are miniaturized electronics on every single pixel, namely a capacitor and amplifier. This means that a photon is converted to an electron by the pixel, and then the electron is immediately converted to a readable voltage while still on the pixel. In addition, there is an ADC for every single column, meaning that each ADC has far less data to read out than a CCD/EMCCD ADC, which has to read out the entire sensor. This combination allows CMOS sensors to work in parallel, and process data much faster than CCD/EMCCD technologies. By moving electrons much slower than the potential max speed, CMOS sensors also have a much lower read noise than CCD/EMCCD, allowing them to perform low-light imaging and work with weak fluorescence or live cells.
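A rough timing model makes the difference clear. The clock rates below are assumptions, chosen only to illustrate serial versus column-parallel scaling:

```python
rows, cols = 2048, 2048   # sensor geometry (illustrative)

# CCD-style serial readout: a single fast ADC digitises every pixel in turn
ccd_frame_time = rows * cols / 20e6   # 20 MHz clock -> ~0.21 s (~5 fps)

# CMOS column-parallel readout: one slower ADC per column, a whole row at a time
cmos_frame_time = rows * 10e-6        # 10 µs per row -> ~0.02 s (~49 fps)
print(1 / ccd_frame_time, 1 / cmos_frame_time)
```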
In modern digital cameras, we find an important color reproduction element known as the Bayer filter. Named after its inventor Bryce Bayer, this microfilter overlay allows photosensors, which usually only record light intensity, to record light wavelength as well. The Bayer filter is actually a color filter array (CFA), and it is the most common type employed in digital cameras today.
Bit depth refers to the amount of color and tonal information captured by an image sensor. A higher bit depth allows for a greater dynamic range, capturing more details in both shadows and highlights. This results in smoother color gradients and improved overall image quality.
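Bit depth sets the number of available grey levels directly, as 2 raised to the bit depth:

```python
for bits in (8, 12, 16):
    print(f"{bits}-bit -> {2**bits:,} grey levels")
# 8-bit -> 256, 12-bit -> 4,096, 16-bit -> 65,536
```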
Early sCMOS cameras featured much higher speeds and larger fields of view than CCD/EMCCD, and with a range of pixel sizes, there were CMOS cameras that imaged at very high resolution, especially compared to EMCCD. However, the large pixel and electron multiplication of EMCCDs meant that early sCMOS cameras couldn't rival EMCCD when it came to sensitivity. When it came to extreme low-light imaging or the need for sensitivity, EMCCD still had the edge.
Another major innovation is the Stacked CMOS sensor, which offers increased data processing speeds and a smaller overall size. This type of sensor has multiple layers, with the top layer dedicated to pixel sensing and the bottom layer responsible for processing the data. This design allows for faster readout and improved performance in high-speed photography situations.
In a frame-transfer CCD the sensor is divided into two: the image array (where light from the sample hits the sensor) and the storage array (where signal is temporarily stored before readout). The storage array is not exposed to light, so when electrons are moved to this array, a second image can be exposed on the image array while the first image is processed from the storage array. The advantage is that a frame-transfer sensor can operate at greater speeds than a full-frame sensor, but the sensor design is more complex and requires a larger sensor (to accommodate the storage array), or the sensor is smaller as a portion is made into a storage array.
A digital camera's sensor can affect the image processing capabilities of the camera. For example, some sensors incorporate advanced technologies that enable faster processing or provide better low-light performance. Additionally, the choice of a CMOS or CCD sensor can influence the camera's image processing speed and power consumption.
Image sensors are the heart of digital cameras, responsible for capturing light and converting it into electronic signals to form a digital image. They play a crucial role in determining the overall image quality and resolution. Digital camera sensors vary in size and type, with each providing unique advantages and drawbacks.
Finally, CCD sensors are typically quite small, with an 11-16 mm diagonal, which limits the field of view that can be displayed on the camera and means that not all of the information from the microscope can be captured by the camera.