While there are a number of different types of camera sensor, by far the most prevalent is the complementary metal-oxide semiconductor (CMOS) sensor, which can be found inside the vast majority of modern digital cameras.

As covered above, a single pixel can only record a single value. But if you zoom into a digital image, each individual pixel can contain a mixture of colors, rather than just the red, green, or blue allowed by the color filter array.

By stacking them in this way, the distance the pixel values have to travel is drastically reduced, resulting in much faster processing speeds.


Canon produces a rectilinear 11-24 mm lens — which is insane — that allows capturing sharp wide-angle images. How wide? Let's mount the lens on a Canon EOS 550D and calculate this camera's field of view!

That is an extremely wide angle of view: it covers about 1,700 square degrees! Still, a full sphere spans roughly 41,253 square degrees, so you would need almost 25 of these fields to capture a true 360° picture.

The sensor size of your camera significantly affects the quality of your pictures. A camera with a larger sensor will give you a wider field of view for the same lens, maintaining the same magnification: your subject will be surrounded by more background. The advantages are relative, though: when printing the photograph in the same format, a larger field of view will necessarily translate to a lower magnification.

With the move to back-side illumination enabling much higher resolutions and stacked sensors increasing readout speeds so significantly, recent developments amount to nothing short of a revolution in CMOS camera sensor technology.

Here lies the relative nature of the field of view: its value varies with the distance between the camera and the subject. That's why, within the same angle of view, you can fit both the Moon and a passenger airplane.


The image processor is able to read these digital signals collectively and translate them into an image, because each pixel is assigned an individual value, depending on the intensity of light it was exposed to.

This is done automatically by the camera’s built-in processor, which then turns it into a viewable image file format such as JPEG or HEIF.

File types such as JPEG and HEIF are designed to make image files easily portable, so significant compression takes place to achieve the smallest possible file sizes.
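To see the size-quality trade-off concretely, here is a minimal sketch using the Pillow and NumPy libraries: it encodes the same image at two JPEG quality settings and compares the byte counts (the image content and the quality values are arbitrary choices, not from the article).

```python
from io import BytesIO

import numpy as np
from PIL import Image

# Encode the same frame at two JPEG quality settings and compare sizes.
# Synthetic noise is used only to keep the script self-contained.
img = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

for quality in (95, 50):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    print(f"quality={quality}: {buf.tell():,} bytes")
```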

Using angles, it is 26.5° × 17.7°, the horizontal and vertical angle of view, respectively. To calculate these values, insert the sensor dimension and the focal length into the angle of view formula:

aov_i = 2 · arctan(s_i / (2 · f))

where s_i is the sensor's dimension in the chosen direction and f is the focal length of the lens.

Like any technology, camera sensors have come a long way in the past decade alone, and look to continue this development into the future.


The reason there is a higher frequency of green filters is because the filter array has been designed to mimic the human eye’s higher sensitivity to green light.

On the other hand, let's calculate the camera field of view for a typical telephoto set-up. We keep the same camera but mount a 200 mm lens. Insert the values in the appropriate fields of our camera field of view calculator to find the angles of view in this case: aov_h ≈ 6.4° and aov_v ≈ 4.3°.

Do you want to learn more about the fundamentals of photography with a slight technical twist? We made the right calculators for you: the aspect ratio calculator and the crop factor calculator!

Every vertical and horizontal line in an X-Trans CMOS sensor includes a combination of red, green, and blue pixels, while every diagonal line includes at least one green pixel. This helps the sensor reproduce the most accurate color.
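These properties are easy to verify in code. The 6×6 layout below is one commonly reproduced rendering of the X-Trans tile; treat the exact arrangement as an assumption rather than official Fujifilm documentation.

```python
import numpy as np

# One commonly reproduced rendering of the 6x6 X-Trans tile; the exact
# arrangement is an assumption here, not official Fujifilm documentation.
XTRANS = np.array([list(row) for row in (
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
)])

# Every row and every column contains all three colors...
assert all({*row} == {"R", "G", "B"} for row in XTRANS)
assert all({*col} == {"R", "G", "B"} for col in XTRANS.T)

# ...and every (wrapping) diagonal of the repeating tile contains green.
n = len(XTRANS)
assert all(
    "G" in [XTRANS[i, (i + offset) % n] for i in range(n)]
    for offset in range(n)
)

# 20 of 36 sites are green (~55.6%); red and blue get 8 each (~22.2%).
print({c: int((XTRANS == c).sum()) for c in "RGB"})
```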


Refine your photographic knowledge with our dedicated tools like the depth of field calculator and our magnification of a lens calculator. Cheese! 📸

A camera captures a rectangular portion of the real world, projecting it onto its sensor. We identified two possible ways to measure the size of that portion:

The basic concept underlying the entire matter is that a camera can capture only a single, well-defined portion of the real world at once. How large this portion is depends on the camera set-up, particularly on the type of lens and camera body used by the photographer.

To minimize the amount of light bouncing off this circuitry, a microlens is placed on the top of each pixel to direct the light into the photodiode and maximize the number of photons gathered.

You already know what the i means, and d is the distance at which the field of view is measured. Remember to use the correct dimension of your sensor: you want its horizontal length to calculate the horizontal field of view of your camera, not the vertical one... unless you are taking a portrait picture.

⚠️ There is quite a bit of confusion on the matter, online and elsewhere, with various definitions and interpretations. Here we gave the one that makes the most sense to us, but feel free to disagree and let us know!



The first two define the width and height of the rectangle corresponding to our sensor. Notice that these quantities are absolute: you can place the "sphere" as far as you like (even at an infinite distance), and the angle of view will remain the same.

🙋 To find the size of your sensor, search on Google "[your camera model] sensor size": you will easily find the correct values!


With our tool, you will be able to calculate the field of view of any camera, and not only that. Keep reading to discover:

Learn more by exploring the rest of our Fundamentals of Photography series, or browse all the content on Exposure Center for education, inspiration, and insight from the world of photography.

By removing the obstruction caused by the circuitry, a greater surface area can be exposed to light, allowing the sensor to gather more photons and subsequently maximize its efficiency.

For example, the X-Trans CMOS 5 HS stacked sensor found in FUJIFILM X-H2S enjoys four times the reading speed of its predecessor and 33 times the reading speed of the original X-Trans CMOS sensor featured in X-Pro1.

At the most basic level, a camera sensor is a solid-state device that absorbs particles of light (photons) through millions of light-sensitive pixels and converts them into electrical signals. These electrical signals are then interpreted by a computer chip, which uses them to produce a digital image.

What can you capture with this lens? For a distance d = 200 m, the angles of view convert into the respective linear fields of view: fov_h ≈ 22.3 m and fov_v ≈ 14.9 m.

In the case of the original front-side illuminated (FSI) sensor design, all the wiring and circuitry necessary for storing, amplifying, and transferring pixel values runs along the borders between each pixel. This means light has to travel through the gaps to reach the photodiode beneath.


As the name suggests, a RAW file contains the raw image data before any demosaicing has taken place. This allows photographers to demosaic images using external software such as Capture One.


The Canon EOS 550D has a sensor size of 22.3 × 14.9 mm, which allows us to calculate both the horizontal and the vertical angle of view. Let's not be extreme and use the 24 mm focal length.
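As a quick cross-check, here is the angle of view formula as a short Python sketch, using the sensor and focal-length values quoted above:

```python
from math import atan, degrees

def angle_of_view(sensor_mm: float, focal_mm: float) -> float:
    """Angle of view in degrees along one sensor dimension (pinhole model)."""
    return degrees(2 * atan(sensor_mm / (2 * focal_mm)))

# Canon EOS 550D sensor (22.3 mm x 14.9 mm) with the lens set to 24 mm.
aov_h = angle_of_view(22.3, 24)  # ~49.9 degrees
aov_v = angle_of_view(14.9, 24)  # ~34.5 degrees
print(f"{aov_h:.1f} deg x {aov_v:.1f} deg (~{aov_h * aov_v:.0f} square degrees)")
```

The product of the two angles lands at about 1,700 square degrees, matching the figure quoted for this set-up.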

Digital cameras are everywhere – from high-end professional equipment used by the media to everyday smartphone cameras, webcams, and even doorbells. At the heart of every single one is a digital camera sensor, also known as an image sensor. Without this vital piece of technology, digital cameras as we know them today simply would not exist.

Made up of approximately 55% green, 22.5% red, and 22.5% blue filters, it creates similar proportions of red, green, and blue pixels to the Bayer array. But it uses a more complicated 6×6 arrangement, composed of differing 3×3 patterns.

Figure 5: Cross section of a front-side illuminated vs back-side illuminated CMOS sensor. For illustrative purposes only.

Take the diverging lines from the sphere's center to the corners of the scene, and stop them at a certain distance d. Now, draw the corresponding rectangle: you will obtain a set of measurements we call the field of view at a distance d.

As for the angle of view, you can identify three quantities associated with the field of view: a horizontal length, a vertical length, and a diagonal length. To calculate the field of view fov_i of a camera in each of the three possible directions, use the following formula:

fov_i = 2 · d · tan(aov_i / 2)
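As a minimal sketch assuming the pinhole-style relations above, the formula translates directly into code; the example reuses the telephoto set-up discussed in the text (22.3 mm sensor width behind a 200 mm lens):

```python
from math import atan, tan

def field_of_view(sensor_mm: float, focal_mm: float, distance_m: float) -> float:
    """Linear field of view in meters at a given distance, one dimension."""
    aov = 2 * atan(sensor_mm / (2 * focal_mm))  # angle of view in radians
    return 2 * distance_m * tan(aov / 2)

# The fov grows with distance, while the angle of view stays fixed.
for d in (200, 400):
    print(f"d = {d} m -> horizontal fov = {field_of_view(22.3, 200, d):.1f} m")
# d = 200 m -> horizontal fov = 22.3 m
# d = 400 m -> horizontal fov = 44.6 m
```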

Sensor resolutions have risen dramatically since the 16-megapixel X-Trans CMOS sensor in X-Pro1, making it less likely for moiré to occur. As a result, optical low-pass filters have all but disappeared – though increased image sharpness is not the only potential advantage of the X-Trans color filter array.

When we use angles to define the dimensions of a picture, we talk of the angle of view. The angle of view is pretty easy to visualize: your camera lies at the center of a sphere, and connecting the corners of the scene you are capturing to its center gives you a set of three angles:

🙋 You can use our camera field of view calculator to find the values of the angles and fields of view and to calculate the needed focal length of the lens you need to mount to obtain a particular field of view. We locked the variables associated with the sensor's size: it's unlikely you will change it instead of the lens!


Additionally, the less uniform pattern is closer to the random arrangement of silver particles on analog photographic film, which contributes to Fujifilm’s much-loved film-like look.

We define both quantities for three spatial directions, which allows us to calculate a vertical, diagonal, and horizontal field of view for a camera set-up, alongside the respective angles of view.

A color filter array is a pattern of individual red, green, and blue color filters arranged in a grid – one for every pixel. These filters sit on top of the photosites and ensure that each individual pixel is exposed to only red, green, or blue light.

It's impossible to define a typical field of view of a camera: it depends on the lens you are mounting at the moment. However, we can give you some practical examples.

The solid angle covered by this set-up is only about 27 square degrees! If you want to take a 360° picture, you'll need more than 1,500 shots! But at a distance of 400 m, you would be able to picture an area of 45 m × 30 m: good enough to take some exciting wildlife pictures without disturbing anyone!

That is more than enough to fit the whole Colosseum in Rome in a single picture, all while standing barely more than the diameter of the arena itself away!

Think about the Moon for a second more. You managed to fit it into your picture — your sensor! We are talking of a 3,500 km-wide body. With some luck and good timing, you can take a picture of a plane passing between you and our satellite. A B747 flying at 6.5 km would almost eclipse the Moon, fitting perfectly in the picture together with it. Does it mean the jumbo jet is really that jumbo, or that the field of view can be a relative concept, too?
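The comparison boils down to two angular-size calculations. In this sketch, the 3,500 km Moon diameter comes from the text; the average Earth-Moon distance of about 384,400 km and the roughly 70 m length of a B747 are added here for illustration:

```python
from math import atan, degrees

def angular_size(size: float, distance: float) -> float:
    """Apparent angular size in degrees (size and distance in the same unit)."""
    return degrees(2 * atan(size / (2 * distance)))

moon = angular_size(3_500, 384_400)   # km: ~0.52 degrees
plane = angular_size(70 / 1000, 6.5)  # a ~70 m B747 at 6.5 km: ~0.62 degrees
print(f"Moon: {moon:.2f} deg | B747 at 6.5 km: {plane:.2f} deg")
```

The two angles come out nearly equal, which is exactly why the airliner can "almost eclipse" a body a few thousand kilometers across.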


A CMOS sensor is made up of a grid of millions of tiny pixels. Each pixel is an individual photosite, often called a well (see Figure 1).

📷 Cameras are our way to create memories — instants preserved forever — from what we can see. However, their electronic eyes have some limitations when it comes to how much of those memories they record. We may need to think beforehand about what we want to include in our pictures, and many photographers find it helpful to know their cameras' field of view.

The answer is a process called demosaicing, in which a demosaicing algorithm predicts the missing color values for an individual pixel based on the strength of the color recorded by the pixels that surround it.
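Real demosaicing algorithms are considerably more sophisticated, but the simplest variant, bilinear interpolation, fits in a few lines. This sketch assumes an RGGB Bayer layout and is illustrative only, not the algorithm any particular camera or software uses:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Bilinear demosaic of a single-plane RGGB Bayer mosaic (H x W floats)."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Which color each photosite recorded, assuming an RGGB 2x2 tiling.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Each kernel averages the nearest recorded neighbors of that color.
    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_red_blue = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    channels = [(r_mask, k_red_blue), (g_mask, k_green), (b_mask, k_red_blue)]
    out = np.zeros((h, w, 3))
    for ch, (mask, kernel) in enumerate(channels):
        out[..., ch] = convolve(mosaic * mask, kernel, mode="mirror")
    return out
```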

During the compression process, a large amount of tonal and color information read by the sensor is lost. Less information means lower quality and, in turn, restricted freedom to edit.

When photons enter the photosite, they hit a light-sensitive semiconductor diode, or photodiode, and are converted into an electrical current that directly corresponds to the intensity of the light detected.

One way to prevent moiré is by adding an optical low-pass filter to the sensor. Another is to use a different color filter array.

As a result, RAW files contain a wider dynamic range and broader color spectrum, which allows for more effective exposure correction and color adjustments.

You may have also noticed the inclusion of a color filter in Figure 1. The reason for this is that pixels detect light, not color, so a camera sensor by itself can only produce black & white images.

Until the introduction of the stacked sensor, CMOS sensors operated on a single layer. This meant the signal readouts from each pixel had to travel along strips of wiring all the way to the outside of the sensor before they were processed.



Although the effects of the filter are so slight that they are invisible to many everyday photographers, blurring inevitably equates to a reduction in sharpness. This is undesirable for many professionals, and is one of the reasons Fujifilm developed the X-Trans color filter array.

The two terms, angle of view and field of view, are used almost interchangeably, as they capture the same idea. However, their definitions are different. Let's discover them in detail.

The door is now open for huge future advances, equipping CMOS sensors with capabilities that simply weren’t possible only a few years ago.

Using a less uniform pattern helps reduce moiré, eliminating the requirement for an optical low-pass filter and in turn creating sharper images.

But what are camera sensors and how do they work? We aim to outline the basics behind the most common type of camera sensor and explain how this ever-crucial technology has evolved.

What’s more, without the problem of obstructing light entering the sensor, it’s possible to keep stacking additional chips, offering huge potential for future developments.

With stacked sensors, these processing chips have been added to the back of the sensor, essentially creating a ‘stack’ of chips sandwiched together.

Why the tiny i, you ask? This formula holds for all three possible directions on the sensor: horizontal, vertical, and diagonal.

Different types of software use distinct demosaicing algorithms, each offering unique aesthetics. An obvious advantage of this is that photographers can choose their personal preference, but the benefits of creating in RAW format extend much further.

As its name suggests, the back-side illuminated (BSI) sensor flips this original design around so the light is now gathered from what was its back side, where there is no circuitry.

This was a major problem in the early days of digital photography when sensor resolutions were lower. However, with sensors now enjoying much higher resolutions, moiré is less common.

As you can see in Figure 1, because the conversion and amplification processes happen on-pixel, the transistors, wiring, and circuitry have to be included in the spaces between each photosite.

In many cases, such as photographing on a smartphone, that is the end of the process. However, most mirrorless cameras have the ability to save images in RAW format, providing photographers with more options.

An optical low-pass filter – also known as an anti-aliasing filter – is a filter placed in front of a camera sensor to slightly blur the fine details of the scene being exposed, thereby reducing its resolution to a level below that of the sensor.
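A one-dimensional toy model (all numbers arbitrary) shows why blurring before sampling suppresses these artifacts:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A grating finer than the sampling grid can resolve: 0.45 cycles/unit,
# while sampling every 4th point can only represent up to 0.125 cycles/unit.
x = np.arange(2048)
scene = np.sin(2 * np.pi * 0.45 * x)

aliased = scene[::4]                          # sampling invents a bogus low
                                              # frequency: the moire pattern
smoothed = gaussian_filter1d(scene, sigma=3)  # the "optical low-pass filter"
clean = smoothed[::4]                         # aliased energy is almost gone

print(f"RMS without filter: {aliased.std():.3f}")  # ~0.707
print(f"RMS with filter:    {clean.std():.3f}")    # ~0.000
```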

Whether you are planning a photoshoot or are just a photography enthusiast, our camera field of view calculator will help you see the whole picture.

The Bayer filter array (see Figure 2) is made up of a repeating 2×2 pattern in which each set of four pixels consists of two green, one red, and one blue pixel. This equates to an overall split of 50% green, 25% red, and 25% blue.
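A tiny sketch makes the proportions concrete (RGGB is used here; the same 2×2 tile appears in a few equivalent orientations):

```python
import numpy as np

# Tile the 2x2 Bayer pattern across a toy 6x8 "sensor".
bayer_tile = np.array([["R", "G"],
                       ["G", "B"]])
cfa = np.tile(bayer_tile, (3, 4))

# Half the sites are green, a quarter red, a quarter blue.
print({c: int((cfa == c).sum()) for c in "RGB"})  # {'R': 12, 'G': 24, 'B': 12}
```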

A rectilinear lens is a lens that preserves orthogonality: two straight, perpendicular lines in the real world are depicted as straight, perpendicular lines by a rectilinear lens. A fisheye lens, on the other hand, distorts them — a small price to pay for an extremely wide field of view!


While the basic operation of the CMOS sensor has remained fundamentally the same throughout its history, its design has evolved to maximize efficiency and speed.

The angle of view of a camera is an absolute measure of the horizontal and vertical angles captured by a combination of camera and lens. The field of view measures the same thing but uses lengths. Since the angles don't change, the field of view depends on the distance: specifically, it increases with it.

This signal is amplified on-pixel, then sent to an analog-to-digital converter (ADC), which converts it into digital format and sends it to an image processor.
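As a toy model of this readout chain (the 12-bit depth and the voltage values are assumptions for illustration; many cameras use 12- or 14-bit ADCs):

```python
import numpy as np

# Amplified pixel voltages -> digital numbers, quantized by a 12-bit ADC.
full_scale = 1.0                               # hypothetical ADC reference voltage
voltages = np.array([0.00, 0.13, 0.52, 0.97])  # made-up amplified pixel signals

codes = np.round(np.clip(voltages / full_scale, 0, 1) * (2**12 - 1))
print(codes.astype(np.uint16))  # [   0  532 2129 3972]
```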

As a rule of thumb, the higher the magnification of the lens, the smaller the angle of view. If you manage to fit the Moon in your picture, the vertical angle of view will be around half a degree, but this feat requires a pretty high magnification. On the other hand, a 55 mm lens on an SLR (single-lens reflex) camera will give you a horizontal angle of view of about 20°.

Even if angles are easy to visualize, they can be hard to estimate (how wide is 5°?). The field of view comes in handy to complement the idea of the angle of view.

The concept of field of view is not unique to photography: check how it differs for astronomers at our telescope field of view calculator. 🔭

Common instances in which moiré can be seen are when photographing brick walls from a distance, fabrics, or display screens. If the pattern being photographed misaligns with the grid created by the color filter array, strange effects appear, as illustrated in Figure 3.