Distortion in image processing

I am taking a picture holding the camera horizontally in landscape mode from a height of roughly 3-4 ft, capturing a top view of the scene. Objects placed towards the centre of the scene looked fine, but objects placed towards the edges or corners of the camera's field of view appeared enlarged and distorted.

Fortunately, except for special technical applications, imaging with geometrically correct perspective is not a requirement.


To understand what is happening, you need to know that perspective as seen in the photographic image intertwines three things: the focal length of the lens, the degree of magnification applied to make the displayed image, and the viewing distance from observer to displayed image.

We apply magnification (enlargement) and display the picture on a computer screen or TV screen, or make a print on paper. Say we view on a screen 18 inches wide; that's about 450mm. The typical compact digital sports an imaging chip about 24mm wide. The degree of magnification applied is 450 ÷ 24 = 18¾X.
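As a quick sanity check, that arithmetic in Python (screen and sensor widths as assumed above):

    # Magnification needed to show a 24 mm-wide sensor image on an
    # 18-inch (~450 mm) wide screen -- the figures assumed above.
    sensor_width_mm = 24.0
    screen_width_mm = 450.0
    magnification = screen_width_mm / sensor_width_mm
    print(f"magnification: {magnification:.2f}x")  # ~18.75x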


Try this: put your head where your camera is and look straight down. If there's an object right below you, you'll see just the top and none of its sides. But if you look over to the left or right, you'll see a bit of the sides of objects a foot or so away from the center — and quite a lot of the sides of objects further than that.
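To put rough numbers on that, here is a minimal pinhole-style sketch; the camera height matches the 3-4 ft in the question, and the offsets are made-up values. It computes how far off the vertical axis an object sits and how strongly its vertical side face comes into view:

    import math

    camera_height_ft = 3.5  # assumed, from the 3-4 ft in the question
    for offset_ft in [0.0, 0.5, 1.0, 2.0, 3.0]:
        # Angle between straight-down and the line of sight to the object.
        theta = math.atan2(offset_ft, camera_height_ft)
        # Foreshortening factor for a vertical side face oriented toward the
        # point directly below the camera: 0 = edge-on (invisible side),
        # 1 = fully face-on. Equals sin(theta) in this geometry.
        side_visibility = math.sin(theta)
        print(f"offset {offset_ft:.1f} ft: off-axis angle "
              f"{math.degrees(theta):5.1f} deg, side visibility {side_visibility:.2f}")

At zero offset the side is invisible; a couple of feet out, a substantial fraction of it projects into the image, which is the enlargement being described.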



Why does such an increase in perceived length happen? Is it because of some kind of optical aberration, or some other camera property (lens, field of view, etc.)?


The bottom line is: our camera images are never faithful. What we normally see is an incorrect perspective, because we are almost never viewing from the proper viewing distance. Again, it is fortunate that our images need not replicate the "human perspective". Allow me to add that we see using a combination of eye and brain. The image on the retina is upside down, and its surface is a section of a sphere. From infancy we learn to fathom what we are looking at; clearly this skill is among the most honed of all our faculties. Photography does not come close to making equivalent images. If it could, you would need to don sunglasses when looking at a beach vista.

Impossible for two reasons: 1. Today's miniature cameras yield tiny images that, if not enlarged, are worthless. 2. The human eye can't focus on a picture held only 30 or so millimeters away. What must we do to view?

What you are observing is this: objects close to the camera reproduce large, and objects further from the camera reproduce small. This is not a digital-versus-analog thing; welcome to the world of "perspective".

Many people think that this is a property of wide-angle lenses, but that's not actually the case. It's just more apparent with a wide-angle lens because you're putting closer objects, from, well, a wider angle, into the frame. If you change where you stand and use zoom or a different lens to get similar framing, the perspective distortion will be different. On the other hand, no matter what you do with lenses, there's no way to change the perspective without literally changing your perspective by moving.
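A small numerical sketch makes the point; all distances and focal lengths below are invented for illustration. Framing the near subject identically from two camera positions leaves the background rendered at very different sizes:

    # Pinhole model: projected size is proportional to real_size * f / distance.
    # Units are arbitrary; only the ratios matter here.
    def projected_size(real_size, distance, focal_length):
        return real_size * focal_length / distance

    subject, background = 1.0, 1.0  # both objects the same real size
    # Position A: 1 unit from the subject, background 4 units behind it;
    # short lens chosen so the subject projects to size 20.
    a = (projected_size(subject, 1.0, 20), projected_size(background, 5.0, 20))
    # Position B: five times farther back, longer lens for identical
    # subject framing; background now at 9 units.
    b = (projected_size(subject, 5.0, 100), projected_size(background, 9.0, 100))
    print(a)  # (20.0, 4.0)   -> background looks small
    print(b)  # (20.0, ~11.1) -> same subject framing, much bigger background

The subject is framed identically in both shots, yet the subject-to-background size relationship changes, because only the camera position changed the perspective.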

I will try to explain. If you stand before a window, you can trace the outlines of objects on the glass with a wax pencil. This drawing reveals the "human" perspective. We can duplicate this outlook with a camera: we place the camera in the same location as the human eye and snap the picture. Camera size or focal length makes no difference. To replicate the "human" perspective, we must view the resulting picture from a distance equal to the focal length of the taking lens. Given today's miniature cameras, viewing from this distance is likely impossible.

To further test this, I placed an object of known length (a ruler) towards the top-right corner of the image/scene and measured its length (in px) using GIMP after image acquisition. The number of pixels covered by the scale changed non-uniformly towards the corner. I measured the number of pixels between 0-5 cm, 5-10 cm and so on along the scale. For the first few buckets, the number of pixels per 5 cm was constant; however, it increased for the 15-20, 20-25 and 25-30 cm buckets.


Now the rest of the story: to view with the "human" perspective achieved, we must back away from the displayed image. While not cast in stone, the correct viewing distance is the applied magnification multiplied by the focal length of the taking lens. Say this image was taken with a 30mm lens. We calculate the approximate correct viewing distance as 30 × 18.75 = 560mm (approximately) = 22 inches.
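And the same in Python, reusing the figures assumed earlier (450mm screen, 24mm sensor, 30mm lens):

    # Approximate "correct" viewing distance = magnification x focal length.
    magnification = 450.0 / 24.0   # ~18.75x, from the earlier example
    focal_length_mm = 30.0
    viewing_distance_mm = magnification * focal_length_mm
    print(f"{viewing_distance_mm:.0f} mm ~= {viewing_distance_mm / 25.4:.0f} inches")
    # -> 562 mm ~= 22 inches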


This is perspective distortion, and it is an inevitable outcome of projecting a three-dimensional world onto a flat surface. See "What is the difference between perspective distortion and barrel or pincushion distortion?" for details on the different kinds of distortion, and "How to correct perspective and geometric distortion?" for what you can do about it.
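To tie this back to the ruler experiment above: under an idealized pinhole model, a flat ruler viewed perfectly perpendicularly maps to a constant number of pixels per centimetre, so a small tilt of the camera (or height on the objects, or real lens distortion) is one plausible source of the non-uniform buckets. A rough sketch, with every number invented:

    import math

    # Idealized pinhole camera above a ruler lying flat on the ground.
    f_px = 3000.0            # focal length expressed in pixels (assumed)
    height_m = 1.0           # camera height above the ruler (~3.3 ft)
    tilt = math.radians(-5)  # small accidental tilt, away from the ruler's far end

    def project(x_m):
        # Rotate the ground point into the tilted camera frame, then project.
        z = height_m * math.cos(tilt) + x_m * math.sin(tilt)
        x = x_m * math.cos(tilt) - height_m * math.sin(tilt)
        return f_px * x / z

    # Pixels spanned by each 5 cm bucket of the ruler. With tilt = 0 every
    # bucket is exactly 150 px; with a small tilt the buckets grow
    # non-uniformly towards the far end, as measured in the question.
    for start_cm in range(0, 30, 5):
        u0 = project(start_cm / 100.0)
        u1 = project((start_cm + 5) / 100.0)
        print(f"{start_cm:2d}-{start_cm + 5:2d} cm: {u1 - u0:6.1f} px")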