On the other hand, if we make the aperture size small, only a small number of photons hit the image sensor. As a result the image is dark and noisy.

There are several other parameters that you can change using the GUI provided in the repository. It will help you build better intuition about the effect of various camera parameters.

So, the smaller the aperture of the pinhole camera, the more focused the image is, but at the same time, the darker and noisier it becomes.

Based on [1], there are three types of distortion depending on the source of the distortion: radial distortion, decentering distortion, and thin prism distortion. Decentering and thin prism distortion have both radial and tangential components.

So what do we do after the calibration step? We obtained the camera matrix and distortion coefficients in the previous post on camera calibration, but how do we use those values?

The above figure is an example of the distortion effect that a lens can introduce. Comparing figure 3 with figure 1, you can identify it as barrel distortion, a type of radial distortion. Now, if you were asked to find the height of the right door, which two points would you choose? Things become even more difficult when you are performing SLAM or building an augmented reality application with cameras that introduce high distortion in the image.

The second step is performed using the getOptimalNewCameraMatrix() method. What does this refined matrix mean, and why do we need it? Refer to the following images: in the right image, we see some black pixels near the edges. These occur due to the undistortion of the image. Sometimes these black pixels are not desired in the final undistorted image. Thus the getOptimalNewCameraMatrix() method returns a refined camera matrix as well as an ROI (region of interest) that can be used to crop the image so that all the black pixels are excluded. The percentage of unwanted pixels to be eliminated is controlled by the parameter alpha, which is passed as an argument to getOptimalNewCameraMatrix().

Have you ever wondered why we attach a lens to our cameras? Does it affect the transformation defining the projection of a 3D point to the corresponding pixel in an image? If yes, how do we model it mathematically?

It is important to note that in cases of high radial distortion, using getOptimalNewCameraMatrix() with alpha=0 sometimes generates a blank image. This usually happens because the method gets poor estimates for the distortion at the edges. In such cases you need to recalibrate the camera, making sure more images are taken with different views close to the image borders. This way more samples near the image border are available for estimating the distortion, which improves the estimate.

The distCoeffs matrix returned by the calibrateCamera method gives us the values of K_1 to K_6, which represent radial distortion, and P_1 and P_2, which represent tangential distortion. Since the mathematical model of lens distortion includes all the types of distortion (radial, decentering, and thin prism), the coefficients K_1 to K_6 represent the net radial distortion, while P_1 and P_2 represent the net tangential distortion.
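As an illustrative sketch (not the library's implementation), the rational radial plus tangential model can be applied to normalized image coordinates with plain NumPy; here k holds K_1 to K_6 and p holds P_1 and P_2:

```python
import numpy as np

def distort_points(xy, k, p):
    """Apply the rational radial + tangential distortion model to
    normalized image coordinates xy (N x 2).
    k = (k1..k6) radial coefficients, p = (p1, p2) tangential coefficients."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    # Rational radial factor: numerator and denominator polynomials in r^2
    radial = (1 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3) / \
             (1 + k[3]*r2 + k[4]*r2**2 + k[5]*r2**3)
    # Tangential terms account for lens decentering
    x_d = x*radial + 2*p[0]*x*y + p[1]*(r2 + 2*x**2)
    y_d = y*radial + p[0]*(r2 + 2*y**2) + 2*p[1]*x*y
    return np.stack([x_d, y_d], axis=1)

# With all coefficients zero the model is the identity
pts = np.array([[0.1, -0.2], [0.0, 0.0]])
out = distort_points(pts, np.zeros(6), np.zeros(2))
```

With zero coefficients the mapping leaves points unchanged, which is a quick sanity check that the model reduces to the ideal pinhole case.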

By using a lens we get better quality images, but the lens introduces some distortion effects. There are two major types of distortion effects: radial distortion and tangential distortion.

We mathematically model the distortion effect based on the lens properties and combine it with the pinhole camera model explained in the previous post of this series. So, along with the intrinsic and extrinsic parameters discussed in the previous post, we also have distortion coefficients, which mathematically represent the lens distortion, as additional intrinsic parameters.
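For reference, the combined radial and tangential model (the standard formulation used by OpenCV, with (x, y) the normalized pinhole coordinates and r^2 = x^2 + y^2) can be written as:

```latex
x_{d} = x \cdot \frac{1 + K_1 r^2 + K_2 r^4 + K_3 r^6}{1 + K_4 r^2 + K_5 r^4 + K_6 r^6} + 2 P_1 x y + P_2 (r^2 + 2 x^2)

y_{d} = y \cdot \frac{1 + K_1 r^2 + K_2 r^4 + K_3 r^6}{1 + K_4 r^2 + K_5 r^4 + K_6 r^6} + P_1 (r^2 + 2 y^2) + 2 P_2 x y
```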

One application is to use the derived distortion coefficients to undistort the image. The images shown below depict the effect of lens distortion and how it can be removed using the coefficients obtained from camera calibration.

In a previous post, we went over the geometry of image formation and learned how a point in 3D gets projected on to the image plane of a camera.

On the other hand, with a larger aperture, the image sensor receives more photons (and hence more signal). This leads to a bright image with only a small amount of noise.

The model we used was based on the pinhole camera model. The only time you use a pinhole camera is probably during an eclipse.

Now we have a better idea of the types of distortion effects introduced by a lens. But what does a distorted image look like? Do we need to worry about the distortion introduced by the lens? If yes, why? And how do we deal with it?

To generate clear and sharp images the diameter of the aperture (hole) of a pinhole camera should be as small as possible. If we increase the size of the aperture, we know that rays from multiple points of the object would be incident on the same part of the screen creating a blurred image.

We replace the pinhole with a lens, thus increasing the size of the aperture through which light rays can pass. A lens allows a larger number of rays to pass through the hole, and because of its optical properties it can also focus them on the screen. This makes the image brighter.

[1] J. Weng, P. Cohen, and M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(10):965–980, Oct. 1992.

Great! So, in this series of posts on camera calibration, we started with the geometry of image formation, then performed camera calibration and discussed the basic theory involved, including the mathematical model of a pinhole camera, and finally discussed lens distortion in this post. With this understanding you can now create your own virtual camera and simulate some interesting effects using OpenCV and Numpy. You can refer to this repository, where a virtual camera is implemented using only Numpy computations. With all the parameters, intrinsic as well as extrinsic, in hand, you can get a better feel for the effect each camera parameter has on the final image you see.
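As a starting point for such a virtual camera, here is a minimal pinhole projection in NumPy. The intrinsic matrix K and the 3D points below are assumed values for illustration; by default the extrinsic pose is the identity:

```python
import numpy as np

def project_points(points_3d, K, R=None, t=None):
    """Project 3D points (N x 3) to pixel coordinates (N x 2)
    using the pinhole model: p ~ K [R | t] P."""
    if R is None:
        R = np.eye(3)       # identity rotation (camera aligned with world)
    if t is None:
        t = np.zeros(3)     # no translation
    cam = points_3d @ R.T + t   # world -> camera coordinates
    uvw = cam @ K.T             # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Assumed intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0,  0.0,  2.0],   # point on the optical axis
                [0.5, -0.25, 2.0]])
pixels = project_points(pts, K)
```

A point on the optical axis projects exactly onto the principal point, which is a handy sanity check; from here you can plug in different R and t matrices, or chain in the distortion model, to see how each parameter shapes the image.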

Empowering innovation through education, LearnOpenCV provides in-depth tutorials, code, and guides in AI, Computer Vision, and Deep Learning. Led by Dr. Satya Mallick, we're dedicated to nurturing a community keen on technology breakthroughs.