DSLRs use another method to increase the fill factor and instead have a microlens array. The ultimate is a gapless microlens array, which gathers virtually all of the incoming light onto the photosites. I imagine they could use BSI instead, but sometimes the microlens array is also used to correct for light fall-off due to the angle of incidence, something which BSI does not help with.
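To get a feel for how steep the angle of incidence becomes toward the corner of the frame, here is a rough back-of-the-envelope sketch in Python; the exit-pupil distance is an assumed, illustrative value rather than a real lens specification:

```python
import math

# Rough illustration only: chief-ray angle at the corner of a full-frame
# sensor and the cos^4 "natural" light fall-off associated with that angle.
# The exit-pupil distance below is an assumed example value, not a lens spec.
sensor_diagonal_mm = 43.3           # full-frame diagonal
exit_pupil_distance_mm = 60.0       # hypothetical exit-pupil distance

corner_radius_mm = sensor_diagonal_mm / 2
theta = math.atan(corner_radius_mm / exit_pupil_distance_mm)
falloff = math.cos(theta) ** 4      # cos^4 law for natural vignetting

print(f"chief-ray angle at the corner: {math.degrees(theta):.1f} degrees")
print(f"relative illumination (cos^4): {falloff:.2f} "
      f"(about {-math.log2(falloff):.2f} stops of fall-off)")
```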

DSLRs use CMOS because of its speed. CCDs are known to deliver higher image quality and are used in medium-format digital cameras and backs, but their shooting speed is often limited to around 0.5 to 2 FPS, whereas DSLRs can shoot at over 10 FPS.
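To put the speed gap in perspective, here is a quick pixel-throughput comparison; the resolutions and frame rates are example figures, not specific models:

```python
# Illustrative only: the raw pixel rate the readout path has to sustain.
# CCDs typically shift charge out through one or a few output amplifiers,
# whereas CMOS sensors read many columns in parallel, so high frame rates
# are far easier to reach with CMOS.
cameras = [
    ("medium-format CCD back", 50_000_000, 1),    # example: 50 MP at 1 FPS
    ("DSLR CMOS sensor",       20_000_000, 10),   # example: 20 MP at 10 FPS
]

for name, pixels, fps in cameras:
    print(f"{name}: {pixels * fps / 1e6:.0f} Mpixel/s to read out")
```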

BSI CMOS

Back-side illumination (BSI) means that the readout circuitry is on the opposite side of the silicon from the incoming light. This lets the sensor capture more light than a standard front-illuminated design.


The iPhone 4s uses a backlit CMOS sensor, and I have noticed that some other point-and-shoot cameras do as well. What does that mean for photography, and if it is a benefit, why don't DSLR cameras use it?

Normally in the manufacture of a camera sensor the photosensitive "pixels" are formed on top of a silicon wafer, onto which several layers of circuitry are added to facilitate reading out the pixel values. This circuitry blocks some of the incident light from hitting the photosensitive areas, reducing sensitivity of the sensor (thereby requiring more amplification, which increases noise).
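As a rough illustration of why that blocked light matters, here is a toy shot-noise calculation; the photon count and fill factors are made-up numbers:

```python
import math

# Illustrative only: photon arrivals follow Poisson statistics, so the
# shot-noise-limited SNR is sqrt(photons collected). Amplifying a smaller
# signal afterwards raises the noise along with it and cannot win back
# the SNR lost to light blocked by the wiring.
photons_at_pixel = 1000                  # hypothetical photons hitting the pixel

for fill_factor in (1.0, 0.5):           # fraction reaching the photodiode
    collected = photons_at_pixel * fill_factor
    snr = math.sqrt(collected)           # shot-noise-limited SNR
    print(f"fill factor {fill_factor:.0%}: {collected:.0f} photons collected, "
          f"SNR ~ {snr:.1f}")
```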


BSI sensors are created in the same way, but the silicon wafer is flipped over and ground down until it is thin enough for light to reach the photosites from the other side. The readout circuitry no longer gets in the way, which allows the sensor to capture up to twice as much light.

The only commercial BSI sensors to date are very small units, in cell-phone and compact-camera sizes. The technology is regarded by some as a bit of a marketing gimmick that does not really deliver the claimed benefits, principally for the reasons discussed below.

[Image: cross-section of a back-side illuminated sensor]

Gains from moving the wiring to the back are apparently greatest when pixel sizes shrink to around 1.1 microns (as is the case with the 8 MP iPhone sensor). For larger pixels the losses due to the wiring are not as great, as there is more space for the wires between the photosites.
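A rough way to see why pixel size matters so much: assume the wiring occupies a fixed-width border around each photosite (the 0.3 µm border below is an invented, illustrative figure):

```python
# Illustrative only: with a fixed-width wiring border around each photosite,
# the light-sensitive fraction of the pixel (the fill factor) shrinks much
# faster for small pixels than for large ones.
wiring_border_um = 0.3                       # assumed wiring width per side

for pitch_um in (1.1, 2.0, 4.0, 6.0):        # example pixel pitches in microns
    open_side_um = max(pitch_um - 2 * wiring_border_um, 0)
    fill_factor = (open_side_um / pitch_um) ** 2
    print(f"{pitch_um:.1f} um pixel: fill factor ~ {fill_factor:.0%}")
```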

There are problems associated with this technique: mounting the circuitry that way increases cross-talk, whereby signals on different lines interfere with each other, and this can cause pixels to bleed into one another.
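A minimal sketch of what cross-talk does to the image, assuming a made-up leakage fraction into each neighbouring pixel:

```python
import numpy as np

# Illustrative only: model cross-talk as each pixel leaking a fixed fraction
# of its signal into its immediate neighbours, and watch a hard edge soften.
leak = 0.10                                        # assumed leakage per neighbour
row = np.array([0, 0, 0, 0, 100, 100, 100, 100], dtype=float)  # a hard edge

kernel = np.array([leak, 1 - 2 * leak, leak])      # 1-D cross-talk kernel
with_crosstalk = np.convolve(row, kernel, mode="same")

print("original:       ", row)
print("with cross-talk:", with_crosstalk.round(1))
```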

In a conventional front-illuminated design, having the metallisation layer on the front also causes diffraction effects, which become significant when the pixels are only a couple of times the wavelength of light in size.
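For a sense of scale, compare a few example pixel pitches with the wavelength of green light (roughly 0.55 µm):

```python
# Illustrative only: very small pixels are just a couple of wavelengths of
# visible light across, so openings in the front-side wiring act like tiny
# apertures and diffraction becomes hard to ignore.
wavelength_um = 0.55                       # green light, roughly mid-spectrum

for pitch_um in (1.1, 1.4, 4.3, 8.0):      # example pixel pitches in microns
    ratio = pitch_um / wavelength_um
    print(f"{pitch_um:.1f} um pixel is ~{ratio:.1f} wavelengths of green light across")
```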