So now that we have a photon crashing into your camera, converting to an electron, and being counted, we must discuss the fact that a pixel is actually three dimensional.  It’s like a water well in the ground, and that well is collecting rain drops.  How deep your well goes is called the camera’s “Full Well Depth,” and you can see this on camera specifications.  Pixels don’t just collect a single electron and turn on/off; rather, they capture a gradient of light across this well depth, from pure black to full white.  The deeper the well, the more depth your image will have.

Camera sensor

We all think of pixels on a camera, television monitor, or computer screen as a dot.  I believe most of us understand that principle and can move past the basic pixel being a relative dot.  But how is the light actually captured and converted inside it?  That is what we want to discuss, because it gives a very clear understanding of how camera gain and exposure length come into play, which is our primary goal at the end of this section.  So let’s jump into some basic language and definitions as we explore the pixel from start to finish.



Camera gain (Reddit)

You may see that all cameras come with specifications on the bit depth of the camera.  This is another very important factor.  Your well depth may be 20ke or it may be 50ke, which describes how deep that well goes, but it is meaningless unless we consider the actual measurement points along the way.  Our Analog to Digital Converter can come in a variety of sizes, just like a tape measure may have feet, inches, quarter-inch, eighth-inch, maybe sixteenth-inch markings on it, or maybe thirty-second-inch markings for very precise measurements.  The same holds true for our pixel well depth.
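To put rough numbers on the tape-measure analogy, here is a minimal sketch of how many distinct ADU “markings” an ADC of a given bit depth provides (the bit depths listed are just common example values, not any particular camera’s specs):

```python
# Rough sketch: how many ADU "markings" an ADC of a given bit depth provides.
# The bit depths listed are just common example values.
for bits in (8, 12, 14, 16):
    adu_levels = 2 ** bits  # number of distinct digital values the converter can report
    print(f"{bits:>2}-bit ADC -> {adu_levels:,} possible ADU values (0 to {adu_levels - 1:,})")
```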

Now that we understand a pixel on our sensor, let’s discuss camera gain.  Every camera manufacturer will give you specifications for the particular unit you are looking to purchase.  These are important.

Imagine a pixel well with only a few electrons in it: you will have almost pure black.  But if you fill that pixel well up with electrons to the top, you get pure white.  And of course everything in between makes your gradients.  So now you see that the deeper the well, the more gradient your image can have.  A camera with 20ke of well depth vs. 50ke of well depth is easy to understand… 50ke gives better gradients from black to white.
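As a minimal sketch of that black-to-white idea (the electron counts and the 20ke/50ke wells are just example numbers), the same signal lands at a different point on the gradient depending on how deep the well is:

```python
# Minimal sketch: the same electron counts expressed as a fraction of two
# hypothetical full well depths (20,000 e- and 50,000 e-).  A deeper well
# leaves more room between "almost black" and "pure white".
for electrons in (100, 5_000, 20_000):
    for full_well in (20_000, 50_000):
        fill = min(electrons / full_well, 1.0)  # clip at pure white
        print(f"{electrons:>6} e- in a {full_well:,} e- well -> {fill:6.1%} of the way to white")
```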

Gain to ISO conversion



Camera ISO gain

If taking nebula images that are fairly normal in brightness, unity gain seems preferred, with a better SNR.  One electron to one digital unit does a good, consistent job so long as I don’t go over-bright on the stars and get those FWHM numbers too high.

Just because a camera has a well depth of 50ke doesn’t mean much unless we know it has a lot of measurement indicators along that well depth.  A camera with an 8-bit ADC will have just a few ADU measurements, while a 12-bit or 16-bit ADC has many more fractional measuring points for ADU.  The higher the bit depth your camera offers, the better, because each of our electrons dropping into that bucket will get recorded.  This is why you see more precise cameras with DEEP Full Well capacity and HIGH ADC bit depths.
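Here is a minimal sketch of why those two numbers matter together (all well depths and bit depths below are hypothetical example values): the coarseness of each ADU step is roughly the full well divided by the number of ADU levels, so a deep well only pays off if the ADC has enough steps to resolve it.

```python
# Rough sketch: how coarse each ADU step is when the full well is mapped onto
# the ADC's full scale.  Both well depths and bit depths are example values.
for full_well in (20_000, 50_000):
    for bits in (8, 12, 16):
        e_per_adu_step = full_well / (2 ** bits)  # electrons represented by one ADU step
        print(f"{full_well:,} e- well with a {bits:>2}-bit ADC -> ~{e_per_adu_step:.2f} e- per ADU step")
```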


Let’s take a high gain setting above unity.  Why do it?  This is the opposite effect of low gain.  Now when an electron strike comes into our pixel well, it can count as more than 1 ADU.  It might be 1.2 ADU, it might be 2 ADU.  You are making the pixel well fill up faster with fewer electrons.  This is great, right?  You also will usually have less noise and shorter collection time frames for your exposures since it fills the pixel well quickly.  The tradeoff here is that you reduce your well depth.
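Here is a minimal sketch of that tradeoff (the 16-bit ADC and 50ke physical well are hypothetical example values): above unity gain, the converter reaches its maximum ADU value before the physical well is full, so the effective well depth shrinks.

```python
# Minimal sketch: above unity gain the ADC reaches its maximum value before the
# physical well is full, so the effective (usable) well depth shrinks.
# The 16-bit ADC and 50,000 e- physical well are hypothetical example values.
ADC_MAX_ADU = 2 ** 16 - 1      # 65,535 for a 16-bit converter
PHYSICAL_WELL_E = 50_000

for adu_per_electron in (0.5, 1.0, 2.0, 4.0):   # below unity, unity, above unity
    effective_well_e = min(PHYSICAL_WELL_E, ADC_MAX_ADU / adu_per_electron)
    print(f"{adu_per_electron:.1f} ADU per e- -> effective well depth ~{effective_well_e:,.0f} e-")
```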

Camera gain formula

Light that has travelled for many miles is about to hit your camera.  We take what is considered an analog unit (photons) and we want to convert it to a digital unit, a signal that can then be interpreted by the camera.  Analog to Digital Units (ADU) are what we use to measure those ‘hits’ of light that get converted.  Photons hit your sensor, are converted to electrons, and are counted as units that are essentially converted to digital at this point.
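As a minimal sketch of that photon-to-electron-to-ADU chain (the 60% QE, 1.0 e-/ADU gain, and 16-bit ADC below are hypothetical example values, not any particular camera’s specs):

```python
# Minimal sketch of the photon -> electron -> ADU chain.
# QE, system gain, and ADC bit depth are hypothetical example values.
QE = 0.60            # fraction of photons that become electrons
E_PER_ADU = 1.0      # electrons needed for one digital unit (unity gain)
ADC_MAX_ADU = 2 ** 16 - 1

def photons_to_adu(photons: int) -> int:
    electrons = photons * QE            # photoelectrons collected in the pixel well
    adu = int(electrons / E_PER_ADU)    # digitized value; only whole ADU exist
    return min(adu, ADC_MAX_ADU)        # the converter clips at its maximum value

print(photons_to_adu(1_000))   # ~600 ADU with these example numbers
```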

Image

To understand Astrophotography, you first must master what it is we are discussing when it comes to your actual camera and light-gathering tool.  The telescope, mount, guiding systems, observatories, and all other equipment lead up to this single moment in time when the photon that has been traveling for many, many years finally touches down.  Your camera and sensor are the final resting point, where those photons need to be converted by your camera into digital units, cataloged, and recorded.

With our shorter exposures, we go higher gain, since an electron in our pixel counts as more than one digital unit.  That means a smaller well depth for our image, but also less noise.


Of course, the higher the efficiency, the better chance you have of capturing photons and making better use of your time taking astro images.  All sensors will miss some photons, so a 100% QE factor is not something to look for, but the higher the better.

If taking a very deep target with very faint objects, I may shoot with a really long exposure time of 300s or 600s at gain 0 or gain 75, because I can capture that well depth and gradient change on an object as well as process out the stars with separate imagery.  Longer exposures mean the stars will get blown out, but I can capture those minute changes in light better with the full well depth of the camera.



So who wins overall?  It really does depend on targets, equipment, and skies… but my experiment gave me some good insight.


Camera gain vs exposure

A good example is this graph from ZWO’s ASI1600MM Pro, a popular monochrome camera, which shows us some good information.

Camera gain (dB)

In this graph, we can see the Quantum Efficiency of our camera and at which wavelengths.  This manufacturer gives its efficiency rating at around 60%, though it depends on which wavelength of light we look at.  It is a good comparison to make across a range of cameras so you can make an educated choice.

Looking at AstroPixelProcessor, it has read these 4 images and shows us some good facts about them, though they look pretty similar.

Gain vs ISO

There are no in-betweens when it comes to converting to a digital unit.  You cannot have 55.5 ADU; either you have a capture or you don’t.  So it’s important to understand why you can change the gain of a camera… and what consequences doing so will have.  Some situations may call for low gain, some situations may call for high gain.  It really depends on a lot of factors, including what target you are capturing, your sky quality, and the equipment you use.
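A minimal sketch of that quantization step (the 4.88 e-/ADU figure below is just an example low-gain value): any fractional result is simply truncated to a whole ADU.

```python
# Minimal sketch: ADU values are whole numbers, so fractional results are dropped.
# The 4.88 e- per ADU figure is just an example low-gain value.
E_PER_ADU = 4.88

for electrons in (100, 271, 272):
    adu = int(electrons / E_PER_ADU)  # e.g. 55.53 still records as 55 ADU
    print(f"{electrons} e- -> {electrons / E_PER_ADU:.2f} -> recorded as {adu} ADU")
```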


Now that you have photons dropping into your pixels and understand that they are captured in this 3-dimensional deep well, know that a camera will also have a specific Quantum Efficiency.  While the term is fancy and scary scientific, it just means how efficient the sensor is at capturing those photons.  Some cameras are 60% efficient, others are 95% efficient.
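A minimal sketch of what that percentage means in practice (the 1,000-photon figure is just an example number):

```python
# Minimal sketch: Quantum Efficiency is the fraction of arriving photons
# that actually become electrons in the pixel well.
photons_arriving = 1_000  # example number of photons hitting one pixel

for qe in (0.60, 0.95):
    electrons = photons_arriving * qe
    print(f"QE {qe:.0%}: {photons_arriving} photons -> ~{electrons:.0f} electrons collected")
```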



My experiment works for me, and it might work for you.  It might not be perfect, but it gives a good base for what a camera will do and how to choose the proper gain for you and your equipment/situation.  Hopefully this guide has helped explain a few terms and critical areas in getting your shooting gain and exposure correct.

It’s actually very hard to tell by looking at these.  As the gain went lower, we needed more exposure length to compensate.  For example, if we go to Gain 0, we need 5 electrons to count as 1 digital unit in our ADU conversion process.  So instead of 60 seconds, we go for 300 seconds.  The benefit is we get a much deeper well depth, or gradient spectrum.
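Here is a minimal sketch of that scaling logic (the 5 e-/ADU at Gain 0, 1 e-/ADU at unity, and the 60-second baseline come from the paragraph above; the per-second electron rate is a hypothetical example value):

```python
# Minimal sketch: if low gain needs ~5 electrons for each ADU while unity gain
# needs 1, you need roughly 5x the exposure to reach the same ADU signal level.
# The electron collection rate is a hypothetical example value.
ELECTRON_RATE = 10.0       # electrons collected per second in one pixel (example)
UNITY_E_PER_ADU = 1.0
GAIN0_E_PER_ADU = 5.0
UNITY_EXPOSURE_S = 60

target_adu = ELECTRON_RATE * UNITY_EXPOSURE_S / UNITY_E_PER_ADU   # signal reached at unity gain
gain0_exposure_s = target_adu * GAIN0_E_PER_ADU / ELECTRON_RATE   # time to reach it at Gain 0
print(f"Unity gain: {UNITY_EXPOSURE_S}s  ->  Gain 0: {gain0_exposure_s:.0f}s for the same ADU level")
```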



You may hear manufacturers talk about what the unity gain setting is for their given camera.  What this refers to is that one electron will convert to one digital unit in the pixel well.  So if you have a camera which states unity gain is 100 or 139, then when you set the camera to that setting, every electron strike that makes it to the pixel well (disregarding QE misses) will convert to one full digital unit on that pixel.

This is a loaded question, and it’s like asking which is better, Ford or Chevy.  It can depend on a lot of factors, such as how dark the skies are in your area, what Bortle zone you shoot from, what optical equipment you have, and how accurately your mount can track…  Every article will say it just depends, and it does, but let’s have a real-world example here.

Let’s take a low gain setting below unity.  Why do it?  Most manufacturers will provide a graph of the change in gain vs. the change in well depth.  The lower you go in gain, the higher the well depth becomes… hence those full well changes will give you more gradient.  However, doing so means that it might take several electrons to equal one ADU.  This means you often need longer exposures to capture more light, since the gathering process will be slower.

In the real world, what we get in 30 minutes of light is pretty similar.  It’s actually capturing the same amount of light, but it can perhaps be explained this way.