Binning in astrophotography [Deep Sky] Acquisition techniques · Francois Theriault

DaveDE
Just an FYI: it turns out that binning 2x2 on the camera and losing the two LSBs really doesn't affect SNR significantly at all.

https://forums.sharpcap.co.uk/viewtopic.php?p=34676#p34676

Dave


That link is a very simplified description of what happens when you truncate, and it can be made more rigorous.  Instead of talking about the "average" error introduced, it's more useful to talk about the standard deviation of the error: when you discretize a signal into steps of size s, the noise introduced is s/sqrt(12) - which is much less than the "average" error of s/2.  Knowing the error as a standard deviation allows you to combine it in quadrature with read noise - and the result is a very real, but very small, contribution when summing 4 pixels at gain 0.8.  And it makes no difference whether you round down or up.

This discretization noise happens even when you don't bin, because the intrinsic read noise is added in quadrature with the discretization noise of g/sqrt(12), where g is the gain in e/ADU.

For gain 0.8 e/ADU and read noise 3.5e, the total read noise with discretization error is slightly bloated, to 3.508e.  If you sum 4 pixel values exactly, the noise doubles to 7.016e - but if you then discretize the sum in steps of 4 ADU (3.2e), corresponding to dropping the final two bits, you end up with noise of 7.076e in the sum.  That is a real but extremely small - and negligible - increase over the exact sum.  (I welcome corrections on the math.)
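To make that arithmetic concrete, here is a quick sanity check in Python, using the same numbers as above (gain 0.8 e/ADU, per-pixel read noise 3.5e, 2x2 sum, last two bits dropped):

import math

g = 0.8            # gain, e/ADU
read_noise = 3.5   # per-pixel read noise, e

# Discretization noise of a single read, in electrons
disc_single = g / math.sqrt(12)                     # ~0.231 e
total_single = math.hypot(read_noise, disc_single)  # ~3.508 e

# Exact sum of 4 pixels: noise grows by sqrt(4) = 2
sum_exact = 2 * total_single                        # ~7.015 e

# Dropping the last 2 bits = re-discretizing in steps of 4 ADU = 3.2 e
disc_trunc = (4 * g) / math.sqrt(12)                # ~0.924 e
sum_trunc = math.hypot(sum_exact, disc_trunc)       # ~7.076 e

print(f"{total_single:.3f} {sum_exact:.3f} {sum_trunc:.3f}")
# -> 3.508 7.015 7.076  (the 7.016 in the text comes from rounding 3.508 before doubling)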

So - it's not good to think in terms of "those low bits are just noise," because those bits also carry signal.  The entire set of bits represents the signal plus the noise - and discretizing or truncating at any level will increase the total noise, but by an amount much smaller than the step size, thanks to the 1/sqrt(12) factor and the fact that the noise adds in quadrature with the other sensor noise terms (here, read noise).

This describes the impact of discretization of the signal itself, while the original topic of this thread relates to *spatial* binning and discretization of the image and its impact on resolution - and the situation is very similar.  There is always blurring happening on the scale of the pixels, and smaller pixels, in arc-sec, will result in less total blurring in the final result.  This is because the process of aligning and stacking multiple exposures requires shifting and interpolation - on the scale of the pixels - prior to stacking, and that introduces a blur contribution *on the scale of the pixels*.  Smaller pixels mean less blur and a smaller FWHM *in the aligned and stacked result*.  There is no sudden point where smaller pixels cease to have a resolution benefit, because this blur is always happening - just as there is always error introduced by discretization at any size of signal step.

This is also why it is best to defer the final binning or smoothing until the last stage of processing - so the alignment can be done using the original unbinned pixels.  It's also why, for maximum resolution, you should never bin during acquisition - even though the impact of discretization noise is small.  But if you aren't after maximum detail, you can go ahead and bin and use any size pixels you want, with a corresponding trade-off of pixel SNR for detail.

The amount of blur depends on the type of interpolation used when stacking, but recently PI switched to recommending 1:1 drizzle over things like Lanczos - and that will definitely result in blur being introduced to each exposure in the stack.  I prefer to use small pixels and nearest neighbor, for a number of reasons, hence I use 0.28" pixels with an EdgeHD11 and get stacked FWHMs in the low 1" range.  That would never be possible with the 0.5-1" pixels typically recommended for such an SCT.

Frank

Yes Frank, I agree the link I mentioned has a simplified explanation of what happens when binning 2x2 on the camera, and how averaging four 16-bit pixels requires 18 bits, so a very small error is introduced when truncating the result to 16 bits. The s/sqrt(12) factor is the expected RMS quantization error of the camera sensor's digitizer. That's already included in the read noise, and it is not the truncation error we are talking about here (which is caused by averaging pixels and tossing the fractional part of the result) - so I'm a bit confused why you are talking about the sqrt(12) factor. In any case, I think we both agree that, as you say, the increase in noise caused by binning on the camera is "extremely small - and negligible".

Speaking of rigor, I'm not doubting your conclusion, but I would like to see some rigor behind the assumption that smaller pixels result in significantly better subframe registration than larger binned pixels (with their higher associated SNR). Is there a source you can point me to here? I'm wondering at what point the diminishing returns of smaller pixels become insignificant; i.e., if you could use 0.14" pixels on your EdgeHD11 for registration, would you? Thanks.

Dave
Freestar8n

Hi Dave-

Yes - the discretization is normally included in the read noise and need not be added - but I did it to show it is playing a role even for the unbinned case.  You could figure out what the intrinsic analog read noise is prior to discretization and it would be a tiny bit less than 3.5.  Discretization then brings it up to 3.5.

The s/sqrt(12) applies any time a signal is discretized into steps of s units.  If you take the digital values for four pixels and add them, the sum will have error/noise as described above at 7.016e.  But if you then drop the last two bits, you are discretizing the result in steps of 4 ADU, or 3.2e.  That will be an additional noise term that adds in quadrature - to make a final 7.076e.   So you know that dropping the last two bits is a tiny effect.  (The s/sqrt(12) is the standard deviation of a uniform distribution of width s).

How many bits can you tolerate dropping?  Well - if this is a good deep-sky subexposure, then read noise should be a small part of the noise, and sky background noise should be much larger in order to do its "swamping."  So if the total read noise is 7.016e and you want to swamp it by 5x (or 10x, or whatever), the sky background noise is about 35e.  How many bits can you drop before the discretization noise equals that?  s/sqrt(12) = 35e -> s = 121e = 152 ADU = 7+ bits.

So you could sum four pixels and then drop 7 bits, and still the background noise would dominate the noise introduced by discretization.  With 10x swamping it is about 300 ADU, or 8+ bits.
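Here is a short sketch of that budget in Python, assuming the same gain of 0.8 e/ADU and the 7.016e summed read noise from above:

import math

g = 0.8                 # gain, e/ADU
read_noise_sum = 7.016  # read noise of the exact 4-pixel sum, e

for swamp in (5, 10):
    sky_noise = swamp * read_noise_sum   # background noise needed to swamp, e
    s_e = sky_noise * math.sqrt(12)      # largest tolerable step, in electrons
    s_adu = s_e / g                      # same step, in ADU
    bits = math.log2(s_adu)              # number of droppable bits
    print(f"{swamp}x swamp: s = {s_e:.0f}e = {s_adu:.0f} ADU = {bits:.1f} bits")

# -> 5x swamp: s = 122e = 152 ADU = 7.2 bits
# -> 10x swamp: s = 243e = 304 ADU = 8.2 bits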

This is based on regarding the discretization as a purely random noise term - but in reality it could lead to posterization for large amounts of truncation.  Purely random noise can be visually tolerable while structured noise with the same RMS deviation isn't.

A side point in all this is that the inherent dynamic range of a sensor is of little importance when you are sky background limited - and the dynamic range is intentionally squashed by the sky background noise.

As for the pixel size question - which is the original point of this thread - there are two factors overlooked with regard to optimal sampling in astro imaging:

1)  The pixels don't sample at discrete points as required by the Nyquist theorem.  Instead they represent averages over a square region - and that loses bandwidth.  This point is rarely made in audio contexts, and only advanced texts describe it in the imaging context - but it is obviously happening.

2)  Additional bandwidth is lost when aligning and stacking the exposures - and most people aren't even aware this is happening, because it happens under the covers in the stacking software.  The only way to avoid this blurring is to allow some kind of sharpening interpolation - but that is a form of "cheat" that boosts bandwidth artificially and no longer represents the original discrete values at each point in the image (which I assume plays a role in why PI no longer recommends it; I always avoided such things).  You can similarly deconvolve and sharpen the final image and create arbitrary high-frequency information that shouldn't be there.
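To illustrate point 1): the averaging over a square pixel acts as a box filter whose frequency response is a sinc.  A minimal 1-D sketch in Python (pixel pitch p in arbitrary units) shows that at the Nyquist frequency the pixel aperture alone passes only 2/pi, about 64%, of the contrast - before sampling loses anything at all:

import math

def pixel_mtf(f, p=1.0):
    """Response of a box average of width p: |sin(pi*f*p) / (pi*f*p)|."""
    x = math.pi * f * p
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Nyquist frequency for pitch p = 1 is f = 0.5 cycles/pixel
print(pixel_mtf(0.5))   # -> 0.6366... = 2/pi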

If you use 1:1 drizzle, it is very similar to bilinear interpolation - and you can see examples of how that blurs an image when it is shifted or rotated slightly.  Nearest neighbor doesn't really blur each exposure - but its end result is to add some bloat to the final aligned stack.
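A toy demonstration of that difference (just a half-pixel shift applied to pure noise, not PI's actual resampling code): linear interpolation averages neighboring samples, so high-frequency content - here, per-pixel noise - is attenuated, while nearest neighbor rounds the shift to a whole pixel and copies values unchanged:

import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=10000)   # 1-D "image" of pure noise, std = 1

# Half-pixel shift via linear interpolation: average adjacent samples
shifted_linear = 0.5 * (img[:-1] + img[1:])

# Half-pixel shift via nearest neighbor: rounds to a 1-pixel shift
shifted_nn = img[1:]

print(np.std(shifted_linear))  # ~0.71 (1/sqrt(2): high frequencies lost)
print(np.std(shifted_nn))      # ~1.00 (values intact; the blur shows up
                               #        later as bloat in the aligned stack)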

A side piece of evidence: many people thought they were imaging "optimally" with 9um pixels - but when they switched to CMOS they saw much more detail in the star shapes, indicating collimation and alignment problems.  That immediately tells you the original sampling was losing bandwidth that was only capturable with much smaller pixels.

Frank