The concept of image processing
In principle, all astronomical images must be processed/enhanced to display all the information they contain. In this
little article, I would like to give you some practical examples for a better understanding.
The light intensity in astronomical images can differ by many orders of magnitude - from super bright stars
to extremely faint gas that is part of a nebula. For example, a typical astronomical image contains (bright) stars
with magnitudes of about 4 (these are already among the fainter stars visible when you look at the night sky without a
telescope or binoculars). On the other hand, the same image might contain very distant galaxies with
magnitudes around 20 (there are many even fainter ones!). This means that the star in our image is more than 2.5
million times brighter than the galaxy (the magnitude scale is a logarithmic scale, see
here)! For comparison, the sun as we see
it from the Earth is 'only' about 400,000 times brighter than the full moon we see at night. This means that there is a
huge dynamic range to deal with and we will never be able to show both objects with their natural (linear) brightness
relations in a single image without adjusting it.
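To make these numbers concrete, here is a small Python sketch of the magnitude arithmetic (the magnitude values are just the illustrative ones from above):

```python
# Brightness ratio corresponding to a magnitude difference on the
# logarithmic magnitude scale: ratio = 10 ** (delta_mag / 2.5).
def flux_ratio(mag_faint, mag_bright):
    return 10 ** ((mag_faint - mag_bright) / 2.5)

print(flux_ratio(20.0, 4.0))      # star (mag 4) vs. galaxy (mag 20): ~2.5 million
print(flux_ratio(-12.7, -26.7))   # full moon vs. sun: ~400,000
```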
Our light recording devices naturally (and ideally) record incoming photons linearly, i.e., the pixel output intensity
increases linearly with the number of photons striking the corresponding pixel area. In our example, this means that
there are 2.5 million photons reaching the sensor from the star and a single photon from the galaxy during
the same time interval. A typical astronomical camera is capable of differentiating between 65,536 intensity levels (16 bit). If the
shutter of the camera were open only just long enough to detect a single photon from the galaxy, the star in our image
would oversaturate the sensor by a factor of more than 38 (2.5 million divided by 65,536)! In this case it is simply impossible to represent both objects in
a single image without cutting off the top part of the intensity histogram (i.e., ignoring further incoming photons from the star).
So at least two images, taken with different exposure times, would be needed to represent both objects nicely. And if both
are to appear in a single image, the two recordings need to be merged digitally (which is already a form of image processing).
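A quick back-of-the-envelope check of the saturation factor mentioned above (idealized numbers from the example; read noise, quantum efficiency, etc. are ignored):

```python
photons_star   = 2.5e6   # photons from the mag-4 star during the exposure
photons_galaxy = 1       # a single photon from the mag-20 galaxy
levels_16bit   = 65536   # intensity levels a 16-bit camera can distinguish

print(photons_star / levels_16bit)   # ~38: the star overflows the 16-bit range
```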
Figure 1a: An unprocessed CCD image (RAW) from the STL11000M camera. Exposure time was 30 minutes through a luminance filter at about 1000 mm focal length. Messier 101 is just visible a bit below the image center.
Figure 1b: Enlarged detail of Figure 1a showing a brighter, unsaturated star of mag 14.1 (which is already quite faint on the global magnitude scale), a few other bright single pixels, a satellite trail, and some hints of background galaxies. The single bright pixels are a result of the dark current effect and have nothing to do with the actual signal we would like to capture.
As shown in Figures 1a and 1b, the (faint) signal we are interested in is not very well visible because the image is still in its linear form. We can now stretch the data non-linearly, i.e., brighten the dim pixels more strongly than the already bright pixels. This non-linear enhancement function is applied equally to every single pixel in the image, which means that dark current pixels, satellite trails, and artifacts (due to, e.g., sensor defects) are enhanced as well. To prevent the enhancement of artifacts which do not belong to the sky signal, the image has to be corrected for (1) dark current and (2) uneven illumination. The latter correction is necessary because of the light falloff towards the image corners (an optical effect) and because of possible large-scale differences in sensor sensitivity.
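The two correction steps and the stretch can be sketched as follows (a minimal example assuming the raw frame, a master dark, and a normalized master flat are available as 2-D numpy arrays; the asinh-style curve is just one possible monotonic stretch function):

```python
import numpy as np

def calibrate(raw, master_dark, master_flat):
    """Dark-current subtraction followed by flat-field division."""
    return (raw - master_dark) / master_flat

def stretch(img, a=1000.0):
    """Monotonic non-linear stretch: dim pixels are brightened much more
    strongly than already bright ones (asinh-like curve)."""
    img = np.clip(img, 0, None)
    img = img / img.max()                          # normalize to 0..1
    return np.arcsinh(a * img) / np.arcsinh(a)     # still 0..1, but non-linear

# calibrated = calibrate(raw, master_dark, master_flat)
# display    = stretch(calibrated)
```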
Figure 2a: The same image as in Figure 1a with a single non-linear function applied. The light falloff towards the image corners mentioned before is hardly visible here because the image was taken with a TEC140 APO, which has very even illumination characteristics! Nevertheless, the so-called flat-field calibration should always be applied to remove illumination artifacts due to dust particles as well as sensor-related sensitivity variations.
Figure 2b: Enlarged detail of Figure 2a (same region as in Figure 1b) showing the effect of the non-linear data stretch. Note that the brightness increase is larger for dim objects than for those that were already bright. Because we always apply a monotonically increasing (non-linear) function, the brightness relations(!) are not changed, i.e., the brightest star remains the brightest object and the faintest fuzz remains the faintest object.
Figure 2c: Same as Figure 2b, but in this case image calibration (= dark current subtraction and flat-field calibration) was applied prior to the non-linear brightness modification. Note that the individual bright pixels in Figure 2b have disappeared. The satellite trail, however, remains. This is because it is not a systematic artifact that could simply be removed by subtracting another image that has recorded only this satellite. So we have to remove it by another process (see below).
When looking at Figure 2c, we also note that the background noise has become very dominant. When the non-linear stretch function
is applied, low-level background variation/noise is also strongly enhanced (actually even more strongly than the weak signal
we are looking for).
In principle, every measurement device (measuring distance, weight, wind speed,... or light) inherently adds an error
to the measurement, either caused by the electronic device itself or the environment. I don't want to write much about
the reason why there is noise in astronomical images (this could fill half a book) but it is important to understand that
there are at least two main components: noise due to the camera electronics (analogue-to-digital converter, thermal noise, etc.)
and noise due to the sky quality, i.e., sky transparency, atmospheric turbulence,... and light pollution! Light pollution
is nowadays by far the most significant reason why we have so much noise in our images. Imagine a half-transparent glowing
sheet of paper that you put in front of the telescope - this is what we are looking through when there is light pollution.
Needless to say, you will see a lot less than if you could remove this glowing sheet of paper. Unfortunately,
it is not so trivial to switch off all (or at least some) of the unnecessary light sources glowing all night long.
See here for a comparison of the effect of moderate versus low light pollution!
As in all other measuring applications, we also want to increase the signal-to-noise-ratio (SNR) in order to get rid of the
noisy background and to extract even more of the faint signal that so far has been completely buried in the
noise. To do so, we take several images of the same object and merge them. The effect of merging several images of the
same object with uncorrelated(!) noise is that the noise amplitudes are reduced by the square root of the number of
images (i.e., merging 4 images improves the SNR by a factor of 2, merging 100 images improves it by a factor of 10).
This simple relationship between noise and signal has a strong implication: Imagine you take 100 images to reach a certain
depth in your image (for example, reaching mag 20). If you could now move to a location where the light pollution is only
half as strong (which reduces the sky background noise per image by a factor of √2), you would need only 50 images to reach the same image depth!
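A quick sanity check of this stacking arithmetic (idealized: identical frames with uncorrelated noise; the sky-glow example assumes the noise is dominated by the sky background, so halving the sky glow reduces the per-frame noise by a factor of √2):

```python
import math

def snr_gain(n_frames, per_frame_noise=1.0):
    """Relative SNR gain of a stack compared to a single reference frame."""
    return math.sqrt(n_frames) / per_frame_noise

print(snr_gain(4))                     # 2.0  -> four frames double the SNR
print(snr_gain(100))                   # 10.0 -> one hundred frames give a 10x gain
print(snr_gain(50, 1 / math.sqrt(2)))  # ~10  -> 50 frames suffice at half the sky glow
```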
So far we have been talking about noise due to light pollution. As seen in Figures 2c and 3, however, there are other types of noise
or artifacts which need to be removed, for example satellites or airplanes that crossed the imaged area.
Additionally, image calibration never removes all systematic noise sources perfectly, i.e., there will always be some
pixels which are either completely white or black (hot or cold pixels).
As mentioned above, the next step is to merge several images of the same area. But before this can be done, all
(calibrated) images need to be accurately aligned with respect to each other. Usually a reference image is selected
onto which all other images are shifted, rotated, scaled, and, if necessary, warped such that the same stars across the field of
view end up at the same pixel coordinates in all images being merged.
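As a very reduced illustration, the sketch below aligns one frame to a reference by estimating a pure translation via FFT cross-correlation (real registration tools also solve for rotation, scale, and distortion from matched star positions; the function name is hypothetical):

```python
import numpy as np
from scipy.ndimage import shift

def align_to_reference(image, reference):
    """Estimate an integer pixel offset via cross-correlation and shift
    the image so that it overlays the reference."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak positions in the upper half of the range to negative offsets.
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return shift(image, (dy, dx), order=1, mode="nearest")
```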
Figure 3: Illustrating another source of 'noise': satellites. This image was generated by taking the strongest signal from 155 single images of the same area. All 155 images were aligned with respect to each other based on their star field. The 155 images represent a time span of about 40 hours. Note that in general, the satellite contamination gets worse the closer the field of view (FOV) is to the celestial equator (many geostationary satellites there, and an increasing probability of others crossing the FOV!). This field lies at a declination of about +54 degrees. Click here for a larger version!
When looking at Figure 3 it is clear that 'merging' several images should be done in a clever way. If, for example, a
simple mean computation (for every pixel stack) over all images is applied, bright satellite trails will strongly influence the
resulting pixel value (although the mean value would provide the highest increase in the signal-to-noise-ratio!).
Therefore, a better statistical approach is needed to remove the outliers. One of the most robust methods is to take
the median. However, the median is not the best option either, because its increase in the signal-to-noise-ratio is only moderate.
Fortunately there are a couple of more advanced rejection algorithms available. The basic idea for all of them is to
first detect all outliers, i.e., pixels that are outside a specific statistical tolerance level, before taking the mean
of the remaining pixels. These statistical algorithms work best if the images are in their linear form. There is also
a minimum number of images required to detect all outliers correctly.
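A minimal sketch of such a rejection step (assuming the aligned, calibrated, still linear frames are stacked in a 3-D numpy array of shape n_frames × height × width; real implementations iterate and offer several rejection criteria):

```python
import numpy as np

def sigma_clipped_mean(stack, kappa=3.0):
    """Per pixel: reject values further than kappa standard deviations from
    the median, then average the surviving values."""
    center = np.median(stack, axis=0)
    sigma  = np.std(stack, axis=0)
    good   = np.abs(stack - center) <= kappa * sigma
    return np.where(good, stack, 0.0).sum(axis=0) / np.maximum(good.sum(axis=0), 1)
```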
Figure 4a: The result after merging 60 single images of the same area (24 hours total exposure time). Outliers such as satellite trails, defective pixels, calibration errors, cosmic ray hits, etc. were removed by appropriate statistical rejection algorithms. A single non-linear enhancement function was applied after(!) the images were merged.
Figure 4b: Enlarged detail of Figure 4a (same detail region as in Figures 1b, 2b, and 2c). Due to the increased signal-to-noise-ratio, a more aggressive non-linear function can be applied such that many very faint objects that are still above the background noise level become visible (compare with Figure 2c).
Figure 4c: For comparison: same as Figure 4b, but here only 20 images (10 hours) were combined. Note the increased background noise level and how some of the faint background galaxies now lie within the noise rather than above it, as in Figure 4b.
Up to this point, image processing is relatively fast and standardized, i.e., there are batch software routines that do
the calibration, alignment, and merging almost automatically. Image processing, however, does not end here!
It is now time to think about applying not just a single non-linear function to the image but several of them to further
increase the contrast between the background and the signal. Here there are no standard routines; instead, the experience and
taste of the individual astrophotographer come into play. It would be easy to modify the image such that it no longer represents the natural
brightness relations (i.e., initially bright stars could be processed such that they end up fainter than a faint
background galaxy), but this should be avoided in any case. Some might say that image processing at this stage is mainly
an art - which might be true! I personally think that image processing should always be guided by scientific notions
and attention to detail. Therefore, it is my goal to process astronomical image data in a way that scientific correctness is
retained and the image still appears aesthetically pleasing.
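One way to picture this step is as a chain of several monotonically increasing curves, each chosen by eye (hypothetical curves and parameters below); because every step is monotonic, the overall brightness ordering, and with it the natural brightness relations mentioned above, is preserved:

```python
import numpy as np

def gamma_curve(img, g=0.7):
    """Gentle global brightening (img expected in the range 0..1)."""
    return np.clip(img, 0.0, 1.0) ** g

def s_curve(img, midpoint=0.25, slope=6.0):
    """Logistic contrast curve rescaled so that 0 maps to 0 and 1 maps to 1."""
    lo = 1.0 / (1.0 + np.exp(slope * midpoint))
    hi = 1.0 / (1.0 + np.exp(-slope * (1.0 - midpoint)))
    return (1.0 / (1.0 + np.exp(-slope * (img - midpoint))) - lo) / (hi - lo)

# final = s_curve(gamma_curve(stretched_image))   # still monotonic overall
```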
Figure 5: Final processing result using the luminance-filtered image in Figure 4a. Click here for a larger version and the result when also using color-filtered data!