Gradient Reduction Part 2 (DynamicBackgroundExtraction)
The AutomaticBackgroundExtractor (ABE) is a powerful tool to remove gradients, and it has the upside of being relatively easy to use (and understand). However, it suffers from one big defect. The user cannot place points where they think they should be. The brain is an incredible signal processor, and it can detect things that automatic tools simply cannot (at least at this point). What the automatic tool thinks is just a bad gradient to be removed may in fact be part of the signal you are trying to preserve. This is especially the case with dim Nebulae.
The creators of PixInsight recognized this, and provided another process for gradient reduction called DynamicBackgroundExtraction. (Yes, there is an inconsistency in the naming of the two tools, with one being Extractor and the other Extraction - unfortunately, PixInsight's creators could have been more careful when it came to consistency.)
Open the version of the tutorial image that you dynamically cropped that is unstretched (Linear), do an STF (ScreenTransferFunction) auto-stretch, [ctrl]a, on it, and then bring up the DynamicBackgroundExtraction process.
Expand the Model Parameters (2), Sample Generation, and Target Image Correction sections of the DynamicBackgroundExtraction (DBE) interface. It should look something like this. (We won't be doing anything with the Model Image portion right now).
Let's dispense with the easiest section first. The Target Image Correction section has two controls we will be using a lot:
- Correction
- Normalize
Correction does the same thing as in ABE. We can select either to subtract the gradient from the image (Subtraction) or to divide the image by the gradient (Division). The reason for using one or the other is the same as with ABE: generally, you use Subtraction for light pollution gradients and Division for imperfect flats or for atmospheric dispersion. Again, I would suggest that for most gradients you start by trying Division.
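If you want a concrete picture of why the two corrections exist, here is a minimal numpy sketch (illustrative only, not PixInsight code; the synthetic image and gradient arrays are invented for the example). An additive gradient such as light pollution is undone by subtracting the model, while a multiplicative one such as vignetting from an imperfect flat is undone by dividing by it.

```python
import numpy as np

# Hypothetical 100x100 "sky" image with a uniform background of 0.10.
yy, xx = np.mgrid[0:100, 0:100]
sky = np.full((100, 100), 0.10)

additive_gradient = 0.05 * xx / 100.0                                          # light-pollution-like ramp
multiplicative_gradient = 1.0 - 0.3 * ((xx - 50)**2 + (yy - 50)**2) / 5000.0   # vignetting-like falloff

light_polluted = sky + additive_gradient    # gradient is added to the signal
vignetted = sky * multiplicative_gradient   # gradient scales the signal

# Subtraction recovers the additive case; Division recovers the multiplicative case.
print(np.allclose(light_polluted - additive_gradient, sky))   # True
print(np.allclose(vignetted / multiplicative_gradient, sky))  # True
```

Real data is never this clean, of course, which is why trying Division first and judging the result is reasonable advice.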
Normalize matters when doing work on color images, which we haven't gotten to yet. Basically, you check normalize if you want the background to have about the same color balance as it did originally. You leave it unchecked if you want the background color balance to be more neutral. In practice, usually you want to leave this unchecked.
For this tutorial image, change Correction to Division.
You will notice there is a section called Sample Generation. This portion of the interface is used to automatically generate samples, similar to ABE.
- Default sample radius controls the size of the automatically generated sample boxes.
- Samples per row controls how many samples are generated in each row (and thus how far apart the samples are).
- Minimum sample weight controls how much statistical weight a sample must have in order to be generated.
- The various color options simply control the sample box colors; I never change them.
- Resize all causes all samples already generated to be resized to whatever Default sample radius is currently set to.
- Generate causes samples to be generated.
Set Sample Radius to 11. Leave the other values at the defaults and generate some samples (Generate). You should end up with something similar to the following image.
With my experience, I can take one look at this and tell that DBE did a horrible job of selecting points. If I were to actually apply this by dragging the New Instance blue triangle over the image and releasing it, I would get this.
While this doesn't look hideous at first glance, it is in fact a VERY poor result.
Notice how the nebulosity to the lower left is about the same brightness as the background without nebulosity in the upper right. Likewise notice how the background to the upper left is so much darker than the background to the upper right.
Now sometimes you can only discern these sorts of things after you have either worked with an image for a while or looked at what others have produced with a given object. You don't win a gold medal for purity of thought by avoiding the taint of looking at other people's images.
Anyway, it was clear to me just from where the points were placed that the result would be no good, because some of the more prominent points were clearly in nebulosity while areas that were clearly background were being ignored.
That brings us to another very important part of the interface, Model Parameters (1). There are three parameters:
- Tolerance
- Shadows relaxation
- Smoothing factor
Of these, the most important and most often changed by the user is the Tolerance parameter. This is a sigma factor (standard deviation based) that allows more or fewer pixels in a sample to be accepted when calculating the gradient model. The default of .5 is quite restrictive. This works in tandem with the Minimum sample weight from the Sample Generation section: by increasing the Tolerance value, more pixels will be accepted, which leads to a higher weight for that sample. That in turn means the sample is more likely to be accepted. If we increase the Tolerance to 2.5, notice how many more samples get generated.
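To make the Tolerance idea concrete, here is a rough Python sketch of the concept (this is not DBE's actual algorithm; the threshold rule, function name, and box values are invented for illustration): pixels in a sample box that deviate from the box's median by more than Tolerance standard deviations are rejected, and the fraction that survives behaves like the sample's weight.

```python
import numpy as np

def sample_weight(box, tolerance=0.5):
    """Toy pixel rejection: keep pixels within `tolerance` sigmas of the
    box median and return the surviving fraction (a stand-in for the
    sample's statistical weight)."""
    center = np.median(box)
    sigma = np.std(box)
    accepted = np.abs(box - center) <= tolerance * sigma
    return accepted.mean()

rng = np.random.default_rng(1)
background_box = rng.normal(0.10, 0.01, (11, 11))    # pure sky, only noise
star_box = background_box.copy()
star_box[4:7, 4:7] += 0.5                            # a small star in the box

print(sample_weight(background_box, tolerance=0.5))  # well below 1: the default is restrictive
print(sample_weight(background_box, tolerance=2.5))  # close to 1: far more pixels accepted
print(sample_weight(star_box, tolerance=2.5))        # lower again: the bright star pixels are rejected
```

The real process is more sophisticated, but the trade-off is the same: a higher Tolerance accepts more pixels, raises the sample weight, and makes the box more likely to survive the Minimum sample weight cut.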
The result with this increased Tolerance parameter is still very poor. The upper left and upper right areas are now at least similar but unfortunately, the nebula in the lower left is also very similar in intensity.
Let's try something else. Increasing the Shadows relaxation parameter makes it more likely that dark points that would normally be rejected by the Tolerance parameter will be allowed. So increasing it is similar to increasing the Tolerance parameter, except that it affects just the darker pixels in the image. Those darker pixels are more likely to be part of your background. Here I have set Tolerance back to the default of .5 but I have increased Shadows relaxation to 6.
Notice how some of the sample boxes that were clearly in background areas have again been generated, but some of the sample points on the nebula have been left out. This is certainly progress in the proper direction.
This is our best result so far. The two upper corners are now similar in brightness and the nebula to the lower left is somewhat brighter than either. If we were going to take a result from automatically generated points, this would be by far the best of the bunch to use.
The last parameter in the Model Parameters (1) section is Smoothing factor. The way sampled intensities are interpolated in DBE and ABE differs. ABE uses a polynomial fit and you can control the degree of the polynomial using the Function degree parameter. DBE uses a spline fit. If the Smoothing factor is set to zero then the spline fit will exactly match the intensities of the sample boxes at their locations. This can lead to a rather lumpy result and in general some smoothing is desirable. The default works well for many purposes.
The gradient extracted on the right is clearly smoothed out compared to the gradient on the left, where Smoothing factor was set to zero. Note the dark area in the middle of the un-smoothed gradient on the left, which isn't quite as dark in the image on the right.
For those who really want to understand spline fitting vs polynomial fitting, I refer you to the Wikipedia article on spline fitting:
https://en.wikipedia.org/wiki/Spline_(mathematics)
Fortunately for the rest of us, the details aren't particularly important. What is important is that a polynomial fit and a spline fit are different and need to be controlled differently. This is unfortunate because although a spline fit often gives a better result, what you are doing when controlling that result is less obvious.
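If you want to see the difference in behaviour, here is a small one-dimensional sketch using numpy and scipy (purely illustrative; DBE's actual two-dimensional spline model is more involved, and the sample values here are invented). A low-degree polynomial, roughly how ABE's Function degree works, is compared with a spline, where the spline's smoothing value plays a role loosely analogous to DBE's Smoothing factor.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical 1-D "background samples": a gentle gradient plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 15)
true_gradient = 0.10 + 0.05 * x
samples = true_gradient + rng.normal(0.0, 0.005, x.size)

# ABE-style: one low-degree polynomial fitted through all samples.
poly_model = np.polynomial.Polynomial.fit(x, samples, deg=2)

# DBE-style: a spline.  s=0 forces it through every sample exactly (the
# "lumpy" case); a larger s smooths it, loosely like raising Smoothing factor.
spline_exact = UnivariateSpline(x, samples, s=0)
spline_smooth = UnivariateSpline(x, samples, s=x.size * 0.005**2)

xs = np.linspace(0.0, 1.0, 200)
for name, model in [("polynomial deg 2", poly_model),
                    ("spline, s = 0   ", spline_exact),
                    ("smoothed spline ", spline_smooth)]:
    worst = np.max(np.abs(model(xs) - (0.10 + 0.05 * xs)))
    print(f"{name}: worst deviation from the true gradient = {worst:.4f}")
```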
This brings us to the Model Parameters (2) section of the interface. There are two parameters important for our immediate purpose:
- Minimum sample fraction
- Continuity Order
Minimum sample fraction is yet another way of controlling whether a sample box gets generated or not. If the fraction of pixels in a sample box that are actually used to come up with the intensity estimate is not at least as high as this value, the sample box will not be used. The default of .05 is quite liberal: if a sample box were 10x10, then only 5 of its 100 pixels would need to be usable for the box to be acceptable by this criterion. In general, the Tolerance and Shadows relaxation parameters are more useful, but this parameter does have the advantage that it is easily understood.
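The arithmetic behind that 10x10 example is simple enough to show directly; this tiny sketch (illustrative only, with a made-up helper name) just encodes the rule as described.

```python
def box_is_used(accepted_pixels, total_pixels, min_sample_fraction=0.05):
    # A sample box survives only if enough of its pixels passed rejection.
    return accepted_pixels / total_pixels >= min_sample_fraction

print(box_is_used(5, 100))   # True: 5 usable pixels in a 10x10 box is just enough
print(box_is_used(4, 100))   # False: below the default fraction of .05
```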
Continuity order is the DynamicBackgroundExtraction equivalent of the Function degree parameter in ABE. However, it works a little differently and is more difficult to understand, because it works on splines. You can think of Continuity order as controlling how quickly the splines adapt to local variations in the gradient. Increasing this makes it more adaptable but also will often lead to undesirable artifacts. It can also lead to unstable results. Unlike the ABE function degree parameter, you will usually leave this parameter alone. The default Continuity order value of 2 is the minimum allowed.
If I had my way, DBE would have the option to use polynomial fitting instead of splines. I find Function degree far more useful.
The following image compares using Continuity order 2 and Continuity order 5 with a Smoothing factor of zero as in the image above on the left. This is a case where the resultant gradient extracted at Continuity order 5 is unstable and indeed, completely wrong and unusable.
At this point you may be wondering why on earth you would ever want to use DBE instead of ABE and that is a fair question. You might be interested to learn that many experienced users of PixInsight primarily use DBE. Why?
So far, we have been using automatically generated points for our sample boxes.
The real power of DBE over ABE is that YOU get to choose where to place the sample boxes and not the automation.
This point is truly important. The trained human eye is pretty good at discerning what is truly background and what is not, and at placing sample boxes in those background areas. Also, and this is important, with DBE a few well-chosen points are often better than many points. True gradients are often pretty simple things, and you don't need a lot of sample boxes to represent them.
Let's reset some of the parameters. Press the Reset button on the lower right of the interface (four arrows pointing at the same place). Set Sample radius to 11 if it is some other value. Set Correction back to Division. Now click on the image in the upper left. The result should look something like this:
Note the interface now says there is Selected Sample: 1 of 1. In that same portion of the interface is a box that shows the pixels within that sample point. Note how many pixels are completely white or completely black. This is because our tolerance value and other parameters controlling whether Sample Boxes are used have been reset. If the Tolerance parameter is set very low (.1) then the box showing the pixels in the Sample Box will turn almost entirely black with rejected pixels. The sample box itself will turn red on the inside.
DBE will not prevent you from placing that point, but it will let you know with the red color that the box will not be used. If we increase the Tolerance parameter to something like 5.5 the box will turn green again, and the display of the sample box in the interface will look quite different.
The lack of blacked out pixels is letting us know that almost all the pixels in the box were accepted and are being used to generate the intensity value for the box.
Now deliberately hand place a point on a medium to smallish size star. Leave Tolerance at 5.5.
The software has correctly discerned that the brighter parts of the star are outside the tolerance bounds, and it shows this by drawing the rejected pixels in black. That sounds great, but look carefully around that black area and you will notice there are darker grey pixels that have not been rejected but that are part of the star. If we use this point with this Tolerance parameter, then the software will decide the gradient at this point is fairly bright, and the gradient that is calculated will be wrong.
It is important when hand placing points to look for this telltale indication that you have placed a point on a star. If you see you have done so, delete the point. You can delete the last point you created by simply using the delete key on your keyboard. If you wish to delete an earlier point, you can select it by clicking on it and then hitting the delete key. I have deleted this point.
Because we are hand placing our points and because we are watching to make sure we don't place those points on stars or other structures like nebulae, we can use relatively high Tolerance parameters like 5.5.
When placing your own points, you will usually want to increase the Tolerance parameter.
In keeping with the earlier rule that trying to capture the gradient with just a few well-placed points is generally superior to placing many points, let's add just a few more. Near the corners are generally good locations if you can avoid stars and nebulosity. In this case, points in the upper left and upper right are OK. But the lower left contains a star, and the lower right is very close to a bright star that will also throw things off. We can, however, move slightly away from them and place points there as well.
Here is where I placed my 4 points.
And here are the resultant gradient and image.
This is actually a pretty nice result and the best we have gotten so far using DBE. And it required just 4 points! There are still some nits we can try to fix. For example, the upper right corner darkens slightly going into the corner. I'll place a few more points to correct that.
This result, while not perfect, is now very close to correct, and it has the benefit that the gradient we are removing is almost certainly just that: gradient, and not signal.
You should now save this result as we will need it in the future. I called mine L_goodDBE and I saved it in FITS format.
There are a couple more things to mention for the sake of completeness. First, if you select a sample box and keep the mouse button down, you can move it by dragging the sample box to a new location. This can be handy when trying to avoid stars.
Second, it is possible to hand place points that have either linear or circular symmetry. While neither of these things is useful with this image, it can be very useful indeed when dealing with imperfect flats. If you click and drag on the cross hairs going through the image, you can move them. I moved the cross hairs up to a bright star in the middle of one of the nebulae. I then added a point, selected the Axial checkbox under Symmetries, and increased the number of points to 20 to make the result more circular. I ended up with this.
This isn't actually a useful thing to do here, but sometimes the vignetting of a scope isn't completely corrected by a flat, and then it can be very useful indeed. Here is another example where I have added a diagonal symmetry.
You can experiment with using Horizontal and Vertical Symmetry as well. Unfortunately, there can be only one symmetry center for the model. If you need something more complicated than that, you will have to correct one thing at a time.
In this final example, I have added another point for more Axial symmetry and moved the center of symmetry by moving the cross hairs again.