Analysis of Images

Introduction

While some keen amateur astronomers do use properly calibrated filters and special detectors to determine the magnitude and colour of stars, I was interested to see what results I could achieve using more straightforward methods. This involved making use of the filters built into a digital camera, which direct the incoming light onto separate elements of the sensor to generate the red, green and blue components from which the digital image is built up. In other words, I simply used the RGB values of the pixels I was interested in, assuming that the filter characteristics bore a reasonable similarity to those of the UBVRI system. Even if this assumption was not valid, I hoped that at least the camera's response would be constant from image to image, giving results that could be directly compared with "professional" observations even if they were not exactly equivalent to them.
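
As a concrete illustration of the idea, a few lines of Python using the Pillow library are enough to read these values off a saved image (the file name and pixel coordinates below are placeholders, not my actual data):

    from PIL import Image

    # Open an image and read the raw red, green and blue values of a
    # pixel of interest; file name and coordinates are purely illustrative.
    image = Image.open("nova_image.jpg").convert("RGB")
    r, g, b = image.getpixel((1024, 768))
    print(f"R={r}, G={g}, B={b}")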

My Methodology

I used a constant exposure time of 4 seconds at 800 ASA (the longest the camera would allow at this sensitivity), which proved more than adequate to capture the nova and the surrounding stars. The resultant brightness levels varied somewhat from image to image, reflecting the different seeing conditions, but this could be compensated for by slight adjustment in the graphics program: I didn't try to match each image precisely, though, as I was only interested in relative values at this stage. I processed each image by converting it to grey-scale and recording the resulting brightness of the nova and of each of nine nearby stars (eleven once the nova had faded) whose magnitudes I knew from my planetarium program. I then plotted (on a scatter diagram in Excel) the brightness of each comparison star against its magnitude and asked the program to calculate a trendline. A linear fit was entirely satisfactory, indicating a direct relationship between a star's brightness on the image and its magnitude, as I had hoped. [Anyone who has read my article on stellar magnitudes might find this an unexpected result, as the magnitude scale is logarithmic in brightness, not linear. I must therefore assume that the brightness response of the camera is tailored to reflect the non-linear response of the eye (which is the reason for the magnitude scale being defined the way it is), presumably via the gamma correction applied when the image is encoded, resulting in an overall response that is effectively linear]. I then inserted the brightness value for the nova into the equation of the trendline to calculate its magnitude.
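
For anyone wanting to reproduce the calculation outside Excel, the sketch below does the same job in Python (the brightness and magnitude figures are invented for illustration - my real values came from the images and the planetarium program):

    import numpy as np

    # Grey-scale brightnesses (0-255) and catalogue magnitudes for the
    # nine comparison stars; all figures here are hypothetical.
    brightness = np.array([212, 188, 154, 131, 102, 84, 63, 47, 30])
    magnitude = np.array([5.2, 5.7, 6.3, 6.8, 7.4, 7.8, 8.2, 8.6, 9.0])

    # Least-squares linear trendline, magnitude = slope * brightness + intercept,
    # the same fit Excel applies to the scatter diagram.
    slope, intercept = np.polyfit(brightness, magnitude, 1)

    # Insert the nova's measured brightness into the trendline equation.
    nova_brightness = 170
    nova_magnitude = slope * nova_brightness + intercept
    print(f"slope = {slope:.4f}, estimated magnitude = {nova_magnitude:.2f}")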

The results for the images taken on a given night were reasonably consistent (usually to about ±0.1 magnitudes) but, to increase the accuracy of the night's magnitude estimate, I averaged the individual results. It was sometimes necessary to exclude results from the averaging process, usually because the brightness/magnitude plot had excessive scatter or because the slope of the graph was well outside what I grew to recognise as "the norm". A fairly constant value for the slope is to be expected (as the characteristics of the camera sensor are presumed to be constant from image to image and from day to day), so I felt it was reasonable to exclude a result derived from a plot with a widely different slope. Possible reasons why an image might give an anomalous result include thin or variable cloud, variation in seeing, or a slight "glitch" in the camera (particularly its image-stabilising mechanism). I performed the averaging in two ways: a simple average of the individual results, and a value derived from a new brightness/magnitude plot whose points were the averages of the corresponding data points of the original plots. I was not initially convinced that the second method would give valid results, but in fact the two averages were always very close - the maximum difference was just 0.03 magnitudes.
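
The two averaging routes can be expressed in a few lines, assuming the per-image measurements from one night are held in arrays (again, all the numbers are hypothetical):

    import numpy as np

    # One row per image from the night: grey-scale brightness of each of
    # the nine comparison stars. All values here are hypothetical.
    star_brightness = np.array([[212, 188, 154, 131, 102, 84, 63, 47, 30],
                                [208, 185, 150, 128, 100, 82, 61, 45, 29],
                                [215, 190, 157, 133, 104, 86, 65, 48, 31]])
    magnitudes = np.array([5.2, 5.7, 6.3, 6.8, 7.4, 7.8, 8.2, 8.6, 9.0])
    nova_brightness = np.array([170, 167, 172])          # nova, per image
    per_image_estimates = np.array([6.02, 6.05, 5.98])   # from the per-image fits

    # Method 1: simple average of the individual magnitude estimates.
    simple_average = per_image_estimates.mean()

    # Method 2: fit a new trendline through the averaged data points and
    # insert the averaged nova brightness into its equation.
    slope, intercept = np.polyfit(star_brightness.mean(axis=0), magnitudes, 1)
    averaged_plot_estimate = slope * nova_brightness.mean() + intercept

    print(f"{simple_average:.2f} vs {averaged_plot_estimate:.2f}")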

At the beginning I was only interested in obtaining a simple magnitude estimate, but when I noticed that the nova was reddening as it faded I decided to try a full RGB analysis. This clearly required each image to have the same overall brightness, so before analysis I adjusted this in my graphics program until the brightest comparison star had the same grey-scale value in each case. I then undid the grey-scale conversion and read off the RGB values of the brightest pixel in the image of the nova (the same pixel used to find its grey-scale brightness in the preceding calculation). The values for all the images previously used to estimate the magnitude on a given day were then averaged before plotting, together with an average of the grey-scale-converted values: the upper of the two graphs on the previous page shows the result. The lower graph was derived by calculating the ratio of each of the R, G & B values to the corresponding grey-scale value, taken as unity: values greater than 1 indicate an excess of that colour.
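
In code, the normalisation and colour-ratio steps might look like this (the channel values are invented, and the grey-scale weights are an assumption: the standard luminosity weighting shown is what most graphics programs use, but my program's exact formula may differ):

    import numpy as np

    # Hypothetical scaling step: brighten or darken the whole image so the
    # brightest comparison star reaches the same grey-scale value (say 200)
    # in every image.
    brightest_star_grey = 188.0
    scale = 200.0 / brightest_star_grey

    # Averaged R, G, B values of the nova's brightest pixel for the night,
    # after scaling; the figures are invented for illustration.
    r, g, b = scale * np.array([171.0, 132.0, 113.0])

    # Standard luminosity-weighted grey-scale conversion.
    grey = 0.299 * r + 0.587 * g + 0.114 * b

    # Ratio of each channel to the grey-scale value, taken as unity;
    # a ratio above 1 indicates an excess of that colour.
    ratios = {"R": r / grey, "G": g / grey, "B": b / grey}
    print(grey, ratios)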

As a check on the RGB results, I plotted the grey-scale values derived from them against the magnitude estimates originally obtained. If the RGB data was a true representation of the brightness/magnitude data, the plot should be a straight line with the same slope as the graphs derived from the original images. In fact, the plot was well-fitted by a linear trendline and had a slope of -0.0200 compared to a median slope of -0.0196 for the brightness/magnitude graphs. I thus felt that the two analyses were entirely consistent.
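
This cross-check is itself just another straight-line fit: plot the RGB-derived grey-scale values against the original magnitude estimates and compare the slope with the original graphs (all figures below are illustrative):

    import numpy as np

    # Hypothetical nightly values: grey-scale brightness of the nova derived
    # from the RGB data, and the magnitude estimated from the original images.
    grey_from_rgb = np.array([151, 138, 120, 104, 88, 71])
    magnitude_estimates = np.array([6.0, 6.3, 6.6, 6.9, 7.3, 7.6])

    # Slope of the check plot, to be compared with the median slope of the
    # original brightness/magnitude graphs (-0.0196 in my case).
    slope, _ = np.polyfit(grey_from_rgb, magnitude_estimates, 1)
    print(f"RGB-derived slope: {slope:.4f}")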


