After quite a few successful attempts at imaging trails using rudimentary equipment, I decided to put my Raspberry Pi B+ and its camera to work and image a moontrail.
Past attempts include a set of sunrises at both solstices and at the equinox, some sunsets, an analemma and some startrails. For those I used a Canon A800 enhanced with CHDK or an unmodified FujiFilm HS20EXR. CHDK was needed to access otherwise inaccessible options like manual focus, manual shutter speed and… well, an intervalometer. This time, however, a cron job did the work on my little buddy, the pi.
Below I describe in detail the process that led to the final pictures. If you wish to skip it, jump ahead to the Outcome section.
Prepping
This is no big deal. I (kinda) know my local sky and I (kinda) know the field of view of the rpi’s cam. I put the little fellow on a fairly stable tripod and went inside to ssh into the device. I had to do some experimenting with the shutter speed, but the final script was this, run every minute:
#!/bin/bash
# Timestamp used in the output file name (date, hour and minute)
DATE=$(date +"%Y-%m-%d_%H%M")
# 1/400 s exposure (-ss takes microseconds), lowest ISO, camera mounted upside down
raspistill -ss 2500 -ISO 100 -rot 180 -o /var/www/moon_trail_$DATE.jpg
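For completeness, a crontab entry along these lines fires the script every minute (a sketch; the script path /home/pi/moontrail.sh is a placeholder, not necessarily the one I used):

* * * * * /home/pi/moontrail.sh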
The first frame I could identify the moon on was taken at 20:28 local time on June 1 (Kolozsvár, Romania, UTC+3). The last was taken at 00:28 on June 2.
The raw results
The output of the setup was – obviously – a set of individual frames which needed to be assembled one way or another. Here’s a sample of the raws (718 frames were recorded, but not all of them had the moon :P ), all taken at full resolution (2592×1944).
Assembling the final results
One might think having the raw input means the job is done. Well, wrong. Adding up all the pictures, i.e. summing all the pixels as would happen with a single super long exposure (see solargraphs), yields no usable result. Frames taken before dusk had to be masked before processing, to say the least. There is also the problem that my raspberry sometimes ignores the white balance argument, so I omitted it and got some weird looking frames (experience tells me I would have gotten them with -awb sun too). And since this means editing is inevitable anyway, it also opens broad roads to some pleasing outcomes. So after masking I added the frames together (either all of them, or only every n-th one with n from 1 to 15) and then chose a background.
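As a rough sketch of that every-n-th-frame selection in PHP (a reconstruction, assuming the timestamped file names produced by the capture script; $n and the variable names are mine, not the actual script):

$frames = glob('/var/www/moon_trail_*.jpg');
sort($frames); // chronological order, thanks to the timestamped names
$n = 5; // keep every 5th frame for a "dashed" trail; n = 1 keeps them all
$selected = array();
foreach ($frames as $i => $file) {
    if ($i % $n == 0) {
        $selected[] = $file;
    }
}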
In order to optimize the assembling process a bit, I limited the area it had to go through.
function is_pixel_needed($x, $y) {
    $height = 1944; // frame height in pixels
    $width  = 2592; // frame width in pixels (kept for reference)
    // Edge of a diagonal band that follows the moon's path across the frame
    $ax_b = -0.4895 * $x + $height - 514.4699;
    if ($y < $ax_b) return false;       // outside the band, on one side
    if ($y > $ax_b + 330) return false; // outside the 330 px wide band, on the other
    return true;
}
The above code gives the green area to be processed:
And the final picture will be the per-pixel max() of the frames (an „if lighter than” filter):
final_pixel[x,y] = max(frame[1][x,y], frame[2][x,y], ..., frame[n][x,y]);
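Putting the pieces together, here is a minimal sketch of that composite using PHP’s GD extension; it reuses is_pixel_needed() from above, while the file paths and the per-channel max are my reconstruction rather than the exact script:

$width  = 2592;
$height = 1944;
// Start from a black canvas; max() against black keeps whatever is brighter
$final = imagecreatetruecolor($width, $height);

foreach (glob('/var/www/moon_trail_*.jpg') as $file) {
    $frame = imagecreatefromjpeg($file);
    for ($y = 0; $y < $height; $y++) {
        for ($x = 0; $x < $width; $x++) {
            if (!is_pixel_needed($x, $y)) continue;
            $new = imagecolorat($frame, $x, $y);
            $old = imagecolorat($final, $x, $y);
            // Per-channel max, i.e. the "if lighter than" blend
            $r = max(($new >> 16) & 0xFF, ($old >> 16) & 0xFF);
            $g = max(($new >> 8) & 0xFF, ($old >> 8) & 0xFF);
            $b = max($new & 0xFF, $old & 0xFF);
            imagesetpixel($final, $x, $y, ($r << 16) | ($g << 8) | $b);
        }
    }
    imagedestroy($frame); // free memory before loading the next frame
}
imagejpeg($final, '/var/www/moon_trail_composite.jpg');

Starting from a black canvas means the first frame always wins the comparison, which is exactly how a lighten blend should behave.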
Outcome
After getting the trail output, and having many background candidates, I used editing software to arrive at a pleasing picture.
Some might argue that the pictures below are not „real pictures” but digital paintings – that’s partly true, mostly false. Note that the functions used to combine the individual exposures are (mostly) analogous to exposing the same film frame more than once. On the other hand, I like to play, though never deceivingly. I’d say that, given the equipment, these pictures are as „real” as they can be.





