An Insight to PixInsight

A colleague and fellow astroimager (Welford Observatory), with whom tips and tricks in the coffee lounge at work are exchanged, suggested I try PixInsight (PI) for processing some astroimages. I tried the demo version once before and I, like many others before me, found it very complicated and daunting and gave up quickly.

He pointed me towards Harry’s Astro Shed, as he has posted various get-you-started video tutorials for PI. So I bit the bullet, paid for a licence (£lots) and got stuck in.

Yes, it’s still very complicated, but with Harry’s help I also discovered it’s very powerful. The calibration routines for applying bias, dark and flat frames to the raw data are the best I’ve ever used. Move over Nebulosity 3, you’ve been outdone.

After practising on some old images, it was time to try it out on some new data using the recommended workflow.

The first thing to do was to take a new library of dark frames. They shouldn’t change from one imaging session to another, so you can take them once, create a master for different exposure times, and use them over and over again. I did these on a cloudy night; a bit time-consuming, but the system can be left to do its thing.

The next night was clear so I decided to have a go at the region around the Horsehead Nebula in Orion. This is a region full of nebulosity which responds well to H-alpha filters. I took 36 exposures of five minutes each, not really long enough for faint nebulosity but I wanted to get as many exposures as possible in one go as a test of PI.

I’m often a bit lazy when capturing flats but I was careful to make sure I got good ones and then calibrated, registered and integrated the images in PI. With some histogram stretching, denoise and contrast enhancement I’ve ended up with possibly the best astroimage I have ever taken.

Horsehead Nebula Region

Be sure to click through to Flickr and see the full resolution version.

I’m now a total PI convert. It’s expensive and complicated, but now I’ve got the hang of the basics there’s no stopping me!


(357439) 2004 BL86

There are currently (as of 11th Feb 2015) 1544 Potentially Hazardous Asteroids (PHAs). These are small bodies that have the potential to make threatening close approaches to the Earth.

Most are very small, from a few metres to a few tens of metres across, but there are quite a few larger ones a few kilometres across. Clearly if one of these larger bodies hit the Earth it would make a mess. A big mess. Fortunately no PHAs are known to be on a collision course with the Earth.

2004 BL86 is one such PHA, thought at the time to be around 600–700m across. Its orbit is known well enough that it has been given a numeric designation (its name means it was discovered in 2004, with ‘BL86’ encoding the half-month and sequence of its discovery that year).

On 26/27 January 2015 it made a close approach to Earth, about 3.1 times the Earth-Moon distance away at closest. Small asteroids come close to Earth all the time, often closer than this, but what made this one unusual was the size of the object. This large size meant it would be bright enough to be readily visible to amateur astronomers.

As it approached Earth professional observatories started observing the asteroid using radar and it was discovered that it was an almost spherical object (unusual for small bodies) around 325m across. They also discovered it has a small moon (this is not unusual for PHAs).

After observing the asteroid visually through my telescope (likely to be the highest numbered asteroid I will ever view visually) I took a couple of videos using my asteroid occultation setup and a sequence of 20s images taken 20s apart. Once aligned and stacked these images show the asteroid’s movement as a dashed line.

(357439) 2004 BL86
The asteroid appeared about as bright as a 9th magnitude star (typically stars down to 6th magnitude are visible naked-eye from a very dark site).

Horsehead Nebula

B33 – The Horsehead Nebula

Imaging opportunities have been few and far between this autumn. Whenever it has been clear there has been a bright Moon, it has been too windy or I’ve been busy. But at last we had a clear night with no Moon and I could finally image something!

My choice was the Horsehead Nebula in Orion, an object I’ve not imaged before. The Horsehead is a famous dark nebula (Barnard 33) shaped like a horse’s head, silhouetted in front of a large bright emission nebula (IC 434, also catalogued as Sh2-277); the Flame Nebula is the separate nebula NGC 2024 nearby. The whole complex surrounds Alnitak, the lefthand star in the distinctive line of three stars that make up the Belt of Orion. The Horsehead is found just below Alnitak.

I imaged using my QHY22 camera and a H-alpha filter. This filter only transmits light from a specific deep-red visible spectral line, emitted when a hydrogen electron falls from its third to its second lowest energy level. H-alpha light is interesting to amateur astronomers as it’s emitted by emission nebulae, and local light pollution (even moonlight) won’t interfere with the imaging. By imaging through a H-alpha filter you can get great, high-contrast images of nebulae. The trade-off is that you need long exposures (typically 5-15 minutes each), and for that you need really good auto-guiding.
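That 656nm figure falls straight out of the Rydberg formula for hydrogen. As a quick sanity check, here’s a small sketch using the standard Rydberg constant for hydrogen (the function name is my own):

```python
# Sketch: check the H-alpha wavelength with the Rydberg formula,
# 1/lambda = R_H * (1/n_f^2 - 1/n_i^2), for the n=3 -> n=2 transition.

R_H = 1.0967758e7  # Rydberg constant for hydrogen, in m^-1


def balmer_wavelength_nm(n_initial: int, n_final: int = 2) -> float:
    """Wavelength (nm, vacuum) of a hydrogen transition n_initial -> n_final."""
    inv_wavelength = R_H * (1 / n_final**2 - 1 / n_initial**2)
    return 1e9 / inv_wavelength


print(f"H-alpha: {balmer_wavelength_nm(3):.1f} nm")  # ~656.5 nm in vacuum (656.3 nm in air)
```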

Auto-guiding involves a second telescope mounted in parallel with the imaging telescope, carrying a second camera that takes continuous short exposures (typically 1-2s long). A clever bit of software compares the position of a ‘guide star’ on each exposure to the previous one; if the star has moved, due to errors in the telescope’s tracking of the Earth’s rotation, the software automatically applies a correction to the telescope’s position so that the star stays in exactly the same place whilst you are imaging. Without guiding you are limited to shorter exposures of around one minute each, as the tracking errors add up and the stars start to trail slightly.
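The compare-and-correct loop can be sketched in a few lines. This is my own illustrative simplification, not what any particular guiding package does internally; the function names are invented, and real software also handles calibration, conversion to RA/Dec pulses, and mount backlash:

```python
import numpy as np


def centroid(frame: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centroid (x, y) of the guide star in a small frame."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (xs * frame).sum() / total, (ys * frame).sum() / total


def guide_correction(frame: np.ndarray, ref_xy: tuple[float, float],
                     gain: float = 0.7) -> tuple[float, float]:
    """Pixel offset back towards the reference position, scaled by an
    'aggressiveness' gain. Each guiding cycle the mount would be nudged
    by this amount (after converting pixels to RA/Dec motion)."""
    x, y = centroid(frame)
    return gain * (ref_xy[0] - x), gain * (ref_xy[1] - y)
```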

My guiding setup has been working really well since I set up the Starshed Enterprise and I can usually get five minute exposures with no visible star trailing. To reduce noise in the resulting image you need to stack as many exposures as you can get and I typically aim for a minimum of 20 exposures. Twenty exposures of five minutes each is around two hours of imaging including taking dark frames for calibration.

It’s preferable to get all of the imaging done before the object crosses the meridian (due south), as although you can continue imaging for a while afterwards, eventually the telescope tube will hit the mount and you have to perform what is called a ‘meridian flip’. This involves swinging the telescope over to the other side of the mount so it can follow the object into the western sky, and is a bit of a faff. On this evening I started early enough, and the Horsehead was far enough east, that I didn’t need to perform a flip for two hours’ worth of imaging.

In the end I managed to get 19x300s exposures. I had to stop as the secondary mirror was completely dewed up (a bit like how a bathroom mirror steams up after a shower). I could have cleared it with a quick blast from a 12V hairdryer but the whole observatory was dripping with dew and it was close to flip time so I packed up. I need to experiment with heaters or fans to prevent dewing up of the secondary as it has been a bit of a pain.

Capturing, stacking and processing was all done in Nebulosity 3. The resultant image is still a little bit noisy (grainy) but I’m pretty pleased with the result. I plan to capture some more exposures at some point to improve the image further. I might even be able to take some of the surrounding area and create a mosaic image as the nebulosity stretches way beyond the field of view of my setup.

The Tulip Nebula

Sh2-101 — The Tulip Nebula

Emission nebula in Cygnus. Taken with 300mm F/4 Newtonian telescope with QHY22 camera 2×2 binned. 12x300s exposures with H-alpha filter and TS Coma Corrector. Autoguided with QHY5-II. Captured and processed in Nebulosity 3.

Sh2-101 — The Tulip Nebula

M51 – Whirlpool Galaxy

I’ve been taking some images from the Starshed Enterprise when it has been clear over the past few weeks. Experimenting with different settings etc. I’ve now got a nice system going, autoguiding is working well and I’m starting to get some nice images.

Messier 51 (the Whirlpool Galaxy) was the first galaxy shown to have a spiral structure, when Lord Rosse observed it in 1845 with his giant 72″ telescope in Ireland. It is found in Canes Venatici, below the tail of Ursa Major (the handle of the Big Dipper), so it is well placed for observing at this time of year as it’s almost directly overhead. It is interacting with a smaller companion (NGC 5195), seen as the compact galaxy at the end of a dark dust lane in one of the spiral arms. Tongues of material are being thrown out from the system as they interact; three distinct fingers can be seen stretching upwards in my image.

M51 - Whirlpool Galaxy

Supernova 2014J

M82 - Supernova 2014J by jochta
M82 – Supernova 2014J, a photo by jochta on Flickr.

A bright supernova in M82 was discovered on January 21st, the closest Type 1a supernova for 40 years. Here’s my image of it on January 25th when it was about magnitude 11.0. I have the collimation of my new 12″ telescope sorted now and this is a stack of 15x60s images captured in Nebulosity using an Atik 16IC-S camera. The supernova is below-right of the centre of the galaxy in the centre of the image.

Imaging Jupiter – My Methodology

Jupiter will be at opposition (opposite the Sun in our sky) in late October. This is when it is closest to Earth and therefore largest and brightest. It’s around this time it becomes very noticeable as a bright yellowish beacon in the east as it gets dark. You cannot mistake it for any other object as it is far, far brighter than any star.
This is also the time when every astrophotographer tries to get good images of Jupiter (and its four Galilean Moons). I’m no exception. I’ve taken a few images of Jupiter before but I don’t really have the right telescope for planetary imaging. Telescopes with long focal lengths (high focal ratios) are usually better and I have a short focal length widefield telescope as my main instrument.

This doesn’t deter me from trying, of course, and I have a DMK 21AU04.AS camera for just this job. The camera takes monochrome AVI videos of whatever it sees, and by taking the individual frames from the video and stacking only the very sharpest you can beat the seeing. Seeing is where the object you are watching wobbles about and flicks in and out of focus due to air currents in the atmosphere.
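The ‘stack only the sharpest frames’ idea can be sketched like this. It’s a toy version that assumes the frames are already aligned; the sharpness metric is just illustrative (Registax’s own quality ranking is more sophisticated):

```python
import numpy as np


def sharpness(frame: np.ndarray) -> float:
    """Crude sharpness metric: variance of a Laplacian-style second difference.
    Blurry frames have smoother gradients, so they score lower."""
    lap = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
           - 4.0 * frame)
    return float(lap.var())


def stack_best(frames: list[np.ndarray], keep_fraction: float = 1 / 3) -> np.ndarray:
    """Average the sharpest fraction of the frames (alignment omitted here)."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return np.mean(ranked[:n_keep], axis=0)
```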

The other evening (24th September) I used the DMK camera, my 200mm F/4 Newtonian telescope, Astronomik RGB filters in a filter wheel and a Televue 5x Powermate to take three videos of Jupiter of 1000 frames each at 30fps. One through a red filter, one through green and one through blue. This gives a decent number of frames to work with in a not too massive file. The Powermate makes Jupiter a decent size on the imaging chip, we want to cover as many pixels as possible to get the maximum amount of detail.

By stacking the best 1/3 of the frames in Registax software I obtained three monochrome images of Jupiter. The clever bit comes when you combine these three images together, making the monochrome image taken through the red filter monochrome red, green as green and blue as blue. When you do this, as if by magic you get a colour image of Jupiter! I used Astra Image 3.0SI software to do the combining.
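The channel combination itself is conceptually simple. Here’s a minimal numpy sketch of the idea; this isn’t Astra Image’s actual method, and the per-channel normalisation is my own assumption to stop one filter dominating:

```python
import numpy as np


def combine_rgb(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Combine three aligned monochrome frames into one H x W x 3 colour image,
    stretching each channel to the 0..1 range first."""
    channels = []
    for chan in (r, g, b):
        chan = chan.astype(float)
        span = chan.max() - chan.min()
        channels.append((chan - chan.min()) / span if span else chan * 0.0)
    return np.dstack(channels)
```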

After creating the RGB image I did a small amount of post-processing in Astra Image 3.0SI. Deconvolution works wonders to bring out the sharpness, followed by a little curves and colour adjustment. This is mostly trial and error and personal preference, it’s easy to overdo post processing and end up with an over sharpened or too saturated image if you’re not careful.

Voila, one pretty decent colour image of Jupiter, showing details in the cloud bands and the Great Red Spot (which is actually a pale pink colour).


Notice the less abrupt edge to Jupiter on the right-hand side. Before opposition we are able to see slightly around to the night-side of the planet here and the cloudy atmosphere of the planet means there isn’t a sharp edge between night and day like there is on the Moon.

This is probably the best image I’ve taken of Jupiter so far helped by the exceptionally good seeing on this evening. Hopefully we will get more good evenings as opposition approaches and Jupiter gets a little bit bigger. I will be out trying to better this and/or get some satellite events too, e.g. moon shadows and transits.

Thanks to the power of twitter and retweets this image has had over 1500 views on Flickr!

New Astrogallery

M33 - Triangulum Galaxy
OK, I’m happy to release my new Astro Images gallery now. I’m using Flickr to store images, sharing, comments etc. I’ve written a WordPress plugin which uses the JSON feed from Flickr to display the images here.

I’ve not moved all the images across to Flickr yet but I’m also having a bit of a cull as some of my earlier images, especially of Messier objects, don’t really look that good anymore. I’m planning to take better images of the objects this year. I’ve also reorganised the images into (I think) better categories.

So, over the next few weeks I’ll be putting up old, reprocessed and new images onto Flickr and they will appear in the Astro Images gallery on here.

Of course you can skip all of this and just go straight to my Flickr gallery if you want!


My latest astroimaging acquisition is a Hydrogen-alpha filter. This filter has a very narrow bandpass and only passes a band of light around 13nm wide centred on a wavelength of 656nm (for comparison, human vision spans roughly 400 to 700nm, and night-adapted vision is at its most sensitive well away from this deep-red line). Some types of nebulae (emission nebulae, planetary nebulae and supernova remnants) glow particularly strongly at a wavelength of 656nm due to the excitation state of the hydrogen gas in the nebula.

The advantage of using the filter is that it cuts out all light pollution and all other ‘visible’ light, allowing only the nebula and stars to show. The CCD chips in the cameras are sensitive to this light, so you can achieve very high contrast images of nebulae, impossible with filters that let all visible light through. The images are inky black where there is no nebulosity, so even the faintest wisps can show.

Over the coming weeks and months expect to see images taken using this filter appear in the astroimages gallery.


I’ve invested in some colour filters and a filter wheel for my astroimaging setup so I’m going to be posting some colour astroimages up over the coming months, I’ve already posted my first LRGB image of M27 into the gallery.

I thought I’d take a few words to explain how this works. The CCD camera I have is monochrome; these are generally better than one-shot colour cameras as they have a higher resolution and there are no filters in front of the chip. To get monochrome images it’s just a case of capturing multiple exposures and stacking them in software to increase the signal-to-noise ratio. To get colour you need to take monochrome images through red, green and blue filters. The filters are very precisely made so they pass only the correct wavelengths; they also block any infrared light, which the cameras are sensitive to and which can cause problems. They are also manufactured so that each filter focuses the light from the telescope to the same place, meaning you don’t have to refocus when you change filters.
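The reason stacking works is that averaging N frames cuts the random noise by roughly the square root of N. A quick simulated demonstration with synthetic numbers (not real camera data):

```python
import numpy as np

rng = np.random.default_rng(42)


def stacked_noise(n_frames: int, read_noise: float = 10.0) -> float:
    """Std-dev left after averaging n_frames noisy exposures of 10,000 pixels,
    each with a 'true' value of 100 plus Gaussian noise."""
    frames = 100.0 + rng.normal(0.0, read_noise, size=(n_frames, 10000))
    return float(frames.mean(axis=0).std())


# Averaging 16 frames cuts the noise by ~sqrt(16) = 4:
print(stacked_noise(1), stacked_noise(16))  # roughly 10 vs roughly 2.5
```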

The filters are held in a filter wheel: a battery-powered mechanical device that rotates the chosen filter into the light path at the push of a button, so there’s no need to dismantle the setup to change filters.

So what is LRGB? An LRGB image is made up of Luminance data (monochrome) plus Red, Green and Blue data. First you capture a lot of high quality monochrome data; this provides all of the detail in the final image. You then capture some data through each of the coloured filters. This colour data can be taken with much shorter exposures and at far lower quality. It can even be binned, i.e. each 2×2 square of pixels is summed together to make one pixel, and you can also blur it with a Gaussian blur filter to reduce colour noise in the final image. Software is then used to combine the three images taken through the coloured filters into what looks like a blurred, low resolution colour image of the object.
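Binning is easy to picture in code. A minimal sketch of 2×2 summed binning (cameras can do this in hardware during readout, but the arithmetic is the same):

```python
import numpy as np


def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of pixels into one pixel (dimensions must be even).
    Halves the resolution but quadruples the signal collected per output pixel."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```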

Now comes the clever bit: the human eye is really good at picking out detail in monochrome images, but rubbish at it in colour. So what you do is layer the colour data behind the monochrome (luminance) data. Lo and behold, you have a high resolution colour image!
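In code terms, the layering step amounts to rescaling the colour so that its brightness comes from the high-quality luminance data. A rough sketch of the idea; real imaging software does this more carefully, typically in a proper colour space rather than a simple per-pixel scaling:

```python
import numpy as np


def lrgb(lum: np.ndarray, rgb: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Replace the brightness of an H x W x 3 colour image with a separate
    H x W luminance frame: scale each pixel's RGB so its mean brightness
    matches the luminance value, keeping the colour ratios."""
    brightness = rgb.mean(axis=2, keepdims=True)
    scaled = rgb * (lum[..., None] / (brightness + eps))
    return np.clip(scaled, 0.0, 1.0)
```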