I always hear that photos of cosmological objects (like that photo of the Ring Nebula in your previous post) have been enhanced in some way. What would those things look like if we just saw them exactly as the telescope or camera picked them up? And when the photos are enhanced, are they always enhanced the same way, or does the formula vary?

Finally, does the enhancing serve some scientific purpose, or is it done basically to make the pictures prettier? (I suppose that's a scientific purpose too, in the long view, since pretty pictures make it easier to get funding. Don't worry, I won't tell!)

So, it's totally true: the final images given out for press releases have usually been heavily processed from the original raw images. Just as you might throw your party photos into Photoshop to remove your friends' red-eye, there is a set of "standard" processing steps for astronomical images.

That said, there's usually no outright deception the way advertisers airbrush images - no astronomer is going to try to make their planet look skinnier or add lolcat tags. To understand this a little better, let's talk about how modern astronomical images are actually taken.
First off, almost all optical images are taken with a Charge-Coupled Device (CCD) mounted to the back of a telescope. This is the same kind of chip that's in your ordinary digital camera, albeit more sensitive and more expensive. Essentially, it's just a thin piece of silicon divided into a finely-spaced grid of cells. Each cell in the grid holds electrons, which can be excited when a photon strikes. At the end of an exposure, each cell reports how many energetic electrons it contains. Our image simply translates each cell into a pixel, and the brightness of that pixel is just the number of electrons the cell counted.
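That translation from electron counts to pixel brightness can be sketched in a few lines - a toy example with made-up numbers, not any real pipeline's code:

```python
import numpy as np

# Minimal sketch of the readout step described above: the CCD hands back a
# grid of electron counts, and each count becomes the brightness of one
# pixel in a grayscale image (here scaled into the usual 8-bit 0-255 range).
def counts_to_image(electron_counts):
    counts = electron_counts.astype(float)
    return np.round(255 * counts / counts.max()).astype(np.uint8)

counts = np.array([[0, 500], [1000, 250]])  # toy 2x2 readout
img = counts_to_image(counts)               # [[0, 128], [255, 64]]
```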
Now, notice there's absolutely no color information here. The CCD just reports the number of excited electrons, and doesn't know anything about whether it was a red photon or a blue photon which excited it...so this just produces a black & white photo. This means we have to use filters if we want to get any color information. If we put, say, a red filter on our CCD before taking the image, then we know only red photons can get through.
So, first we take an exposure with a red filter, then another with a green filter, and then another with a blue filter. We combine them all at the end to produce our fancy color image.
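The combination step itself is conceptually simple. Here's a sketch in Python - the array names and the crude min-max scaling are my own; real pipelines do careful calibration and alignment first:

```python
import numpy as np

# Toy example: three grayscale exposures (2D arrays of electron counts),
# one per filter, stacked into a single H x W x 3 color image.
def combine_filters(red, green, blue):
    """Stack three filtered exposures into an RGB image, scaled to [0, 1]."""
    rgb = np.stack([red, green, blue], axis=-1).astype(float)
    rgb -= rgb.min()
    if rgb.max() > 0:
        rgb /= rgb.max()
    return rgb

# 2x2 stand-ins for real CCD frames: one bright spot per filter
r = np.array([[100, 0], [0, 0]])
g = np.array([[0, 100], [0, 0]])
b = np.array([[0, 0], [100, 0]])
img = combine_filters(r, g, b)  # shape (2, 2, 3), values in [0, 1]
```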
Okay, you're probably already asking, "then how does my digital camera take color photos all at once without any color filters?" The answer is that it uses filters all the time - here's a schematic of the filter mosaic used in most digital camera CCDs. By filtering alternating pixels with different colors, the camera can get an image in each filter in only one exposure...albeit at lower resolution than the entire grid. The fancy camera software then interpolates these separate, staggered color images to produce a single color image.
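As a rough illustration of how those staggered color samples sit on the grid, here's a sketch assuming the common RGGB layout (the actual mosaic pattern varies by camera, and real demosaicing interpolates far more cleverly):

```python
import numpy as np

def split_bayer(mosaic):
    """Pull the staggered color samples out of an RGGB Bayer mosaic.
    Each returned array is half the resolution of the full grid."""
    r  = mosaic[0::2, 0::2]   # red sits on even rows, even columns
    g1 = mosaic[0::2, 1::2]   # green occupies the two remaining
    g2 = mosaic[1::2, 0::2]   #   diagonal positions
    b  = mosaic[1::2, 1::2]   # blue on odd rows, odd columns
    return r, (g1 + g2) / 2.0, b

mosaic = np.arange(16).reshape(4, 4)  # toy 4x4 sensor readout
r, g, b = split_bayer(mosaic)         # each 2x2 - a quarter of the pixels
```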
So with all this said, let's take a look at an actual single raw image of a galaxy:
You'll want to click on the image above to look at the original with all its glorious artifacts. Let's also take a look at a close-up with some artifacts highlighted:
So, there are several issues we have to contend with to make this into a "pretty picture".
In red, I've highlighted a particularly annoying cosmic ray trail (though they're all over the image). Unlike digital camera photos, which only open the shutter for a fraction of a second, astronomical exposures - particularly of faint objects - can be upwards of an hour long. During this time, high-energy particles known as cosmic rays - which are always whizzing around - have a much greater chance of interacting with your CCD and exciting electrons completely independently of any photons coming through the telescope. The annoying ones come in at an oblique angle to the CCD, leaving a trail of excited electrons across the chip. The even more annoying ones do this directly over the CCD cells you're using to capture an image of your object. Thankfully, there are some pretty good cosmic ray removal packages out there which use sophisticated detection algorithms to remove these trails...so that's a processing step right there.
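Real cosmic-ray removal packages use sophisticated detection algorithms, but the simplest trick - when you have several exposures of the same field - is a per-pixel median combine, sketched here with made-up numbers:

```python
import numpy as np

# A cosmic ray hits a given pixel in (usually) only one frame, so the
# per-pixel median across several aligned exposures ignores the outlier.
def median_combine(frames):
    return np.median(np.stack(frames, axis=0), axis=0)

# Three toy 3x3 frames of the same field, value 10 everywhere...
frames = [np.full((3, 3), 10.0) for _ in range(3)]
frames[1][1, 1] = 5000.0          # ...with a cosmic ray hit in frame 1
clean = median_combine(frames)    # clean[1, 1] is back to 10.0
```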
In blue, I've highlighted pixel bleed. We're going for a long exposure of a pretty faint galaxy here, so any bright stars in the field will become oversaturated. In essence, the CCD cell containing the image of the bright star begins to overflow with energetic electrons, pouring them out into adjacent cells.
In green, I've highlighted a row of bad pixels. With millions of cells across the entire chip, statistically some are eventually going to fail. For earth-based observatories, it's untenable to keep throwing out CCDs which cost many thousands of dollars whenever a few pixels go out...so you work around it. For spacecraft, meanwhile, there's really nothing you can do about bad pixels even if you had the money to replace the chip.
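A very rough sketch of one common workaround: keep a mask of the known-bad pixels and patch each one with the median of its surviving neighbors. (Real pipelines maintain per-chip bad-pixel maps; the names here are my own.)

```python
import numpy as np

def patch_bad_pixels(image, bad_mask):
    """Replace each masked pixel with the median of its good neighbors."""
    patched = image.astype(float).copy()
    for r, c in zip(*np.where(bad_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, image.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, image.shape[1])
        window = image[r0:r1, c0:c1]            # 3x3 neighborhood (clipped)
        good = window[~bad_mask[r0:r1, c0:c1]]  # drop the bad pixels
        patched[r, c] = np.median(good)
    return patched

image = np.full((3, 3), 10.0)
image[1, 1] = 0.0                      # a dead pixel reading zero
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
fixed = patch_bad_pixels(image, mask)  # fixed[1, 1] == 10.0
```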
There are a couple of other artifacts noticeable in the original image as well. Notice the steady gradient of dark-to-light in the background. Unfortunately, not all the pixels have the same sensitivity. Send 100 photons to one cell, and you might get 50 excited electrons...send them to another cell, and you might only get 40.
You have to account for this by taking "flat fields". Essentially, you take images (ideally just before or just after taking your astronomical images) of a uniformly lit surface with each color filter. The idea is that the surface should be sending a constant number of photons to each cell, so the only signal you'll see is the change in sensitivity across the CCD. You then divide the astronomical image by the flat field on a pixel-by-pixel basis to remove this sensitivity effect. Finding a truly flat field, though, can be a chore in itself...oftentimes the best flat field you'll get is an image of the twilight sky before the stars come out.
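The division itself is simple. A minimal sketch, assuming the flat frame is normalized to a mean of one before dividing (real reduction also subtracts bias and dark frames first):

```python
import numpy as np

def flat_correct(science, flat):
    """Divide a science frame by its normalized flat field, pixel by pixel."""
    flat = flat.astype(float)
    return science / (flat / flat.mean())

# Toy example: pixel (1,1) is only half as sensitive as its neighbors,
# so it reports half the counts despite seeing the same true flux.
flat    = np.array([[2.0, 2.0], [2.0, 1.0]])
science = np.array([[100.0, 100.0], [100.0, 50.0]])
corrected = flat_correct(science, flat)  # uniform again: 87.5 everywhere
```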
Another artifact you may notice in the original is the weird wavy pattern, particularly noticeable on the left. Ideally you want your CCD chip to be as thin a piece of silicon as possible - this makes it more sensitive. However, particularly at longer wavelengths of light, photons reflecting off the back surface of the CCD can interfere with photons hitting the front surface and produce thin-film interference - very similar to the wavy colored patterns you'll see in soap bubbles or with oil on water. Hopefully this, too, will be removed by flat-fielding.
Finally, as for the purpose of enhancing images, all of the above steps are necessary to get good science. Otherwise, you're just measuring your signal buried in a whole lot of noise. If you're going to take an image this far, though, you might as well go one step further to make a press release photo.
This serves several purposes, not least of which is sharing your own fascination with an astronomical object with the general public. Imagine if the Hubble Space Telescope *never* made press release photos available and was only used for hard science in the journals...public support wouldn't be nearly what it is today. Besides, it's the taxpayer's dollar which funds it - the least we can do is give them some pretty pictures in return.
So, if you want to make a pretty picture, there's one more step you'll have to take - and this is a big one - because the above image was taken through an infrared filter. By definition, the human eye can't see this wavelength of light, so if we were to represent it in "true-color", the entire image should be black.
Creatively mapping the various single-filter images to RGB space, along with some tweaking of colors, needs to happen for this to be visible - and aesthetically pleasing - to human vision. This color manipulation doesn't have the same tried-and-true formula as the above sequence of processing steps; the image is often just adjusted until one gets something that "looks good".
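One way that mapping might look in code - a hedged sketch, not any observatory's actual recipe. The channel assignments and the `softening` knob are purely aesthetic choices; the asinh stretch is a common trick for compressing the huge dynamic range of astronomical data into something a screen can show:

```python
import numpy as np

def false_color(chan_r, chan_g, chan_b, softening=10.0):
    """Assign three filtered exposures (infrared, narrowband, whatever)
    to the R, G, and B display channels, with an asinh stretch."""
    rgb = np.stack([chan_r, chan_g, chan_b], axis=-1).astype(float)
    stretched = np.arcsinh(rgb / softening)
    return stretched / stretched.max()  # scale into [0, 1] for display

# Toy 2x2 "exposures" spanning a wide brightness range
ir1 = np.array([[0.0, 10.0], [100.0, 1000.0]])
img = false_color(ir1, ir1 * 0.5, ir1 * 0.1)  # shape (2, 2, 3)
```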