Sample The Best - Image Resampling


Making the most of image resolution through resampling

Tom Arah discovers that the devil is in the detail when you want to make the most of every pixel.

Last month I looked at the issue of bitmap resolution. Unlike scalable vector images, bitmap images are defined as a fixed grid of pixels and so are resolution-dependent. This means that to guarantee bitmap image quality you have to ensure that there are enough pixels for your intended target output. For Web output there is a simple one-to-one relationship between each source image pixel and each target screen pixel, while for print the complicating factor of halftoning means that you ideally need 300 image pixels per inch for maximum quality output.

This is fine if you know your desired output size and are scanning an original or converting a vector image to a bitmap, as you can control the number of pixels as required. In the real world, however, you don't necessarily know your final output size and are often dealing with images that have already been digitized. In other words the pixel dimensions are already set and you have to make the most of them. In practice how you go about this tends to split into two camps: resizing images downwards for the Web and upwards for print. As we'll see both processes are intimately related, but it's useful to look at each in turn.

At first sight resizing an image either up or down hardly seems to be an issue. In any photo editor you'll find an Image Size command that lets you change an image's dimensions. As we saw last month, however, changing an image's physical dimensions doesn't necessarily change its pixel dimensions. If the number of pixels remains identical, all that changes is the internal resolution parameter which sets the default size for printing.

Rather than resizing, what we need is to explicitly change the number of pixels in the image, a process called Resampling - or Downsampling in this case, as we are reducing size. To do this in Photoshop, you need to select the Resample Image option at the bottom of the Image Size dialog, which then lets you change the Width and Height settings to the exact number of pixels required (this can also be set as a percentage of the current dimensions). When you click OK, after some processing, the new smaller version of the image will appear.

An image's pixel dimensions are changed by resampling.
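The same explicit resampling can be done in code. Here's a minimal sketch using Python and the Pillow imaging library (the file names are just examples):

    from PIL import Image

    img = Image.open("photo.jpg")
    print(img.size)              # original pixel dimensions, e.g. (400, 400)

    # Resample to exact pixel dimensions - the old grid is discarded and
    # every output pixel is computed afresh from the original. Pillow picks
    # a sensible default interpolation; more on the methods below.
    small = img.resize((237, 237))
    small.save("photo_237.jpg")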

It's worth thinking a bit more about what has happened here. Most users tend to think that this downsizing process is simple and transparent - essentially throwing away the pixels that are no longer needed. If you were halving the image size from 400 x 400, for example, you would simply throw away every other pixel. But how would you produce a version of the image with 237 x 237 or 123 x 123 pixels? A little thought shows that to produce pixel-accurate Web images you need far more flexibility than simply discarding pixels can offer.

Rather than throwing away redundant pixels, the Resampling process actually has to start again from scratch. In effect a new empty bitmap of the desired dimensions is created and the host application then goes back to the original image to work out each new pixel's value. In other words you are tearing down the old image and using it to build the new. It's a drastic procedure that shouldn't be undertaken lightly, so only resample when necessary and especially try to avoid multiple resamplings, such as those involved in repeatedly resizing an image layer or floating selection.
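To make that tearing down and rebuilding concrete, here's a deliberately naive Python sketch of the general process using numpy. The half-pixel co-ordinate mapping is one common convention rather than any particular application's, and the sample function stands in for whichever interpolation method is chosen - the subject of the rest of this article:

    import numpy as np

    def resample(src, new_w, new_h, sample):
        # Create a new empty bitmap of the desired dimensions...
        h, w = src.shape[:2]
        out = np.empty((new_h, new_w) + src.shape[2:], dtype=src.dtype)
        for j in range(new_h):
            for i in range(new_w):
                # ...then go back to the original image to work out each
                # new pixel's value from its mapped source co-ordinates.
                x = (i + 0.5) * w / new_w - 0.5
                y = (j + 0.5) * h / new_h - 0.5
                out[j, i] = sample(src, x, y)
        return out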

Resampling is not a simple task, nor is it transparent: its ability to produce arbitrary image sizes means that there can be no one-to-one correspondence between input and output pixels. In fact there are a number of different ways in which the new pixel values can be derived, called Interpolation Methods, each of which produces different results and offers its own advantages and disadvantages. I'm going to look at the three main methods as Photoshop presents them in the dropdown list at the bottom of the Image Size dialog but, as these are standards, you'll find them across the board (though often under other names).

The simplest interpolation method is Nearest Neighbour. Essentially the new bitmap grid is overlaid on the original grid and each new pixel takes the value of whichever original pixel lies closest to its co-ordinates. As well as being much the simplest system in terms of processing, the Nearest Neighbour approach has the immediate advantage that the colours used in the image are unchanged - particularly important when dealing with indexed GIF images where the palette of colours is already limited and fixed.
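Under the resample sketch above, a nearest-neighbour sample function needs nothing more than rounding and clamping:

    def nearest_neighbour(src, x, y):
        # Take whichever original pixel lies closest to the mapped
        # co-ordinates; no new colour values are ever introduced.
        h, w = src.shape[:2]
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        return src[yi, xi]

    # e.g. smaller = resample(pixels, 237, 237, nearest_neighbour)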

Sadly the system has just as clear a disadvantage. The co-ordinates of each output pixel are very unlikely to directly overlay an original, and within each image some pixels will be a closer match than others. Worse, the way the system works (especially if rotation is also involved, for example when you are free transforming a layer) means that while many original pixels are discarded, others may be used more than once. This arbitrary nature of pixel value selection leads to the break-up of image features, especially angled lines, which results in "Stair Stepping" and the dreaded "Jaggies".

When shrinking typical JPEG continuous-tone photographs this probably won't be too noticeable, but with the clearly-defined colours of GIF images it can immediately render the image unusable. In a way this image breakup is unavoidable - with a downsampled screenshot, for example, there simply are no longer enough pixels to define legible text. This is why, when using screenshots on the Web, you should try to use them size-for-size. If you have to use smaller images, try cropping them to the point of interest, and if you have to downsample, try to use an integer divisor to minimize the damage.

Alternatively you could try another interpolation method. Bilinear resampling also maps target co-ordinates to original co-ordinates but then, rather than simply picking the nearest neighbour, it averages the values of the four surrounding pixels, weighted according to their relative distance (you will need to make sure that the image mode isn't Indexed, so that the possible colour values aren't fixed). This again has an obvious advantage in that each original pixel's values contribute to the output image, so the arbitrary jaggies are smoothed out to produce anti-aliased lines.

Different interpolation methods produce very different results.

The downsides are just as obvious. To begin with there is the greater processing involved. More to the point, Bilinear averaging means that, for a typical continuous-tone JPEG photo, every single pixel in the downsized image will have completely different values to those in the original! This is less the case for the typical flat colours of GIF images, but here, where different colours meet, rather than a sharp divide there will now be an anti-aliased line where colours blend. As well as introducing often unwelcome softening and blurring, this anti-aliasing doesn't suit the GIF compression system and can add enormously to file size.
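For comparison, here's what a bilinear sample function might look like under the same sketch - each output value is blended from the four surrounding originals, weighted by distance:

    def bilinear(src, x, y):
        h, w = src.shape[:2]
        x0f, y0f = np.floor(x), np.floor(y)
        fx, fy = x - x0f, y - y0f    # fractional distances, 0 to 1
        x0 = min(max(int(x0f), 0), w - 1)
        x1 = min(max(int(x0f) + 1, 0), w - 1)
        y0 = min(max(int(y0f), 0), h - 1)
        y1 = min(max(int(y0f) + 1, 0), h - 1)
        # Weighted average of the four surrounding pixels.
        top = src[y0, x0] * (1 - fx) + src[y0, x1] * fx
        bottom = src[y1, x0] * (1 - fx) + src[y1, x1] * fx
        return np.round(top * (1 - fy) + bottom * fy).astype(src.dtype)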

So far we've concentrated on downsampling for use in Web work. For print work, without the one-to-one image-pixel to screen-pixel mapping, absolutely accurate pixel dimensions aren't essential, so manual downsampling is less of an issue. In fact you can usually leave it to your printer driver to downsample your original bitmap automatically, especially as it is going to have to do this anyway when it maps the image's pixels to the spots of the halftone screen. The main reason you are likely to want to downsample manually is to avoid unnecessary processing time when editing and printing.

The situation is completely different, however, when increasing the pixel dimensions of a bitmap image. This Upsampling is a far more common task for the simple reason that many bitmaps, such as Web and digital camera images, simply don't contain enough pixel information for their intended print size (remember the rule of 300 image pixels per inch for press work). Upsampling in Photoshop is done from the same Image Size dialog and works in exactly the same way as downsampling - mapping grid co-ordinates and then determining pixel values - but because new pixels are being added the results are very different.

With Nearest Neighbour interpolation the effect of jaggies becomes even more pronounced when upsampling and, with larger size increases where each input pixel ends up as multiple output pixels, clear blockiness can result - the dreaded Pixelation. Bilinear interpolation comes into its own to prevent this but leads to a new problem. Because Bilinear interpolation works by averaging the four surrounding pixels, there is a clear smoothing effect and a loss of edge definition and distinguishing detail - exactly the features that our eyes are programmed to look for in an image. It is this softness and lack of life, along with the rarer pixelation, that are the sure signs that an image is being printed at a size beyond its pixel-based limits.

The sure signs of upsampling are pixelation and/or softness.

So is there a better alternative? The third and final interpolation method that Photoshop offers is called Bicubic, and it works by determining a weighted average of the 4 x 4 array of surrounding pixels. Because it is averaging 16 pixels' values, many users assume that it produces the softest results. In fact this isn't the case. The extra pixels and processing time are used to work out more precisely how the colour is changing in the area surrounding the new pixel, so that a more accurate sample is produced. It's rather like the difference between joining up the points on a curve with straight lines or with a best-fit curve.
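As a rough illustration of that curve-fitting, here is one widely published formulation - the Keys cubic convolution kernel - used to weight the 4 x 4 neighbourhood. This is a textbook version, not necessarily Photoshop's exact implementation, and the final clamp assumes an 8-bit channel:

    def cubic_kernel(t, a=-0.5):
        # Keys cubic convolution kernel; a = -0.5 is the common choice.
        t = abs(t)
        if t < 1:
            return (a + 2) * t**3 - (a + 3) * t**2 + 1
        if t < 2:
            return a * (t**3 - 5 * t**2 + 8 * t - 4)
        return 0.0

    def bicubic(src, x, y, a=-0.5):
        h, w = src.shape[:2]
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        acc, wsum = 0.0, 0.0
        for j in range(y0 - 1, y0 + 3):      # the 4 x 4 neighbourhood
            for i in range(x0 - 1, x0 + 3):
                wt = cubic_kernel(x - i, a) * cubic_kernel(y - j, a)
                acc = acc + wt * src[min(max(j, 0), h - 1),
                                     min(max(i, 0), w - 1)]
                wsum += wt
        # The kernel's negative lobes can overshoot, so clamp the result.
        return np.clip(np.round(acc / wsum), 0, 255).astype(src.dtype)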

Bicubic processing is the most advanced and most accurate interpolation method that Photoshop provides (make sure it's the default setting under Preferences > General) and it's also the system that the vast majority of printer drivers use. It certainly manages to keep more accurate tonal variation and detail than Bilinear processing, but still the more you increase the size of your image - and definitely try to keep enlargements to 150% and below - the more the image softens, especially around those crucial edges. So is there anything else that can be done to improve an enlarged image's quality and so extend the limits within which it's possible to effectively resize?

The most obvious solution for a soft image is to sharpen it. Photoshop's fixed Sharpen and Sharpen More filters are relatively crude and work by increasing the tonal difference between adjacent pixels. The Unsharp Mask filter (USM) is a much more useful option as it allows you to limit the effect to only those pixels that differ from their surroundings by a Threshold that you can set. USM also lets you set a customizable sharpening Amount (don't go beyond 200%) and the width of the band of pixels involved in the sharpening effect, which is useful for high-resolution print work (still don't go beyond a Radius of 2). USM is undoubtedly a useful tool, but it depends on existing pixel value variation, which interpolation by its nature smooths over, so ultimately it's a losing battle.
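The same three controls exist outside Photoshop too. For example, Pillow's unsharp mask filter takes a radius, an amount (as a percentage) and a threshold in tonal levels - a quick sketch with example file names:

    from PIL import Image, ImageFilter

    img = Image.open("enlarged.jpg")
    sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150,
                                               threshold=3))
    sharp.save("enlarged_usm.jpg")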

Any serious quality improvements will have to come at the interpolation stage, and there's one very worthwhile trick to know about. More detail can be kept in an image if you don't apply large bicubic enlargements in one go but rather in small 10% steps. Obviously this so-called Stair Interpolation seriously increases processing time, but the system can be scripted (visit http://www.fredmiranda.com/SI for presupplied actions) and can also take advantage of Photoshop's 16-bit mode to provide greater room for tonal manoeuvre and so greater ultimate accuracy.

Stair Interpolation offers the best native Photoshop enlargement.
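If you'd rather roll the trick yourself than use the presupplied actions, the loop is straightforward - a sketch with Pillow:

    from PIL import Image

    def stair_enlarge(img, target_w, target_h):
        # Enlarge in repeated 10% bicubic steps rather than one big jump.
        fw, fh = float(img.width), float(img.height)
        while fw < target_w or fh < target_h:
            fw = min(fw * 1.1, float(target_w))
            fh = min(fh * 1.1, float(target_h))
            img = img.resize((int(round(fw)), int(round(fh))),
                             resample=Image.BICUBIC)
        return img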

The benefits of Stair Interpolation are welcome, but to improve the quality of bitmap enlargement further it's necessary to look beyond Photoshop. There are a number of image utilities which offer different interpolation methods, such as Qimage (http://www.ddisoftware.com/qimage) which offers no fewer than seven. While some of these seem no better than bicubic processing, the Lanczos method does cut down on aliasing and so manages to keep more edge detail. Of these third-party utilities I particularly like S-Spline (http://www.shortcut.nl), which is dedicated to the single task of enlarging bitmap images and lets you quickly compare the big three interpolation methods against its own. And for high-frequency image types, such as screenshots and cartoons, its S-Spline interpolation method really can make a huge difference.

S-Spline interpolation is good for maintaining high contrast features.
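Incidentally, if you just want to see the Lanczos effect without a separate utility, Pillow exposes it alongside bicubic:

    from PIL import Image

    img = Image.open("photo.jpg")               # hypothetical source
    size = (img.width * 2, img.height * 2)      # a 200% enlargement
    by_cubic = img.resize(size, resample=Image.BICUBIC)
    by_lanczos = img.resize(size, resample=Image.LANCZOS)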

This is the limit as far as interpolation goes - but there are other technologies that promise to go further, even to offer resolution-independence for bitmaps just as there is for vectors. What makes these solutions different is that they first save the bitmap to their own proprietary format. The VFZ file format from Celartem Technology (http://www.celartem.com) for example "converts each RGB color channel from the original image to a 3D vector format. It expresses color distribution information as a distribution component, and color change information as a vector component."

VFZ files promise superior image enlargement based on a vectorised bitmap format.

If you open a sample VFZ file in the proprietary viewer you can view it at six different quality levels. At the lowest level you can see that the effect is rather like a crudely traced vectorized drawing, but at the highest level the image is indistinguishable from the original bitmap. According to Celartem this underlying vectorisation of the image means that it's possible to enlarge images "up to 1200% with no change to original colour shading and no pixilation effect". It sounds promising - especially as VFZ files are lossless and considerably smaller than the original bitmap - but sadly in practice its enlargement results seem soft and no better than bicubic processing.

Ultimately I'm not convinced that vectors are well suited to storing bitmap information, where each pixel can vary arbitrarily from its neighbours. So is that the end of the road? Not quite. There is another advanced file format that promises more than the standard grid-based bitmap. LizardTech (http://www.lizardtech.com) doesn't give a lot away about how its "Genuine Fractals" system works, apart from saying that, based on "proprietary mathematical algorithms", it "replaces the pixels of the image with a new mathematically based file format". Reading between the lines, the fact that fractals are patterns that repeat on smaller and larger scales might give some indication of what's going on.

In practice, the Genuine Fractals system has an immediate advantage in that it fits well into professional imaging workflows by acting as a Photoshop plug-in. Once installed, a new option becomes available from the Save dialog that lets you save any open bitmap (though only single-layered ones) to the highly compressed STN file format. To enlarge the image you simply open your STN file and a dialog appears in which you can set the desired pixel dimensions or an enlargement percentage. Click OK and, after some considerable processing, your enlarged image appears.

Genuine Fractals technology manages to maintain edge detail that bicubic processing loses.

The obvious question is: does it work? And the simple answer is yes - amazingly well. For enlargements of 200% or more, where bicubic interpolation produces a soft mush, the Genuine Fractals image maintains a far greater degree of edge detail and so life in the image - a fact that is often even more apparent in print than onscreen. In particular, whereas low-contrast continuous-tone features like sky, clouds and skin benefit little, stronger features such as eyes and text maintain much more of their distinguishing edges and detail.

So is this the promised holy grail of resolution-independence for bitmaps? Sadly that is going too far. To begin with, the enlargements produced with Genuine Fractals aren't perfect, with some JPEG-style artifacting and ghosting around edges (which also means that other approaches such as Stair and S-Spline interpolation sometimes produce better results, especially for high-frequency images). More to the point, it's crucial to realize that Genuine Fractals can't work magic. If detail wasn't there in the original scan or digital photo, it's not going to appear when you enlarge it - you can't add detail that doesn't exist.

All the resampling strategies that we have looked at are exercises in damage limitation. Some are surprisingly powerful and so excellent additions to your toolkit, but ultimately the best policy is to make sure that you have the right number of pixels from the start - and to avoid resampling altogether.

Tom Arah

May 2002


