We’ve talked before about JPEGs and how well their compression algorithm works on photographs. Today though, I learned something rather interesting: I learned about a technique for using zero quality JPEGs.
When saving a JPEG image, you’re usually given the option to set the “quality” of the compression. The higher the quality, the better the image looks. The lower the quality, the more pixelated the image looks, but the higher the compression. (As an example, when saving a large image on my drive, the “Maximum – 12” quality setting made a 1.83MB file, while the 0 quality file was 132KB. The 0 quality image looked awful, though.) Use your own discretion on how much detail you’re willing to sacrifice for performance. I typically save images at high or medium quality, but low quality is admittedly better for web performance since it creates a smaller file size.
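To make the quality/size trade-off concrete, here’s a quick sketch using the Pillow library in Python (my choice, not something from any particular editor — note that Pillow uses a 1–95 quality scale rather than the 0–12 scale in my image editor, and the generated gradient is just a stand-in for a real photo):

```python
# Sketch: how the JPEG "quality" setting affects file size, using Pillow.
# The gradient image below is a self-contained stand-in for a photograph.
from io import BytesIO
from PIL import Image

def jpeg_size(img, quality):
    """Return the encoded size in bytes of img at the given JPEG quality."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

# Build a simple gradient test image in memory.
img = Image.new("RGB", (480, 360))
img.putdata([(x % 256, y % 256, (x + y) % 256)
             for y in range(360) for x in range(480)])

for q in (95, 50, 5):
    print(f"quality {q:2d}: {jpeg_size(img, q)} bytes")
```

The exact byte counts will vary by image, but the ordering won’t: lower quality, smaller file.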
Since I’m all about quality, you can imagine my shock at seeing 0 quality JPEGs being suggested for better performance. “A zero quality JPEG? That must look horrendous!” I exclaimed, polishing my monocle and huffing like the pseudo-intellectual snob that I am, “Oh how dreadful. Why would anyone in their right minds sacrifice that much quality for the sake of performance?”
Well, the short answer to that question is “They’re not. They’re just being very clever about it.”
As it turns out, a small image at high quality is usually larger in file size than a large image at low quality. Some creative web designers have taken advantage of this, and are serving large, low quality JPEGs that are shrunk down to a smaller size in the browser. Amazingly, it can sometimes be difficult to tell the difference between a smaller high quality image and a scaled down low quality image, even when they’re right next to one another.
480×360 px, 8 quality, progressive scan: 85 kB
1024×768 px displayed at 480×360 px, 0 quality, progressive scan: 69 kB. While there are differences from the high quality image, it’s pretty hard to tell that this is a 0 quality JPEG.
Oh, differences exist, sure. But this technique actually looks good enough for most displays, and if “good enough” is worth the smaller file size to you, then I recommend this technique.
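Here’s a rough sketch of the trick itself, again assuming Pillow (quality 1 stands in for “zero quality” since Pillow’s scale bottoms out there, and a generated texture stands in for a photograph). The browser-side half is simply an img tag whose width and height are half the file’s real dimensions:

```python
# Sketch: a double-size image at rock-bottom quality vs. a display-size
# image at high quality. The textured stand-in image approximates a photo.
from io import BytesIO
from PIL import Image

def jpeg_bytes(img, quality):
    """Encode img as a progressive JPEG and return its size in bytes."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality, progressive=True)
    return buf.tell()

# Self-contained, photo-like texture (lots of detail for JPEG to chew on).
big = Image.new("RGB", (960, 720))
big.putdata([((x * 89) % 256, (y * 71) % 256, (x * y) % 256)
             for y in range(720) for x in range(960)])
small = big.resize((480, 360), Image.LANCZOS)

small_hq = jpeg_bytes(small, 80)  # 480x360 at high quality
big_lq = jpeg_bytes(big, 1)       # 960x720 at lowest quality, shown at 480x360

print(f"small, high quality: {small_hq} bytes")
print(f"big, low quality:    {big_lq} bytes")
```

On a detailed image like this, the oversized low quality file comes out smaller than the display-size high quality one, which is the whole point of the technique.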
Like with everything, this has its limitations. Low quality JPEGs will often pixelate in areas of flat color and around sharp lines, so make sure you use this technique on photographs that don’t have those features. (Flat surfaces and images with text benefit the least from this, and will come out blurry and terrible-looking.) It’s also possible for the larger image to eat slightly more processing power on the client’s end, since the browser has more pixels to push.
480×360 px, 8 quality: 56 kB
1024×768 px displayed at 480×360 px, 0 quality: 43 kB
The compression is really noticeable on the tree bark, the cloudy sky, and the left-hand building, so this would be a bad image to use this technique on.
This raises a few questions: What’s the best size to scale to? Is scaling to 300% of the original size better than 200%? At what point does this scaling fold back on itself, with the low quality files becoming larger than the original image? There’s an excellent series of experiments into these questions here, but I’ve decided I’d like to do my own, if for no better reason than that this whole thing fascinates me to no end. So without further ado…
Credit: www.xkcd.com
The goal of this experiment was to get a ballpark estimate of the optimal dimension increase to produce an acceptable ratio of file compression to quality loss when using zero quality JPEG images. Finding this is simple: just save a whole bunch of zero quality JPEGs and compare their sizes and quality!
I took a couple of photos from my camera and shrunk them down to a baseline width of 480 pixels, saving them at a standard quality of 8. From there, I incrementally increased the dimensions of the images, saving each iteration at 0 quality.
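The batch-saving step can be sketched as a small Pillow loop (quality 85 and 1 stand in for my editor’s “8” and “0” settings, and the generated texture stands in for one of my camera photos):

```python
# Sketch of the experiment loop: build a 480px-wide baseline, then encode
# progressively larger versions at minimum quality and record file sizes.
from io import BytesIO
from PIL import Image

def encoded_size(img, quality):
    """Return the encoded JPEG size of img in bytes."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

# Self-contained stand-in for a camera photo.
photo = Image.new("RGB", (1920, 1440))
photo.putdata([(x % 256, (x * y) % 256, y % 256)
               for y in range(1440) for x in range(1920)])

baseline = photo.resize((480, 360), Image.LANCZOS)
baseline_size = encoded_size(baseline, 85)  # stand-in for "8 quality"

results = {}
for scale in (1.0, 1.5, 2.0, 2.5, 3.0):
    w, h = int(480 * scale), int(360 * scale)
    size = encoded_size(photo.resize((w, h), Image.LANCZOS), 1)
    results[scale] = size
    print(f"{int(scale * 100)}%: {size} bytes "
          f"({(1 - size / baseline_size) * 100:.0f}% reduction)")
```

At a fixed (minimum) quality, file size grows with the dimensions, so somewhere along the way the savings relative to the baseline run out.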
File sizes can be compared directly, but quality is more subjective in nature. I wrote a very small HTML page that simply pulled in all the images for each test and displayed them at a specified width. I gave each image a ranking of ‘Poor’, ‘Fair’, ‘Good’, or ‘Best’ depending on how noticeable the reduction in quality was when placed next to the baseline image. None of the images I tested received a ‘Best’ ranking, but maybe I’m just really picky. For the sake of remaining objective, I’m leaving these rankings off of my results so you can judge the quality for yourself.
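A comparison page like that is only a few lines to generate. Here’s one way to do it from Python (the filenames are hypothetical; the key detail is that every img gets the same display width, regardless of its real dimensions):

```python
# Sketch: generate a side-by-side comparison page for the test images.
# All images are forced to the same display width so quality differences
# are judged at identical on-screen size.
files = ["baseline_480_q8.jpg", "scale150_q0.jpg", "scale200_q0.jpg"]

rows = "\n".join(
    f'<figure><img src="{name}" width="480">'
    f"<figcaption>{name}</figcaption></figure>"
    for name in files
)
html = f"<!doctype html>\n<body>\n{rows}\n</body>"
print(html)
```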
Here are the results of one of the tests:
Original image, 480×360, 8 quality: 50 kB
100% (Original size), 0 quality: 28 kB, 44% size reduction
150%, 0 quality: 32 kB, 36% size reduction
200%, 0 quality: 38 kB, 24% size reduction
250%, 0 quality: 44 kB, 12% size reduction
300%, 0 quality: 50 kB, 0% size reduction
I also tried saving at a baseline width of 240 pixels instead of 480, and saving at 2 quality instead of 0. The smaller image width had little impact on which dimensions created the best compression/quality ratio. Saving at a quality of 2 however made the sweet spot closer to 100% and gave much less compression overall.
The best compression/quality ratio I could find was between a 150% and 250% increase in dimensions. Images in this range retained an acceptable level of quality while shrinking in file size anywhere from 12% to almost 40%. Between 300% and 500%, the images were often the same size as or larger than the originals, so anything higher than 300% is too high.
In conclusion: If you want to use this technique on your website, save your images at about double the dimensions they’ll actually be displayed at. This will retain an acceptable balance between quality loss and increased compression.
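The rule of thumb fits in a one-line helper (the 2× factor is just my ballpark from the experiment above, not a universal constant):

```python
# Helper capturing the rule of thumb: save the source JPEG at roughly
# double its intended display dimensions.
def source_dimensions(display_w, display_h, scale=2.0):
    """Dimensions to save a low quality JPEG at for a given display size."""
    return round(display_w * scale), round(display_h * scale)

print(source_dimensions(480, 360))  # a 480x360 slot -> save at 960x720
```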
The use of 0 quality JPEGs is not something I ever thought I’d be recommending to anyone, but this technique clearly works, giving measurable performance gain with little loss of quality. I encourage you to try this out for yourself, and see if maybe your website could benefit from having a few 0 quality images.