… but You’re Not!
You’ve done everything right in preparation for your photo shoot. Your scene is set nicely. You’ve got the perfect lighting. You’ve correctly set your white balance. Click. And what do you get for your troubles? Purple that looks blue? Red so bright it hurts your eyes? Background sky the wrong color? Frustrating, isn’t it? Well, sometimes no matter what you do, or how good your technique is, it comes down to this:
It’s not you, it’s your stupid camera!
Feel better? Good. To be honest, digital cameras are truly remarkable machines. And you can get a pretty darn good camera for not too much money. But despite all the progress that has been made in the development of digital imaging technology, cameras can’t do some things nearly as well as our own eyes and brains. And one of these things is handling certain colors. Why? Well, for one, digital camera sensors are color blind. Really, they are. They only see in black and white, and rely on a complicated set of filters and processing algorithms (a fancy word for step-by-step procedures) to interpret color. Want to learn all about how a digital camera works? That’s not what this post is about, but if you’re interested, there’s great information here and here.
So what is this post about? Well, it’s about waves, the visible spectrum, purple, infrared filters and most importantly, why any of this is important to your photography.
Let’s get started, then, with a little physics lesson on visible light. Every day we’re surrounded by all kinds of electromagnetic waves. They’re everywhere – when we listen to the radio, reheat leftovers in the microwave oven, change the TV channel with a remote, get a sunburn or visit the dentist or doctor for an X-ray. Take a look below (click to zoom) to see what I’m talking about, and just for fun take a look at the lengths of these waves (wavelengths) compared to everyday objects that you might be familiar with.
Do you see that tiny little “Visible Light” segment? Out of all those different types of waves, these are the only waves that our eyes respond to. Here’s a closer look at that segment:
The area between the two lines (between infrared and ultraviolet) is the visible part of the spectrum. Is there anyone out there who doesn’t appreciate a good rainbow? That’s the visible spectrum.
How about a Pink Floyd classic? Yep, visible spectrum again!
Now, if we take that visible light diagram and bend and stretch it just so (as some really smart folks did about 80 years ago), we get a horseshoe shaped diagram that looks like this:
It’s called the “CIE 1931 color space chromaticity diagram.” No, that’s not important. Here’s what is important:
- Every color along the bold line around the perimeter of the horseshoe can be represented by a single wavelength (those numbers in blue, in nanometers). Simple.
- All the colors contained within the horseshoe area can be produced by various combinations of those single-wavelength colors. Want to try something interesting? Print this diagram (in color, please), and get yourself a pen and a ruler. Now draw a straight line between the 500 mark and the 600 mark. Next draw another straight line between the 480 mark and the 560 mark. See that point where those two lines intersect? That color, whatever you want to call it, can be made by combining either the 500-600 wavelengths or the 480-560 wavelengths. How many straight lines can you draw through that point? Well, that’s how many different ways there are to make the same color. Oh boy, maybe these cameras aren’t so stupid after all!
- There is a diagonal line (not bold) starting near the bottom-left (violet) and heading up and to the right toward the red region. This is called the “line of purples.” No color on this line can be represented by a single wavelength (unlike every other color along the curved edge of the horseshoe). Each of these colors can only be produced by mixing a unique ratio of extreme red (the longest visible wavelengths) and extreme violet (the shortest). Uh oh! Just a little more red or a little more violet makes a big difference in color.
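If you’d rather let a computer hold the ruler, here’s a little Python sketch of both ideas above. The xy coordinates are approximate CIE 1931 chromaticity values read off published tables, and treating purple mixing as a straight line in xy is a simplification (true additive mixing weights by luminance), so treat the exact numbers as illustrative:

```python
# Approximate CIE 1931 xy chromaticity coordinates for a few spectral colors
# (read off published tables; illustrative, not authoritative).
XY = {
    380: (0.1741, 0.0050),  # extreme violet
    480: (0.0913, 0.1327),
    500: (0.0082, 0.5384),
    560: (0.3731, 0.6245),
    600: (0.6270, 0.3725),
    700: (0.7347, 0.2653),  # extreme red
}

def intersect(p1, p2, p3, p4):
    """Intersection point of line p1-p2 with line p3-p4 (plain 2D geometry)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d1x, d1y = x2 - x1, y2 - y1
    d2x, d2y = x4 - x3, y4 - y3
    t = ((x3 - x1) * d2y - (y3 - y1) * d2x) / (d1x * d2y - d1y * d2x)
    return (x1 + t * d1x, y1 + t * d1y)

# The pen-and-ruler exercise: the 500-600 chord crosses the 480-560 chord
# at a single point -- one color, two completely different wavelength mixes.
x, y = intersect(XY[500], XY[600], XY[480], XY[560])
print(round(x, 3), round(y, 3))  # around (0.282, 0.465)

# The line of purples: every point is a blend of extreme red and violet.
def purple(red_fraction):
    (vx, vy), (rx, ry) = XY[380], XY[700]
    return (vx + red_fraction * (rx - vx), vy + red_fraction * (ry - vy))

# A mere 5% change in the red/violet ratio moves the chromaticity a lot:
a, b = purple(0.50), purple(0.55)
print(round(b[0] - a[0], 3))  # x shifts by ~0.028 -- a visibly different purple
```

That last number is the whole “uh oh”: a tiny error in the red/violet balance lands you on a noticeably different purple.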
Now, I don’t know if you read or didn’t read the two links above about how digital cameras work. But in summary, accurate color reproduction comes down to filtering and computing. And believe it or not, the computing is the easy part (at least the “easy to fix with a firmware update” part). So that leaves filtering. On most cameras, light from the subject first passes through an IR (infrared) filter. Then, whatever can make it through an array of red, green and blue filters finds its way to the actual photosensitive part of the sensor. Those filters are basically all that is used to turn what we see with our eyes into a signal that the colorblind sensor can detect, which can then be reprocessed into a color image that hopefully looks something like the original. What could possibly go wrong?
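To picture that filtering step, here’s a toy sketch of an RGGB mosaic (the common Bayer filter layout). Real demosaicing algorithms are far more sophisticated than this block-averaging, so this is just the idea, not any camera’s actual pipeline:

```python
# Toy Bayer-mosaic sketch (illustrative only, not a real camera pipeline).
# In an RGGB 2x2 tile, each photosite records just ONE filtered intensity.
BAYER = [["R", "G"], ["G", "B"]]

def mosaic(image):
    """Keep only the channel each photosite's filter passes."""
    h, w = len(image), len(image[0])
    raw = [[0.0] * w for _ in range(h)]
    for yy in range(h):
        for xx in range(w):
            ch = "RGB".index(BAYER[yy % 2][xx % 2])
            raw[yy][xx] = image[yy][xx][ch]  # one number: "black and white"
    return raw

def demosaic_block(raw, yy, xx):
    """Naive reconstruction from one RGGB tile: average the two green sites."""
    r = raw[yy][xx]
    g = (raw[yy][xx + 1] + raw[yy + 1][xx]) / 2
    b = raw[yy + 1][xx + 1]
    return (r, g, b)

scene = [[(0.8, 0.2, 0.5)] * 4 for _ in range(4)]  # a uniform purple-ish patch
raw = mosaic(scene)
print(demosaic_block(raw, 0, 0))  # recovers (0.8, 0.2, 0.5) for a flat patch
```

Notice that every photosite really does record a single brightness value; all the color comes back from knowing which filter sat on top of it.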
Well, remember the horseshoe-shaped CIE diagram and all the possible ways to create different colors? And remember the line of purples, where we’re counting on precise amounts of red and violet to create a certain color? The camera is trying to do all this with only three different colors of filters. Wow! For the most part, it works quite well. But that IR filter has a big job. It has to block out all those waves that are just a little bit longer than the red waves. We can’t see them, but the sensor on the camera can, so they need to be filtered. The problem is determining just how much filtering is needed. Different camera manufacturers each have their own idea about exactly how these filters should be designed. So in camera “A,” a little infrared light might sneak past the filter, while in camera “B,” visible red might be filtered too much. The result is two completely different images. Another problem is slight differences in the RGB filtering. Any little change from one sensor design to the next can have a huge effect. And that, in summary, is why you’re smarter than your camera!
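Here’s the camera “A” versus camera “B” problem as a toy calculation. The spectrum and cutoff wavelengths below are completely made up for illustration; no real camera behaves exactly like this:

```python
# Toy IR-cutoff sketch (made-up numbers, not real camera specs).
# Suppose a deep-red flower reflects energy at these wavelengths (nm),
# with these relative energies (arbitrary units):
spectrum = {650: 10, 680: 8, 710: 6, 740: 4}

def red_signal(ir_cutoff_nm):
    """Total energy reaching the red photosites, given an IR-cut filter
    that blocks everything longer than ir_cutoff_nm."""
    return sum(e for wl, e in spectrum.items() if wl <= ir_cutoff_nm)

print(red_signal(720))  # "camera A": some near-IR sneaks through -> 24
print(red_signal(660))  # "camera B": aggressive filtering -> 10
```

Same flower, same light, and one hypothetical camera reads well over twice the red signal of the other. Small filter decisions, big color differences.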
Our eyes handle color a little differently. Do you remember learning somewhere along the line about “rods” and “cones” in your eyes? Or do you remember the Seinfeld episode where Kramer says “Jerry, my rods and cones are all screwed up!” after looking at the red Chicken Shack sign a little too long? Well, somewhat like a camera, our eyes have receptors called cones that are responsive to red, green and blue. But, unlike a camera, our brain does a better job at processing the incoming information. Without realizing it, we actually compare information from each color “channel” with the combined response of the other two “channels.” That gives us more color information to work with, namely yellow and magenta. Presence or absence of information in any of these channels allows us (again, without realizing it) to both enhance the colors that do belong and suppress the colors that do not. Now that, folks, is truly remarkable!
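That channel-versus-channel comparison can be sketched roughly like this. It’s a loose caricature of opponent processing, not a real vision model:

```python
# Rough "opponent channel" sketch: compare each channel against the
# combined response of the other two (a caricature, not a vision model).
def opponent(r, g, b):
    red_green     = r - g                # red vs. green
    blue_yellow   = b - (r + g) / 2      # blue vs. "yellow" (red + green)
    green_magenta = g - (r + b) / 2      # green vs. "magenta" (red + blue)
    return (red_green, blue_yellow, green_magenta)

print(opponent(1.0, 1.0, 0.0))  # pure yellow -> (0.0, -1.0, 0.5)
```

For pure yellow, the blue-versus-yellow comparison swings hard negative, which is exactly the extra “this is yellow, not just red plus green” information a three-filter sensor never computes for itself.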
So what does all that mean to you? Well, what it means is that you now have a legitimate excuse as to why, despite your best efforts, you still have some color issues. And, yes, there is something that you can do about it. It’s not an ideal solution, but it’s better than nothing. The answer lies in post-processing. Yes, that’s right, there are certain times when you just have to give up and say: “Fine, I’ll fix it in Photoshop” (or Gimp, or Picnik, or Picasa, etc.). If your software has a “selective color” option, it’s quite easy to do. If not, then try to keep things as simple as possible by shooting your problem colors against neutral backgrounds to make editing easier. Problem solved!
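And if you’d rather script the fix than click through menus, here’s a minimal “selective color” sketch using Python’s standard-library colorsys module. This is the idea behind those tools, not Photoshop’s actual algorithm, and the hue band and shift amount are guesses you’d tune by eye for your own problem color:

```python
import colorsys

# Minimal "selective color" sketch: nudge only pixels whose hue falls in
# a chosen band, leave everything else alone. The band [0.70, 0.85] and
# the 0.05 shift are illustrative guesses, tuned by eye in practice.
def fix_purples(pixel, lo=0.70, hi=0.85, shift=0.05):
    """Shift hues in [lo, hi] (a purple-ish band on the 0-1 hue wheel)
    toward magenta/red; pass every other color through untouched."""
    r, g, b = pixel
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if lo <= h <= hi:
        h = (h + shift) % 1.0
    return colorsys.hls_to_rgb(h, l, s)

too_blue = (0.5, 0.1, 0.8)           # a purple that came out too blue
print(fix_purples(too_blue))          # a warmer, redder purple
print(fix_purples((0.1, 0.8, 0.2)))   # green is outside the band: unchanged
```

Run per pixel over your image (or let your editor’s selective color tool do the same thing with sliders), and those camera-mangled purples come back in line without touching the rest of the frame.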
I rambled on a bit more than I had intended, but I hope that I was able to provide a little helpful insight and advice (or at least make you feel a bit better!).
Until next time … Happy Shooting!