Monthly Archives: December 2009

What’s wrong with using those colored glasses to see 3D?

Anaglyph. Some say it was developed in 1853 by Wilhelm Rollmann in Leipzig, Germany. Hardly cutting-edge technology by today’s standards. The development and use of anaglyph imagery ignores the fact that separating color (a form of data compression) is a bad idea for human vision, given the physical properties of the human eye. The eye has limited color vision capability compared to its sensitivity to luminance, or brightness. The retina contains receptors called rods and cones, with the cones providing the eye’s sensitivity to color. The cones are concentrated primarily in a tiny area of the macula called the fovea centralis, about one one-hundredth (1/100th) of an inch in diameter. Pretty tiny, huh? But wait: there are three different types of color receptors, with response curves weighted toward the primary colors red, green and blue. So the real estate in the eye for any given color is perhaps LESS THAN four one-thousandths (4/1000ths) of an inch, or about the thickness of a human hair, or a speck of dust. So where you might think there were billions of these cones in the eye to see color, you’d be wrong. For the color blue, for example, there are only about 140,000 blue-sensitive cones. Given that a 24-bit image typical for a computer monitor can encode millions of distinct colors, you can see how that might be limiting.
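To make the compression concrete, here is a minimal sketch (a toy example of my own, using numpy and hypothetical image arrays) of how a red/blue anaglyph is built from a stereo pair. Note how much of each eye’s color data is simply discarded:

```python
import numpy as np

def make_anaglyph(left, right):
    """Build a red/blue anaglyph from a stereo pair.

    left, right: H x W x 3 uint8 RGB images.
    The left eye keeps only its red channel and the right eye
    only its blue channel -- the green channel and two-thirds
    of each eye's color data are thrown away.
    """
    out = np.zeros_like(left)
    out[..., 0] = left[..., 0]    # red channel from the left view
    out[..., 2] = right[..., 2]   # blue channel from the right view
    return out

# Toy 2x2 stereo pair: the merged image keeps 2 of the 6 channels.
left = np.full((2, 2, 3), [200, 150, 100], dtype=np.uint8)
right = np.full((2, 2, 3), [50, 120, 220], dtype=np.uint8)
ana = make_anaglyph(left, right)
```

Six channels of color information go in; only two survive, which is why the merged result can never look natural.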

So, here is Wilhelm in the nineteenth century with an idea that using filtered blue color in one eye and filtered red color in the other eye is a good idea for showing a 3D image. And here we are in the twenty-first century STILL USING IT!  It was a bad idea then, and it is a bad idea now.

But there is more. The brain now has to take a limited, compressed blue color signal from one eye and a limited, compressed red color signal from the other eye, merge them into a binocular image, and make sense of these weird, unnatural signals from each eye to somehow reconcile them into a natural color image. Like that’s going to happen? NO, it isn’t going to happen! That’s why the color looks so muted and unnatural. IT IS unnatural! It is IMPOSSIBLE for it to BE natural.

With so much missing information it is not hard to understand that the nuance of color shift as presented from different perspectives in each eye is obliterated. This most definitely has an impact on the perception of 3D and it is no surprise that people have used every trick in the book to emphasize 3Dness as a way to compensate for all that is lacking with anaglyph.

Color, shading, texture, highlights, motion and parallax are all intertwined with regards to the information they convey to us. These properties together are essential for the formation of a natural binocular image that is merged within our brains.

To a lesser extent, polarized glasses also diminish the “realness” of 3D. They have a negative optical effect in addition to being cumbersome, dumb glasses you have to wear.

The solution is no glasses, or glasses that have negligible color loss and minimal distortion. I’m not sure what those glasses would be; perhaps some kind of optical correction for parallel- or cross-view images, or shutter glasses with a super-fast refresh. But the preferred, truest method is autostereoscopic, or glasses-free, 3D.  The WORST is anaglyph.

This is the 21st century, and it is time to demand BETTER technology for multi-dimensional imaging. And it is time to demand BETTER multi-dimensional imaging.



Filed under 3D Photography, 3D Video Monitors

Why use more than 2 perspectives for a 3D image?

Every time I go out with my camera rig I get asked the same question: “Why do you have so many cameras?”

The answer is actually somewhat complicated. A 3D motion picture is created with two cameras, and each camera’s image is presented discretely to each eye by the use of polarized filters or, in the case of home 3D-capable TVs, often shutter glasses.

If you want glasses-free 3D viewing, then the ability to discretely show a single image to each eye is more difficult. And if you can move while looking at the image, then something weird happens. Try this at your next 3D movie! Get up and walk around while watching. The images follow you around because your perspective in space is fixed. It is a very weird sensation.

If you aren’t fixed to a chair and are walking around, then you have a constantly changing perspective in real life. Holograms provide a pure, seamless array of perspectives. A fantastic feature, but as I have mentioned elsewhere on this site, holograms are very limited in terms of the pictures you can take, and they do not have real-world colors or any color accuracy at this point in their development.

A lenticular lens array makes it possible to direct different images to each eye (that’s what the lens on the print overlay does). The smoothness of transition from one perspective to the next is a direct function of the number of cameras you have: more cameras provide a smoother transition from perspective to perspective. As you look at a lenticular and move, you see a change in perspective that compares to what you see in real life. (In my photographs, I take great pains to make sure that is the case, but practically everyone else does not consider this important.) If you are going for an effect, then I suppose it isn’t important. But if you are trying to emulate reality, then it is extremely important.
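As a rough sketch of the idea, assuming the simplest possible one-pixel-strip-per-view mapping (real interlacing must account for lens pitch and print resolution), column interlacing of N camera views might look like this:

```python
import numpy as np

def interlace_views(views):
    """Column-interlace N camera views for a lenticular print.

    views: list of H x W x 3 arrays, ordered left to right.
    Column x of the output comes from view x % N, so each
    lenticule covers one thin strip from every perspective.
    More views -> finer perspective steps as the viewer moves.
    """
    n = len(views)
    out = np.empty_like(views[0])
    for x in range(out.shape[1]):
        out[:, x] = views[x % n][:, x]
    return out

# Four flat-colored toy "views": the output cycles through them.
views = [np.full((2, 8, 3), v, dtype=np.uint8) for v in (0, 60, 120, 180)]
strip = interlace_views(views)
print(strip[0, :, 0])  # column pattern: 0 60 120 180 0 60 120 180
```

With only two views the pattern alternates coarsely and perspective jumps; with many views the steps become nearly seamless.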

Providing the experience of multiple perspectives greatly enhances the realism of a 3D image, which is why I believe the two-perspective approach used at the movies, being more limiting, will eventually go away.

Want to know more? Comment with questions and I’ll be glad to tell you what I have found out and experienced through trial and error.


Filed under 3D Photography

Don’t you dare call lenticular 3D composited shots a gimmick!

Yikes! I posted a recent comment on LinkedIn in response to someone in the lenticular industry who really got his nose out of joint when I suggested that Photoshop image cutouts with displacement, After Effects composites and step-and-shoot camera systems were a gimmick. He got so upset that he was compelled to strike back and comment that some early bird photographs I took with one of my early camera system prototypes “didn’t have very much 3D and the color was not very good”.  Well, there I am, standing defeated by that rhetorical comeback! ;^)

What is interesting is that the bird photographs had precise 3D that exactly matched what the naked eye was able to perceive. Now, maybe he was referring to the first proof they produced, which used a lenticular lens with a wide viewing angle (this greatly diminishes 3D because you see multiple image perspectives in both eyes).  But I rejected that proof (I was surprised they would even suggest such a wide-viewing-angle lens). The company did do a second proof with a suitably narrow viewing-angle lens overlay. It had the correct depth and did indeed match the naked-eye view of the scene.  I wouldn’t be surprised if he felt that neither proof had “very much 3D”. This is because the holy grail for lenticular printing companies is in-your-face depth. It isn’t about recreating reality or producing something natural with nuanced depth as it is perceived in the real world.

Now, I referred to this as a gimmick because, to me, having pictures with elements given artificial depth in order to “be impressive” is what I define as a gimmick.  Indeed, one dictionary definition is “something designed to attract extra attention or interest”, and that is what I meant.

This IN YOUR FACE IS BETTER idea of lenticular is pervasive. My goal, and humble opinion, is that images that reflect what we see in the real world can have more meaning, because our brains can interpret them with the context gained from looking at the real world. When I evaluate one of my photographs I ask myself: “Does this look like what I saw with my naked eyes?” I don’t think about whether or not it has enough 3D effect. I think about whether or not it depicts the scene accurately and naturally.

My feeling is that before we embark on embellishing 3D we should first understand how to use it to produce accurate true to life imagery. Once we have that mastered, then of course as an artistic expression all things are possible just as they are in real photography where different films are used and different Photoshop filters and effects are applied to the image.

It is like learning the acoustic guitar and directly hearing the vibration of the strings and understanding those relationships before taking up the electric guitar with fuzz box effects and stacks of Marshall amplifiers.

So, I don’t have much of an axe to grind with this fellow. When I received my print run from his company, it was not representative of the proof that was provided, and they agreed to take the whole print run back.  They told me I was too picky because I felt the colors were muted and muddy. They told me I should not have expected the prints to look like the proof. Interesting that this fellow commented that “the color was not very good”. That was my feeling exactly, and fortunately I was able to achieve color I found more aesthetically pleasing (but still not perfect) from another company.  These are the facts, and they should not be construed as in any way negative. Indeed, they took the print run back and acknowledged that the print did not look like the proof.  Also a fact: I think that, compared to the work I am doing now, the bird photographs are not good. They were taken with consumer-grade camera optics.  I am selling them for $12.95, and I think that represents their true value compared to what I am producing now.

I have stopped posting to the above-mentioned thread on LinkedIn because I’m not going to change this guy’s mind and he will keep commenting on my posts forever.  I don’t agree with his mindset, and I’m sure that’s fine with him, because he doesn’t agree with mine.


Filed under 3D Photography

Importance of the Ebbinghaus illusion in understanding 3D perception idiosyncrasies

One of the reasons we can easily process a range of interocular spacings is that our perception of depth and size is derived from many cues. There are many examples of objects looking smaller or larger depending upon the context of the image. No better example is found than the Ebbinghaus illusion, named for its discoverer, the German psychologist Hermann Ebbinghaus (1850-1909).  In the example below the two orange circles are exactly the same size, but the one on the left appears smaller. Why? Scientists are still debating the reasons, which have to do with how the brain works and the psychological references the image evokes.

Ebbinghaus Illusion in 2D

When we convert the illusion above to 3D and add depth to the orange circles, the illusion gets even more interesting.  What do you see now? ;^) To see it in 3D, use the cross-view method I described in the previous post: cross your eyes until the tiny red dots align.
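The 2D version of the illusion is easy to reproduce. Here is a small matplotlib sketch (sizes and spacings are my own, chosen for illustration); both central orange circles have exactly the same radius, yet the one inside the large ring reads as smaller:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

def ebbinghaus(ax, cx, r_center, r_ring, d_ring, n=8):
    """Draw one Ebbinghaus figure: a central orange circle
    surrounded by n circles of radius r_ring at distance d_ring."""
    ax.add_patch(plt.Circle((cx, 0), r_center, color="orange"))
    for a in np.linspace(0, 2 * np.pi, n, endpoint=False):
        ax.add_patch(plt.Circle((cx + d_ring * np.cos(a),
                                 d_ring * np.sin(a)),
                                r_ring, color="steelblue"))

fig, ax = plt.subplots()
ebbinghaus(ax, -4.0, r_center=1.0, r_ring=1.6, d_ring=2.8)  # big surround
ebbinghaus(ax,  4.0, r_center=1.0, r_ring=0.4, d_ring=1.5)  # small surround
ax.set_xlim(-8, 8); ax.set_ylim(-5, 5)
ax.set_aspect("equal"); ax.axis("off")
fig.savefig("ebbinghaus.png")
```

Measure the two orange circles on screen with a ruler if you don’t believe your eyes.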

When composing a shot, a whole new mindset is required for 3D. What is emphasized or not emphasized, and what is made to look “more” real, is in many cases not intuitive. One size does not fit all, and EVERYTHING interacts: distance to the object, focal length of the lens, interocular spacing, color and shape, size of the printed image and distance of the eyes to the printed image.

With 3D you see more than you see.

Is it “rocket surgery”? You bet! Once you understand that 3D isn’t just a filter click in Photoshop, you are well on your way to freeing your mind about what might be possible artistically with 3D imagery: engaging the brain at much deeper levels of processing to add heretofore unimagined viewing captivation, engrossment and fascination. Like an onion with layers to peel back, 3D photography reveals more and more the deeper we look into it.


Filed under 3D Photography

Effect of lens spacing on 3D depth (example)

What difference does it make how far apart the camera lenses are, and what does it look like? Well, spacing camera lenses farther apart has the same effect as growing a bigger head. In the link below, you can see for yourself in 3D how the background and foreground stretch as the spacing between camera lenses increases and decreases. The link shows a cross-view stereo pair (about the only way to show 3D on a 2D computer monitor with any quality, and it is tricky to learn how to view).

To see it in 3D, the trick is to cross your eyes until the two red dots merge (you perceive three images; look at the middle one). There are lots of ways to learn how to do this. The easiest is to hold your finger in front of you, look at your fingertip, and move it toward the monitor and back toward your face. You should be able to see the red dots in your peripheral vision as you focus on your fingertip. The red dots will come closer together as you move your finger either toward your nose or toward the monitor. Once the red dots are aligned in your peripheral vision, focus your eyes on the red dot, and your brain should merge the two images into a single 3D image.

As the image animates, you see the effect of your head getting bigger and smaller. If you aren’t used to viewing cross-view images, it might take you a while to master. Don’t sit too close to the monitor, or else you will be crossing your eyes a bit more than you would naturally want. For easiest viewing, sit about twice as far from your monitor as you normally would.
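The geometry behind the “bigger head” effect can be sketched with the standard parallel-camera disparity relation, d = f·B/Z (the function name and toy numbers below are my own):

```python
def disparity_mm(focal_mm, baseline_mm, depth_mm):
    """On-sensor disparity of a point for parallel cameras: d = f * B / Z.

    Doubling the baseline (lens spacing) doubles the disparity,
    which the brain reads as stretched depth -- the "bigger head".
    """
    return focal_mm * baseline_mm / depth_mm

# 50 mm lenses, subject 2 m away
normal = disparity_mm(50, 65, 2000)   # eye-like 65 mm spacing -> 1.625 mm
wide = disparity_mm(50, 130, 2000)    # doubled spacing -> 3.25 mm
```

Depth cues scale linearly with the baseline, which is exactly the stretching you see in the animated pair.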

I’d like your comment on whether you were able to see the above in 3D and what you noticed.


Filed under 3D Photography, 3D Video Monitors

The case for 3D photography (why it is an important art form)

So a photograph is in 3D, or a movie is in 3D… what’s the big deal? Why does 3D matter? Why does 3D keep cropping up, and why all the hoopla all of a sudden? Isn’t 3D just a gimmick to show a pole or something sticking out of the screen? What does it have to do with the movie?

These are all important questions to answer. Certainly 3D has been used as a gimmick to get “oohs” and “aahs” from a movie audience as poles stick out of the screen, or the “tunnel” shot comes into view, or the flying-around scene, or the… well, you get the idea. It seems that Hollywood is a bit stuck on the gimmick shot list. But it isn’t just Hollywood. Lenticular posters to date have been produced to have the most dramatic effects possible, with cutout layers shifted as much as possible to make things JUMP and make art directors happy with the “effect”.

But the case for 3D isn’t about gimmicks or “in your face” drama. Sure, that can be fun, but based on what they have seen thus far, one could argue that 3D isn’t essential to a movie or to a photograph.  And I’d say that was true to the extent that 3D is being used only as a gimmick and not woven into the fabric of the movie or photographic image. Once our thinking changes and we look at 3D as integral to the depiction of the imagery we are presenting, then everything changes.  For movies, it is the same as when sound was introduced, and when color was introduced.

Binocular vision is more than merely depth perception. As light radiates off objects, it goes in all directions. Slight changes in viewpoint make a difference in terms of how things are perceived. Texture, hardness, softness, weight, mass, material properties and fragility are just a few characteristics that binocular vision helps humans to determine. In many cases some or all of these properties are not easily discernible in 2D imagery. Little research has been done regarding the brain’s evaluative processes as it compares the separate viewpoints from each eye. But we do know that this processing occurs and lights up areas of the brain that don’t light up when a person views 2D imagery.

3D imagery engages binocular-vision brain processing, but unfortunately, if the imagery is not accurate it can cause all sorts of undesirable things, including eyestrain. I believe this is part of the reason why 3D comes and goes. Bad 3D is not enjoyable and is the antithesis of compelling. Gimmick 3D is briefly entertaining but not integrated into normal world perception. Like a rollercoaster, it is fun to enjoy once in a while. When care is taken to accurately image a scene in 3D, it is quite another matter altogether. Binocular vision of 3D imagery has the potential to transport your perception of reality from where you are to a place within the 3D imagery. It becomes possible to experience the imagery in the same way you experience the real world. With 3D still photography, a moment in time can be examined in detail and experienced in a way that greatly enhances memories; you can even relive the moment. With 3D motion imagery, the potential exists to experience real (or unreal, as in the case of Avatar) world imagery.

As artists perfect the craft of 3D stereoscopic and autostereoscopic imagery, as they did with sound and color, expect dramatic new experiences that have never been possible before. Rich, immersive experiences that grow our understanding and perceptions. Storytelling that engages us in new ways providing real context and experience.

Trust me, it is exciting to see the transformation. We no longer just watch a movie or look at a photo; we can exist within it, because it can be perceived the way reality is perceived. “Hold onto your hat, Dorothy, I don’t think you are in Kansas anymore.” Unfortunately, you have to see it in person to believe it. This is going to be a big problem with the internet! I have absolutely NO way to show you one of my photographs online. For me, the internet is useless except to tell you about autostereoscopic, life-size, high-dynamic-range 3D imagery.


Filed under 3D Photography

What the heck does 120Hz or 240Hz have to do with 3D or “quality”?

A lot, and not much. ;^) How’s that for an answer? The human eye is a very complex vision system integrated with an even more sophisticated processing system (the brain).  There is a lot going on when we see stuff. One piece of the puzzle is how we perceive motion. Some suggest that human persistence of vision (roughly, how many distinct images we can register each second) is about 70 or 80. For a long time, televisions updated half of the picture 60 times per second, and it was thought that this was fast enough to eliminate flicker. Motion pictures have 24 images per second, but each image is shown three times, so the eye is presented with a “refreshed” image 72 times per second. When coupled with camera shutter speeds of 1/60th of a second or slower, the motion blur of the source image at a 60 or 72 Hz refresh rate was deemed sufficient to provide fluid motion. Nice of “them” to deem that, huh? Hey, why would we ever need a shutter speed faster than a 60th of a second?

Oops: this entertainment thing called sports made people complain that the baseball was blurry. Couldn’t that be fixed? Sure, use a faster shutter speed and the ball comes into sharp focus. But wait a minute. When you watch it, the ball stutters across the screen. That’s because we process motion seamlessly with our eyes and brain. We don’t see things in 1/30th-of-a-second half-resolution flashes or 1/24th-of-a-second images flashed three times.*
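The blur-versus-stutter trade-off is easy to put in back-of-envelope numbers (toy values and helper names are my own):

```python
def blur_mm(speed_m_s, shutter_s):
    """Distance (mm) an object travels while the shutter is open."""
    return speed_m_s * shutter_s * 1000.0

def step_mm(speed_m_s, fps):
    """Distance (mm) the object jumps between successive frames."""
    return speed_m_s / fps * 1000.0

flashes_per_s = 24 * 3            # each film frame flashed 3 times -> 72 Hz

ball = 40.0                       # ~90 mph fastball, in m/s
slow = blur_mm(ball, 1 / 60)      # ~667 mm smear at a 1/60 s shutter
fast = blur_mm(ball, 1 / 1000)    # 40 mm at 1/1000 s: a sharp ball...
jump = step_mm(ball, 24)          # ...that jumps ~1.7 m per film frame
```

A slow shutter hides the frame-to-frame jump inside the blur; a fast shutter exposes it as stutter. You can trade one artifact for the other, but at 24 or 30 frames per second you cannot eliminate both.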

So there was now a need to increase the number of images per second, and the scientists are trying out all kinds of stuff. One process uses a computer to interpolate in-between images to “simulate” a more seamless presentation at faster refresh rates. Some of the 120 Hz televisions work that way. And they sort of work. Unfortunately, when we deviate from reality, everything becomes subjective as to what is good enough. And we are so far removed from reality, with lossy digital video compression, lossy colorspace compression, and interpolated spatial and now temporal image information, that you can argue yourself in a zillion directions.
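The crudest possible interpolation is a plain linear cross-fade between frames. Real 120 Hz sets use motion-compensated interpolation, but this toy sketch (my own function) shows the idea of synthesizing in-between frames, and hints at why interpolated motion can look artificial:

```python
import numpy as np

def blend_frames(a, b, steps):
    """Insert `steps` linearly blended frames between frames a and b.

    This is plain cross-fading, the crudest stand-in for the
    motion-compensated interpolation real TVs perform.
    """
    out = [a]
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        out.append(np.round((1 - t) * a + t * b).astype(a.dtype))
    out.append(b)
    return out

a = np.zeros((2, 2), dtype=np.uint8)
b = np.full((2, 2), 90, dtype=np.uint8)
frames = blend_frames(a, b, 2)    # 24 -> 72 fps needs 2 extras per gap
print([int(f[0, 0]) for f in frames])  # [0, 30, 60, 90]
```

Cross-fading makes brightness ramp smoothly but leaves moving edges ghosted, which is why the better sets estimate motion vectors instead.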

Now we add 3D to the mix. With shutter glasses, this cuts the refresh rate in half for each eye. So the 120 Hz that was so critical for sports is back to 60 Hz per eye. And the baseball is blurry again, or it stutters if a fast shutter speed is used. And what are the manufacturers touting as super-cool 3D? You guessed it: sports!

What we have now are inconsistent continuously variable levels of quality coming at us over fibre optic or satellite or cable. And now we are going to add 3D? The problem is that there is no way that the 3D will match the fluidity and pleasantness of 2D on the same television. That is unless they cripple the 2D performance of the TV so that it doesn’t exceed the 3D performance. Sure, like that’s going to happen.

Consumers are going to get very confused and it is one step forward for 3D and two steps backwards. And remember I said at the start of this that the perception of motion is only one piece of the puzzle? There are many other issues.

However, I am not pessimistic. I think the future is pretty good for 3D because a lot of people are working very hard to improve the technology. Many things will look absolutely fantastic on 3D televisions and at motion pictures. But many things will also look like total crap. We will just have to wade through the mess for a few years until things mature and incrementally improve.

As to 120Hz: in general it is much better than, say, 60Hz, and 240Hz has the potential to be better than 120Hz. But there is a lot of funky marketing going on and, as I said before, it is only one piece of the total puzzle.

*It is not known what the limit is to human image processing, because it depends on things like afterimage retention and the variability of signal transmission from the cones in the retina to the brain. As it turns out, they don’t all work at exactly the same speed. Our vision system, and indeed the whole human body, is an analog processing system that works differently from an all-digital processing system, where there is always either an on or an off condition. Case in point? A military test showed that a pilot was able to accurately identify the type of aircraft flashed on a screen for 1/220th of a second (source


Filed under 3D Video Monitors