
I am presenting a paper at SPIE on January 25, 2011, at 5:30 PM (Paper 7863-49).


SPIE (the International Society for Optical Engineering; see http://spie.org/x16218.xml) is holding a conference on 3D imaging from Jan. 23–27 in San Francisco, CA. My paper and presentation, “Human perception considerations for 3D content creation,” is about the problem of perception conflicts as they relate to 3D imagery and what to do about them.

I first started thinking about this when I saw an old lenticular photograph of Queen Elizabeth. The photograph could be viewed with stereopsis, but the Queen looked like she was dead. Watching the movie Beowulf, while not in 3D, also gave me the creeps because the characters had a dead aspect to them. I noticed some 3D lenticular photographs that presented people with a doll-like quality. I then started to notice things in 3D movies that didn’t seem right. When details disappeared into blackness or got blown out to white, I felt uneasy looking at that part of the 3D presentation.

Indeed, every time something was presented in 3D that was atypical or impossible to see in the real world, I could detect a feeling of conflict at some level in my subconscious, and I began to develop a sensitivity to recognizing when it was happening.

All of these observations got me thinking about the various mechanisms that we use to see and interpret depth, space and texture. Certainly vergence is the primary mechanism, but as I became more aware of supporting cues like accommodation, motion, luminance dynamic range, binocular rivalry, field of view and so on, I came to a realization. I realized that when the non-vergence depth cues weren’t complementary, the elements or perceptions in conflict had to be suppressed to continue viewing without some sort of physical effect occurring (typically unpleasant, such as headache or nausea).

My paper is a start to investigating the importance of supporting perception cues as they relate to stereovision.

*Vergence is the simultaneous movement of both eyes in opposite directions to obtain fixation and the ability to see depth.

*Accommodation is the automatic adjustment in the focal length of the lens of the eye to permit retinal focus of images of objects at varying distances. It is achieved through the action of the ciliary muscles that change the shape of the lens of the eye.
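To make the vergence-accommodation relationship concrete, here is a small illustrative sketch of my own (in Python; it is not from the paper). It assumes a typical interpupillary distance of about 63 mm and shows how, on a stereo display, vergence can be driven to one distance while accommodation stays locked to the screen — the kind of cue conflict discussed above.

```python
import math

IPD_M = 0.063  # assumed interpupillary distance (~63 mm), a typical adult value


def vergence_angle_deg(distance_m):
    """Total convergence angle (in degrees) when both eyes fixate a point
    straight ahead at distance_m meters."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))


def accommodation_diopters(distance_m):
    """Accommodation demand (in diopters) to focus at distance_m meters."""
    return 1.0 / distance_m


# In natural viewing the two values track the same distance and stay coupled.
# On a stereo display the eyes focus on the screen, so accommodation stays
# fixed while vergence follows the on-screen disparity -- a mismatch that can
# produce the discomfort described above.
screen_m = 2.0    # hypothetical distance from viewer to the screen
virtual_m = 0.5   # hypothetical distance of an object "popping out" of the screen

print("vergence demand:      %.2f degrees" % vergence_angle_deg(virtual_m))
print("accommodation demand: %.2f diopters" % accommodation_diopters(screen_m))
```

With these numbers, a screen at 2 m asks for 0.5 diopters of focus, while an object rendered at 0.5 m asks for roughly 7.2 degrees of convergence; in natural viewing, an object at 0.5 m would also demand 2 diopters of focus, so the cues would agree.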



3D Movies & 3D Photos Have Barely Scratched the Surface of Their Potential to Show Us Amazing Things


I believe that very few people fully appreciate all that seeing a fused image from two eyes (stereovision) makes possible. A fused image provides much more than merely depth cues.

For example, today was a sunny day and I took the family to a hill for sledding fun. As I looked at the snow I became mesmerized with all of the information I could glean looking at it. The specular highlights reflecting off of the snow provided amazing information about the density and texture of the snow. Without touching the snow I could see that it was of the right consistency to make a perfect snowball.  It wasn’t powdery. It wasn’t hard. It had a granular look that told me it could be easily manipulated. In some areas, the snow had a hard covering which was also evident from the way light reflected off of the surface.

Upon closer examination, holding one hand in front of my left eye and then switching to cover the right eye, I could see that many of the pinpoints of light reflecting off of the snow that were visible in my left eye were completely invisible in my right eye, and vice versa. This is an example of binocular disparity, and possibly binocular rivalry, since the points of light were mutually exclusive to each eye. But wait: as I looked at the snow and the points of light, I didn’t notice anything other than that it looked three dimensional and “normal”. But how could it look normal with such gross disparity between the views?

Evidently, my brain was doing what it does all of the time: fusing the views together into a single image that makes it possible for me to see all of the subtleties of the snow.

I suspected that the sparkle color disparity in each eye also told me something about the purity of the snow. So I looked at areas of snow that were untouched and areas that had been splashed by the road and road salt. Telling the difference was trivial. I could spot the cleaner snow in an instant even though both examples were white and would look no different in a regular photograph.

My analysis continued, and I discovered I could see how the top of the snow was at the melting point in some areas and clearly not at the melting point in others. This provided a clue as to the depth of the snow, and upon investigation that was definitely the case. I could even tell where snow had fallen from a tree and packed the snow grains together in a pattern different from the rest of the snow. I could see how the snow had been disturbed, and I was even able to estimate from which branch the snow had fallen based upon the reflection of the snow’s surface. The more I looked, the more information I discovered.

I wonder if the sparkles of light were being interpreted in my brain much as satellites that image dust in space can determine the mineral and gas components of the dust by evaluating the wavelengths of light. One eye was receiving one set of the sun’s reflected data and my brain was comparing it to the different reflected data from the other eye. That had to explain it, because when I blocked one of my eyes the snow detail seemed to diminish and take on a more generic look.

Also, as I began to understand what I was able to see, I started to “see” more. And this, to me, is an amazing property of stereovision: the more you look, the more you see, because you can compare the perspectives, and the subtleties are very telling.

As we moved from the sledding area to around the fire, I looked down at the grass and was surprised to instantly detect a tiny baby grasshopper resting motionless on a blade of grass of identical color to the grasshopper. It should have been invisible, but my stereovision detected a bump on the blade of grass that didn’t look quite right. It was my depth perception that made the grasshopper visible. Or was it? What I initially noticed was not the grasshopper itself but that the blade of grass it was on moved in the wind differently than the other blades of grass. It was that disparity, and how it was exaggerated by stereovision, that really gave me the clue. A combination of visual perceptive components made the grasshopper visible.

All of the above convinces me that seeing with two eyes, as a single fused image, may be more complex than most of us imagine. The notion that the only important component is parallax might be misguided. I say this partly in reference to the many companies offering 2D-to-3D conversion services. All they really convert is a parallax shift of single-perspective information, with a morph applied at the transition points. When so much information is left out, I wonder if it will dull our senses. I wonder if we can get used to not seeing the way we normally see. And I wonder if this is a good thing.

I’m guessing that it isn’t a good thing and I am continuing the quest to capture in a photograph all of the information that is provided with multiple perspectives.
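As a rough illustration of what that parallax-shift style of conversion amounts to mechanically, here is a minimal sketch of my own (not any company’s actual pipeline): it synthesizes a second view from a single image and an assumed depth map by shifting pixels horizontally and crudely filling the holes the shift leaves behind. The function name, depth map, and shift amount are all assumptions made for illustration.

```python
import numpy as np


def parallax_shift_view(image, depth, max_shift_px=8):
    """Synthesize one eye's view from a single image plus an estimated depth
    map (values in [0, 1]; 0 = far, 1 = near) by shifting pixels horizontally
    in proportion to depth. Disocclusion holes are filled with the nearest
    filled pixel to the left -- a crude stand-in for the 'morph' applied at
    transition points. None of the real-world cues discussed above (per-eye
    specular highlights, binocular rivalry, etc.) survive this process."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    shifts = np.rint(depth * max_shift_px).astype(int)

    for y in range(h):
        for x in range(w):
            nx = x + shifts[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
        # Fill any holes left by the shift with the last filled pixel on the row.
        last = image[y, 0]
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            else:
                out[y, x] = last
    return out


# Tiny example with made-up data: a 4x6 grayscale image and a depth map where
# the right half is "nearer" and therefore gets shifted more.
image = np.arange(24, dtype=float).reshape(4, 6)
depth = np.zeros((4, 6))
depth[:, 3:] = 1.0
right_eye = parallax_shift_view(image, depth, max_shift_px=2)
print(right_eye)
```

Even this toy example shows the limitation: the synthesized view contains no information that was not already in the single source image.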

