Tag Archives: 3D motion pictures

Why Stereoscopic 3D Fails

Why doesn’t the world embrace the illusion of S3D?

It is perhaps the most compelling visual effect that exists for cinema. We have many examples of illusions that have become giant commercial successes. What is it about S3D that keeps holding it back and generating considerable negative press and reviews?

In my opinion there isn’t a single simple explanation. That is probably why it fails to varying degrees in a world of elevator pitches and split-second decision making.

Two views, while sufficient to create a compelling illusion, do not fully satisfy the confusing foray S3D makes into the blur between referential imagery and what I call experiential imagery. We are used to seeing multiple perspectives and to being able to converge our eyes and focus on a specific point in space (unless you are crossing your eyes).

We are used to having multiple confirming points of reference substantiating what we are looking at to be what we think we are looking at.

Some of the many simple explanations are:

–when the reason for looking at something isn’t compelling, it is going to fail to get attention

–many people do not have normal stereovision

–the ability to suppress perception conflicts is not uniform across the population

A more complex explanation:

–requires education in the field of neuroscience.  The following questions must be addressed:
–How does the brain fuse the input from two eyes into a singular image that depicts space?
–Do we perceive the space between things or do we truly experience it with our vision system?

Many scientists argue that our entire visual system is an illusion that the brain creates, and doesn’t represent reality in the way we think that it does.

The mistake of settling on a single simple explanation for success or failure is perhaps the reason S3D comes and goes. The saying “if you build it, they will come” does not always hold true.

So, why choose a career in 3D?

Given the above, why would I choose a career creating imagery that depicts the space between things? Because it is possible to create compelling images to view. It is possible to suppress perception conflicts. Many people DO have normal stereovision. Drawing on the science of the visual system, it is possible to take advantage of the illusions of the brain.

But the main reason I go forward: art.

As a friend of mine is fond of saying: The Earth without art is simply “Eh”.



Filed under 1

3D – Present A Different Image To Each Eye – Simple, Right? Wrong!

Those of us with “normal” vision see the world with our two eyes in 3D all of the time. We see the space between things and perceive distance, size, texture, etc.

It’s different when you go to a 3D movie or look at one of my autostereoscopic photographs, because you are looking at a flat surface and perceive 3D by way of an optical illusion. There are many subtle differences (and many not so subtle) between normal “seeing” and watching a 3D movie that are important to understand and consider. The biggest difference is that in normal viewing the 3D is “real”. You see an object in front of another object and you perceive the distance. When you reach out to touch the object, that perception is verified. A 3D movie, on the other hand, is created with an optical illusion. It is made possible because our eye/brain system has an amazing capability to decouple focus from convergence and to see and perceive the illusion as if it were real.

Some of you are asking, “What does decouple focus from convergence mean?”

For normal viewing our eyes focus and converge on the same point in space (the thing we are looking at). Just like separate camera lenses, each eye focuses on an object independently. Then our brain processes those two retinal images into a single image with depth. Because the focusing and the merging are done separately, we can perform a trick whereby we fool the brain into processing retinal images that converge at a different point from focus. The only things the brain cares about are the alignment, size and similarity of the images on each retina. It doesn’t matter what the focusing distance is. Our brain just totally disregards this disparity… or does it? Since most people seem to be able to perceive 3D when looking at 3D movies, we just make that assumption. My guess is that our brain does register focus distance, but that we suppress the conflict that happens with a 3D movie in the same way we suppress other perception conflicts (think of flying in an airplane, where our inner ear conflicts with what we see). Perhaps some of us aren’t able to suppress this conflict as easily and manifest some sort of discomfort with the experience, like getting a headache or feeling nausea.
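That decoupling can be made concrete with a little geometry. Here is a minimal sketch (my own illustration, assuming a typical adult interpupillary distance of about 65 mm and an arbitrary 10 m screen distance) of how much the vergence angle changes while the focus distance stays fixed at the screen:

```python
import math

def vergence_angle_deg(eye_separation_m, distance_m):
    # Angle between the two lines of sight when both eyes fixate a
    # point at the given distance (simple thin-ray model).
    return math.degrees(2 * math.atan(eye_separation_m / (2 * distance_m)))

IPD = 0.065  # assumed typical adult interpupillary distance (~65 mm)

# In the theater the eyes stay focused on the screen (say 10 m away)
# while converging on a virtual object that appears much closer (say 2 m).
screen_vergence = vergence_angle_deg(IPD, 10.0)   # roughly 0.37 degrees
virtual_vergence = vergence_angle_deg(IPD, 2.0)   # roughly 1.86 degrees

# The difference is vergence demand with no matching change in focus:
print(f"vergence conflict: {virtual_vergence - screen_vergence:.2f} degrees")
```

The numbers themselves are small, which perhaps makes it all the more remarkable that the brain both detects and (usually) tolerates the mismatch.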

However, it is generally thought of as a “given” that this isn’t a big deal and that perception conflicts occur in nature and the brain just “handles it”. I think that is probably true – but I’m not a scientist or doctor and it would be nice to read that my opinion has some basis in true science.  Perhaps it does and someone will comment?

You still might be wondering how a 3D movie decouples focus from convergence. The fact is that it has to do that. The motion picture screen has to be the point of focus at all times. That is the source of the reflected light and the focus point of the projector in the back of the theater. The left eye and right eye images will be offset from each other based upon whether the objects depicted are in front of the screen or going into the screen. The eyes converge or diverge to align the objects on screen but the point of focus is always the screen surface. At this point, I just have to jump up and down and say THAT IS PRETTY AMAZING!!!  Our brain is really incredible and adaptable in a way that we don’t even have to think about.
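The offsets can be worked out with similar triangles. A minimal sketch (my own, assuming a viewer eye separation of about 65 mm; not taken from any production toolchain) of the on-screen offset needed to place a point at a chosen perceived distance:

```python
def screen_parallax_m(perceived_dist_m, screen_dist_m, ipd_m=0.065):
    # On-screen horizontal offset between the left- and right-eye images
    # that places a point at perceived_dist_m from the viewer.
    # Positive = behind the screen, negative = in front, zero = on it.
    # Derived from similar triangles; ipd_m ~65 mm is an assumption.
    return ipd_m * (1.0 - screen_dist_m / perceived_dist_m)

D = 10.0  # assumed viewer-to-screen distance in metres
print(screen_parallax_m(5.0, D))           # negative: object in front of the screen
print(screen_parallax_m(10.0, D))          # zero: object on the screen plane
print(screen_parallax_m(float("inf"), D))  # the limit for a point at infinity
```

Note the limit: a point at infinity needs a positive offset equal to the viewer’s own eye separation; anything larger would force the eyes to diverge, which never happens in natural viewing.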

Stay tuned for more… coming soon ;^)


Filed under 3D, 3D Motion Picture, S3D, stereovision

News? “Science Proves 3-D Movies Hurt Your Brain”

I came across a link (“3D Hurts Your Brain”) to a “news” article titled “Science Proves 3-D Movies Hurt Your Brain” and am compelled to comment. How they go from the UC Berkeley study, which says 3D movies CAN cause eyestrain along with headaches, to a headline that shouts 3D MOVIES HURT YOUR BRAIN is nutty journalism. It is as if “3D movies” were all one thing. That just isn’t the case at all.

If I told you to hold a pencil six inches away from your nose and stare at the eraser for an hour — guess what? That could cause eyestrain. And if a 3D movie has copious amounts of negative parallax (stuff coming out of the screen) and your eyes are trying to focus on the screen plane but converge much closer, then YES, it causes many people to experience eyestrain. If a scene has a lot of camera shake for those crash-and-burn moments, your binocular vision will sense motion but your inner ear will tell the brain you are sitting still. Conflicts of this nature create motion sickness. It is no different from spinning on a merry-go-round, where your inner ear receives information that conflicts with your other perceptions as you spin around.

It would have been more productive for the article to state that “poorly implemented 3D camera work can cause eyestrain and motion sickness in the cinema – especially if you sit close to the screen”. Of course, that wouldn’t be a sensational headline. Accurate perhaps, but not sensational.

I think if any alarm bells should be going off, it shouldn’t be about movies. It should be about video games, where kids spend hours and hours in front of a computer monitor – soon to be a 3D monitor – where perception learning and development can be affected. Again, use the stare-at-the-pencil-eraser example. Any atypical eye focus and direction for prolonged periods can’t be good. It is a type of vision therapy which is different from normal “view the world” seeing. Stare at a pencil eraser for hours at a time, day after day, and it doesn’t take a brain surgeon to figure out that you might create problems looking at things in the distance. You are “teaching” your eye/brain to constantly focus on a pencil eraser 6″ from your nose, creating a preference for that type of viewing. The eye/brain connection in this case would merely be trying to adapt to a vision requirement.

In my opinion, 3D content will continue to improve and people will come to understand there is a difference between quality 3D and poor 3D. Avatar was certainly much better technically than any other 3D movie to date. Can it be better? Yes! Could it be worse? Yes! There are many new 3D technologies currently in the lab that will be amazing when they eventually come to market in the next five to ten years. Unfortunately, right now there is a mad rush to capitalize on Avatar’s success, and all sorts of companies are popping up claiming to do 2D-to-3D movie conversions. That means there is going to be a lot of 3D crap coming to market.

Buyer beware!

3D is art and science, not a commodity or ingredient to be added on demand.


Filed under 1

10 Rules For Good Stereo

This information originates from:
My comments about it are at the end and relate to the text that was highlighted in red.


Bad Stereo aka Brain Shear
Below is James Cameron’s viewpoint on good stereo, as notated by Jon Landau. Many thanks to Chuck Comisky from Lightstorm Entertainment for bringing us this information.

Brain Shear: “the brain’s inability to reconcile the images received by the left and right eyes into a coherent stereo image, which causes it to send corrective messages to the eye muscles, which try to compensate but can’t fix the problems baked into the image on the screen, creating an uncomfortable feedback loop and physical fatigue of eye muscles, which causes the eye muscles to scream at the brain to f— off, at which point the brain decides to fuse the image the hard way, internally, which may take several seconds or not be possible at all — all of which leads to headache and sometimes nausea.”

People will not pay extra for this.

To prevent brain shear you should follow the New Rules of Stereo, also known as…


1) THERE IS NO SCREEN. Whenever somebody starts talking about stuff coming “off the screen”, ignore them. They are charlatans. The brain does not think there’s a screen there at all. It is fooled into thinking there is a window there — a window looking through into an alternate reality. In fact, the brain is barely aware of the boundaries of that window, or of how far away that window is, which is why objects which break the frame edges may be shot at distances closer than the actual screen plane — which classical stereography texts will tell you won’t work. Not only does it work, it is ESSENTIAL to doing good narrative 3D that this old rule be broken as frequently as possible. The exception to the new rule is when doing an “eye-poker” gag. If you’re bringing something very close to the audience’s noses as a featured visual flourish, that object (or the nearer part of it) should not break frame.

2) Stereo is very subjective. No two people process it exactly the same. Dr. Jim of course has the reference eyes, also known as the Calibration Eyes. But it’s important to get a group consensus. We need to please the majority of eyes out there amongst the Great Unwashed.

3) Analyzing stereospace on freeze frames can be misleading. You can work this way, but the final judgment needs to be done with the shots flowing, ideally in the actual cut. Generally they look worse stopped than moving, because the eye gets depth cues from motion as well as parallax. However, excessive strobing caused by the 24P display rate may actually worsen the comfort factor in some shots.

4) Convergence CANNOT fix stereo-space problems. This is critical to remember. Correct convergence does two things and ONLY two things: it allows the eye to fuse very quickly (ideally instantaneously) when cutting from one shot to another. And it can be used to reduce ghosting caused by bleed through of the glasses on high-contrast subjects in the background depth planes. The eye will fuse a given object in frame in direct proportion to how closely converged it is — more converged, faster fusion. You can only converge to one image plane at a time — make sure it is the place the audience (or the majority of the audience) is looking. If it’s Tom Cruise smiling, you know with 99% certainty where they’re looking. If it’s a wide shot with a lot of characters on different depth-planes doing interesting things, your prediction rate goes down.

5) Convergence is almost always set on the subject of greatest interest, and follows the operating paradigm for focus — the eyes of the actor talking. If focus is racked during the shot to another subject, then convergence should rack. An exception to the rule of following focus exactly is a shot with a strongly spread foreground object which is NOT the center of interest (such as in an OTS), in which case a convergence-split may be used (easing the convergence forward slightly, to soften the effect). This should be combined with control of interocular to yield a pleasing result. Convergence splits are limited by high contrast edges at the plane of interest, which may cause ghosting in passive viewing systems.

6) Interocular distance varies in direct proportion to subject distance from the lens: The closer the subject, the smaller the interocular. The farther the larger. A shot of the Grand Canyon from half a mile away may have a 5′ interocular. A shot of a bug from a few inches away may have a 1/4″ interocular. Interocular tolerance is subjective, but there is a constant value of background split which cannot be exceeded.

7) Interocular and convergence should both vary dynamically throughout moving shots.

8) In a composite, the foreground and background may want to have different interoculars. For example, in an OTS, the stereo-space between the two foreground characters may be compressed, and the stereospace in the background not. Conversely, in a problematic greenscreen comp where the interocular was baked in too wide, the background may be brought closer to some extent by shifting one eye horizontally relative to the other. These fixes only work in shots with an empty mid-ground between the foreground elements and the nearest objects in the background. This technique can be used or abused.

9) When stereo looks bad to the eye (visual cortex) it is important to eliminate the possible problems sequentially:

– Synch — the number one killer of young eyeballs.

– Reverse-Stereo — this will look equally egregious. Some shots may actually appear to almost work as stereo, but foreground objects will look “cut out”, as if you are looking through a window. Turning the glasses upside down is the test. If it improves, it’s reverse stereo.
NOTE: when a shot is FLOPPED editorially, the L and R eyes must be reversed, or you’ll get reverse stereo.

– Zoom Mismatch (technically it’s focal-length mismatch) — characterized by a radial interference pattern when L-R images are viewed overlaid. This can be a vexing source of brain shear.

– Vertical Alignment — the eye can tolerate a lot of horizontal alignment mismatch (this is equivalent to incorrect convergence) but very little vertical misalignment.

– Color or Density Mismatch. The brain is more sensitive to density mismatch than color, but both should be matched.
NOTE: with linear polarization, there will always be a slight magenta/cyan shift between the eyes. This should NOT be corrected in the color timing of the master, because some systems use circular polarization, which doesn’t have this shift.

– Render Errors or element drop-outs between eyes — some actual thing, object, shadow or lighting artifact is missing from one eye.

– Specular Highlights — because the angle of reflection is different for glossy or mirror surfaces as viewed from left or right eyes, highlights may exist in one eye but not the other.

– Lens Flare, matte box shadows — these may strike one lens, not the other.

– Image Warping — this can happen at the edges of frame with certain lenses, and can happen with warped beamsplitters.

– Movement or vibration which is different in L-R. This shows up in some camera systems (not ours). It takes a lot of jiggle between eyes to become apparent.

ONLY when all these possible sources of brain-shear have been eliminated, should inter-ocular be re-examined.

10) Some shots just can’t be fixed. If they are photographic shots with the interocular baked in, they must be re-done or they must be left in the film as non-stereo shots (L-L). If they are CG shots, the interocular can be reduced to a very low value, to give a sense of some stereospace, even though it is inconsistent with the rest of the sequence — in the dramatic flow it will work.

My Comments:

First, let’s deal with the words I highlighted in RED. The comment “Stereo is subjective” I believe only applies to imagery unlike anything we experience in real life. We don’t see giant heads, and we don’t instantly see things from different perspectives. When things are artificial in size, and imagery is presented that would never be experienced in real life, then I totally agree that the extent to which it is effective is subjective.

Convergence, or the crossing of the eyes to triangulate on a represented point in space, is achieved by mimicking the movement of the eyes with the cameras, sometimes referred to as toe-in. There is a lot of disagreement about its use, and my take is that we don’t understand all of the ramifications yet. A big issue is that as we triangulate our eyes on objects coming towards us, the background doubles and blurs. Our brains block this out and we don’t even notice. However, when you do the same thing in a motion picture it looks weird and unnatural, because you are not in control of the eye triangulation – the cameras are. So whenever you look at the background, the illusion falls apart. Therefore, the background is shot so that it does not double and go out of focus. This doesn’t exactly simulate a real-life visual experience. Here I think the comment about stereo being subjective couldn’t be more true.

Interocular distance is the spacing between the cameras. In real life, it is the distance between your eyes. We don’t think about it or notice it normally. The brain is designed to accept varying interocular distance because it happens in real life. When we are born our eyes are much closer together than they are when we are adults.  Did you ever return back to your grammar school and wonder why the rooms looked so small? And you remembered them being big? Well, when you looked at the room as a child the spacing between your eyes was closer and the room did in fact look bigger as a result. We easily adapt to this as a result.
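Rule 6’s “constant value of background split which cannot be exceeded” can be put in rough numbers. A back-of-the-envelope sketch (my own, assuming a typical ~65 mm viewer eye separation): the positive split of the farthest background on the screen must stay below the viewer’s own eye separation, or the eyes would be forced to diverge. Expressed in pixels, that budget depends on the physical screen width:

```python
def max_background_split_px(screen_width_m, image_width_px, viewer_ipd_m=0.065):
    # Largest comfortable positive parallax, in pixels, before a distant
    # background point would force the viewer's eyes to diverge.
    # viewer_ipd_m ~65 mm is an assumed typical adult value.
    return viewer_ipd_m / screen_width_m * image_width_px

print(max_background_split_px(10.0, 2048))  # 10 m cinema screen: only ~13 px
print(max_background_split_px(1.0, 1920))   # 1 m living-room TV: ~125 px
```

This is also why a stereo pair mastered for a small screen can break on a giant cinema screen: the same pixel split becomes a much larger physical split.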

OTS = “over the shoulder” shot, where the camera looks past a foreground character’s head and shoulder toward the subject; the foreground element typically sits in negative parallax, in front of the screen.

Synch = time displacement where the image in the left eye doesn’t match the same moment in time as the image seen in the right eye. This is the equivalent of 3D heroin. It really messes you up.

Focal length mismatch is where the apparent distance is foreshortened in one eye as compared to the other eye. This is impossible in real life and therefore unpleasant to experience because your brain does not know how to process the imagery.

Vertical alignment can happen both in the production of the movie and at the theater if the theater is using a dual lens projection system. Indeed, I experienced it at an IMAX theater and it gave me eye strain. Your eyes are aligned vertically and they move together as you move your head. It is rare that one eye jumps up onto your eyebrow and changes alignment as compared to the other eye.

Color should be matched? Hmm. Someone should tell the anaglyph fans. The brain can process different colors in each eye but I think logically it makes sense to match the colors presented to each eye since that is what occurs in real life more or less. There ARE slight color shifts due to the interocular distance. Again, nobody seems to have fully studied this. I have been reading about it with regards to plenoptic cameras but the info is pretty sparse.

In general, I am impressed that these items are being discussed by Mr. Cameron and are being considered as part and parcel of his movie Avatar.  But only 10 Rules? I think not. In my opinion this doesn’t even scratch the surface. We have much to learn.


Filed under 3D Photography