This information originates from:
My comments are at the end and refer to the words I originally highlighted in red.
Bad Stereo aka Brain Shear
Below is James Cameron’s viewpoint on good stereo, as noted down by Jon Landau. Many thanks to Chuck Comisky from Lightstorm Entertainment for bringing us this information.
Brain Shear: “the brain’s inability to reconcile the images received by the left and right eyes into a coherent stereo image, which causes it to send corrective messages to the eye muscles, which try to compensate but can’t fix the problems baked into the image on the screen, creating an uncomfortable feedback loop and physical fatigue of eye muscles, which causes the eye muscles to scream at the brain to f— off, at which point the brain decides to fuse the image the hard way, internally, which may take several seconds or not be possible at all — all of which leads to headache and sometimes nausea.”
People will not pay extra for this.
To prevent brain shear you should follow the New Rules of Stereo, also known as…
10 RULES FOR GOOD STEREO
1) THERE IS NO SCREEN. Whenever somebody starts talking about stuff coming “off the screen”, ignore them. They are charlatans. The brain does not think there’s a screen there at all. It is fooled into thinking there is a window there — a window looking through into an alternate reality. In fact, the brain is barely aware of the boundaries of that window, or of how far away that window is, which is why objects which break the frame edges may be shot at distances closer than the actual screen plane — which classical stereography texts will tell you won’t work. Not only does it work, it is ESSENTIAL to doing good narrative 3D that this old rule be broken as frequently as possible. The exception to the new rule is when doing an “eye-poker” gag. If you’re bringing something very close to the audience’s noses as a featured visual flourish, that object (or the nearer part of it) should not break frame.
2) Stereo is very subjective. No two people process it exactly the same. Dr. Jim of course has the reference eyes, also known as the Calibration Eyes. But it’s important to get a group consensus. We need to please the majority of eyes out there amongst the Great Unwashed.
3) Analyzing stereospace on freeze frames can be misleading. You can work this way, but the final judgment needs to be done with the shots flowing, ideally in the actual cut. Generally they look worse stopped than moving, because the eye gets depth cues from motion as well as parallax. However, excessive strobing caused by the 24P display rate may actually worsen the comfort factor in some shots.
4) Convergence CANNOT fix stereo-space problems. This is critical to remember. Correct convergence does two things and ONLY two things: it allows the eye to fuse very quickly (ideally instantaneously) when cutting from one shot to another. And it can be used to reduce ghosting caused by bleed through of the glasses on high-contrast subjects in the background depth planes. The eye will fuse a given object in frame in direct proportion to how closely converged it is — more converged, faster fusion. You can only converge to one image plane at a time — make sure it is the place the audience (or the majority of the audience) is looking. If it’s Tom Cruise smiling, you know with 99% certainty where they’re looking. If it’s a wide shot with a lot of characters on different depth-planes doing interesting things, your prediction rate goes down.
5) Convergence is almost always set on the subject of greatest interest, and follows the operating paradigm for focus — the eyes of the actor talking. If focus is racked during the shot to another subject, then convergence should rack. An exception to the rule of following focus exactly is a shot with a strongly spread foreground object which is NOT the center of interest (such as in an OTS), in which case a convergence-split may be used (easing the convergence forward slightly, to soften the effect). This should be combined with control of interocular to yield a pleasing result. Convergence splits are limited by high contrast edges at the plane of interest, which may cause ghosting in passive viewing systems.
6) Interocular distance varies in direct proportion to subject distance from the lens: the closer the subject, the smaller the interocular; the farther the subject, the larger the interocular. A shot of the Grand Canyon from half a mile away may have a 5′ interocular. A shot of a bug from a few inches away may have a 1/4″ interocular. Interocular tolerance is subjective, but there is a constant value of background split which cannot be exceeded.
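That fixed ceiling on background split can be made concrete. The sketch below is my own illustration, not from Cameron's notes: it models two parallel cameras whose images are shifted to converge at a chosen distance, then asks how far apart the background ends up on the theater screen. All the numbers (focal length, sensor and screen widths) are assumptions for the example; the common rule of thumb is that the background split should not exceed the viewer's ~65 mm eye separation, or the eyes would have to diverge.

```python
# Sketch (assumption-laden illustration, not Lightstorm's actual math):
# on-screen background split for a parallel rig converged by image shift.

def screen_parallax_mm(interaxial_mm, focal_mm, sensor_width_mm,
                       screen_width_mm, converge_dist_mm, subject_dist_mm):
    """Parallax of a subject as projected on the theater screen.

    Positive result = behind the screen plane, negative = in front of it.
    Objects exactly at `converge_dist_mm` come out at zero parallax.
    """
    # Disparity on the sensor: f * t * (1/C - 1/Z)
    sensor_disparity = focal_mm * interaxial_mm * (
        1.0 / converge_dist_mm - 1.0 / subject_dist_mm)
    # Magnify sensor-space disparity up to the projection screen.
    return sensor_disparity * (screen_width_mm / sensor_width_mm)

# Hypothetical shot: 65 mm interaxial, 35 mm lens, Super 35-ish sensor,
# 12 m screen, converged at 4 m, background effectively at infinity.
p_far = screen_parallax_mm(interaxial_mm=65, focal_mm=35,
                           sensor_width_mm=24.9, screen_width_mm=12000,
                           converge_dist_mm=4000, subject_dist_mm=1e9)
print(f"background split: {p_far:.1f} mm on a 12 m screen")
print("exceeds ~65 mm eye separation" if p_far > 65 else "within budget")
```

Note how quickly a "natural" 65 mm interaxial blows the budget on a big screen: this is exactly why close shots need tiny interoculars.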
7) Interocular and convergence should both vary dynamically throughout moving shots.
8) In a composite, the foreground and background may want to have different interoculars. For example, in an OTS, the stereo-space between the two foreground characters may be compressed, and the stereospace in the background not. Conversely, in a problematic greenscreen comp where the interocular was baked in too wide, the background may be brought closer to some extent by shifting one eye horizontally relative to the other. These fixes only work in shots with an empty mid-ground between the foreground elements and the nearest objects in the background. This technique can be used or abused.
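The "shifting one eye horizontally" fix can be sketched in a few lines. This is my own illustration of the idea, with made-up pixel values: a horizontal image translation subtracts the same constant from every object's parallax, so the whole plane slides forward together. That is also why the rule about an empty mid-ground matters; anything between foreground and background would slide too.

```python
# Sketch (illustration only): horizontal image translation (HIT) applied
# to a background element whose interocular was baked in too wide.

def apply_hit(parallaxes_px, shift_px):
    """Uniform parallax offset from shifting one eye horizontally."""
    return [p - shift_px for p in parallaxes_px]

# Hypothetical comp: background objects baked in at +40..+55 px of
# parallax (too deep behind the screen).
background = [40, 48, 55]
print(apply_hit(background, 20))  # -> [20, 28, 35]: whole plane moves closer
```

The relative depth within the background is untouched; only its distance from the screen plane changes, which is the "to some extent" in the rule above.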
9) When stereo looks bad to the eye (visual cortex) it is important to eliminate the possible problems sequentially:
– Synch — the number one killer of young eyeballs.
– Reverse-Stereo — this will look equally egregious. Some shots may actually appear to almost work as stereo, but foreground objects will look “cut out”, as if you are looking through a window. Turning the glasses upside down is the test. If it improves, it’s reverse stereo.
NOTE: when a shot is FLOPPED editorially, the L and R eyes must be reversed, or you’ll get reverse stereo.
– Zoom Mismatch (technically it’s focal-length mismatch) — characterized by a radial interference pattern when L-R images are viewed overlaid. This can be a vexing source of brain shear.
– Vertical Alignment. The eye can tolerate a lot of horizontal alignment mismatch (this is equivalent to incorrect convergence) but very little vertical misalignment.
– Color or Density Mismatch. The brain is more sensitive to density mismatch than color, but both should be matched.
NOTE: with linear polarization, there will always be a slight magenta/cyan shift between the eyes. This should NOT be corrected in the color timing of the master, because some systems use circular polarization, which doesn’t have this shift.
– Render Errors or element drop-outs between eyes — some actual thing, object, shadow or lighting artifact is missing from one eye.
– Specular Highlights — because the angle of reflection is different for glossy or mirror surfaces as viewed from left or right eyes, highlights may exist in one eye but not the other.
– Lens Flare, matte box shadows — these may strike one lens, not the other.
– Image Warping — this can happen at the edges of frame with certain lenses, and can happen with warped beamsplitters.
– Movement or vibration which is different in L-R. This shows up in some camera systems (not ours). It takes a lot of jiggle between eyes to become apparent.
ONLY when all these possible sources of brain-shear have been eliminated, should inter-ocular be re-examined.
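Several items on that checklist reduce to separating the two components of disparity. The sketch below is my own diagnostic, not from the notes, using hypothetical matched feature points: the horizontal component corresponds to convergence (which the eye tolerates well), while the vertical component is misalignment (which it barely tolerates at all).

```python
# Sketch (my own diagnostic, assumed point data): split L/R disparity
# into horizontal (convergence) and vertical (misalignment) components.

def disparity_stats(left_pts, right_pts):
    """Mean horizontal and vertical disparity between eyes, in pixels."""
    dx = [r[0] - l[0] for l, r in zip(left_pts, right_pts)]
    dy = [r[1] - l[1] for l, r in zip(left_pts, right_pts)]
    return sum(dx) / len(dx), sum(dy) / len(dy)

# Hypothetical matched points from a stereo pair (x, y in pixels).
left  = [(100, 200), (640, 210), (1200, 480)]
right = [(112, 203), (655, 213), (1216, 483)]
h, v = disparity_stats(left, right)
print(f"horizontal {h:.1f} px (convergence), vertical {v:.1f} px (fix this)")
```

A large mean horizontal value just means the convergence choice is aggressive; any consistent vertical value is an alignment error and belongs on the fix list before interocular is second-guessed.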
10) Some shots just can’t be fixed. If they are photographic shots with the interocular baked in, they must be re-done or they must be left in the film as non-stereo shots (L-L). If they are CG shots, the interocular can be reduced to a very low value, to give a sense of some stereospace, even though it is inconsistent with the rest of the sequence — in the dramatic flow it will work.
First, let’s deal with the words I highlighted in RED. I believe the comment “Stereo is subjective” only applies to imagery unlike anything we experience in real life. We don’t see giant heads, and we don’t instantly see things from different perspectives. When things are artificial in size, and imagery is presented that would never be experienced in real life, then I totally agree that the extent to which it is effective is subjective.
Convergence, or the crossing of the eyes to triangulate on a represented point in space, is achieved by mimicking the movement of the eyes with the cameras; this is sometimes referred to as toe-in. There is a lot of disagreement about its use, and my take is that we don’t understand all of the ramifications yet. A big issue is that as we triangulate our eyes on objects coming towards us, the background doubles and blurs. Our brains block this out and we don’t even notice. However, when you do the same thing in a motion picture it looks weird and unnatural, because you are not in control of the eye triangulation — the cameras are doing it. So whenever you look at the background, it all falls apart. Therefore, the background is shot so that it does not double and go out of focus. This doesn’t exactly simulate a real-life visual experience. Here I think the comment about stereo being subjective couldn’t be more true.
Interocular distance is the spacing between the cameras. In real life, it is the distance between your eyes. We don’t normally think about it or notice it. The brain is designed to accept varying interocular distance because it happens in real life: when we are born, our eyes are much closer together than they are when we are adults. Did you ever return to your grammar school and wonder why the rooms looked so small, when you remembered them being big? When you looked at the room as a child, the spacing between your eyes was narrower and the room did in fact look bigger as a result. We adapt to this easily.
OTS = “over the shoulder,” a shot framed past the back of a foreground character’s head and shoulder. In stereo, that foreground shoulder typically sits in front of the screen plane, i.e. in negative parallax, which requires a crossing of the eyes to fuse.
Synch = time displacement where the image in the left eye doesn’t match the same moment in time as the image seen in the right eye. This is the equivalent of 3D heroin. It really messes you up.
Focal length mismatch is where the apparent distance is foreshortened in one eye as compared to the other eye. This is impossible in real life and therefore unpleasant to experience because your brain does not know how to process the imagery.
Vertical alignment can happen both in the production of the movie and at the theater if the theater is using a dual lens projection system. Indeed, I experienced it at an IMAX theater and it gave me eye strain. Your eyes are aligned vertically and they move together as you move your head. It is rare that one eye jumps up onto your eyebrow and changes alignment as compared to the other eye.
Color should be matched? Hmm. Someone should tell the anaglyph fans. The brain can process different colors in each eye but I think logically it makes sense to match the colors presented to each eye since that is what occurs in real life more or less. There ARE slight color shifts due to the interocular distance. Again, nobody seems to have fully studied this. I have been reading about it with regards to plenoptic cameras but the info is pretty sparse.
In general, I am impressed that these items are being discussed by Mr. Cameron and are being considered as part and parcel of his movie Avatar. But only 10 Rules? I think not. In my opinion this doesn’t even scratch the surface. We have much to learn.