Tag Archives: 3D problems

Is there a secret problem with depth maps?


It sounds like a great idea to utilize a depth map to extract information and control the depth depicted in an image or series of images. It sounds great for converting a 2D image into a 3D image. It sounds like a great tool for plenoptic cameras to interpolate the data into imagery with depth. Alpha channels are great to use for transparency mapping – so a depth map should be equally useful, shouldn’t it?

Take a look at this depth map:

icedepthmap

This is a depth map created from a plenoptic camera shot of a bunch of ice bits. It is a grayscale image whose 256 shades of gray indicate which parts of the ice are closer to the camera and which are farther away. This information is used to adjust the depicted depth by stretching or compressing pixels in proportion to their distance.
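As an aside for the technically curious, that stretch-and-compress operation can be sketched in a few lines. This is my own toy code, not the plenoptic software's actual algorithm; the function name and the 8-pixel maximum shift are illustrative assumptions:

```python
import numpy as np

def synthesize_view(image, depth, max_shift=8):
    """Crudely synthesize a second perspective from one image plus its
    depth map by shifting each pixel horizontally in proportion to its
    depth value. `image` is (H, W, 3) uint8; `depth` is (H, W) uint8
    with 255 = nearest to the camera. Returns the shifted view and a
    mask of pixels that received data (False = disocclusion hole)."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Per-pixel displacement: the nearest pixels move up to `max_shift` px.
    shift = (depth.astype(np.int32) * max_shift) // 255
    for y in range(h):
        for x in range(w):
            nx = x + shift[y, x]
            if nx < w:
                # Last write wins; a production implementation would
                # resolve overlaps by depth and inpaint the holes.
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    return out, filled
```

The `filled` mask marks the "holes" that open up behind foreground objects; guessing what belongs in those holes is one source of the visible errors this kind of map produces.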

Now check out a rocking animation that uses motion parallax to depict the depth (items closer to you appear to move differently than items that are farther away).

ice

Right away you can notice a few errors in the depth map; for complex images this is typical, and it can be edited and "corrected". But there is something else. Take a close look at the parts of the image where the depth map is seemingly correct. Sure, you can see the depth, but does it really look like ice? If you are like me, the answer is no. Ice reflects and scatters light in a way that is unique to each perspective. Indeed, there IS binocular rivalry: one eye sees reflections and distortions that are not present in the other eye's perspective, and this disparity tells us something about the texture and makeup of what we are looking at.

Stretching or compressing pixels eliminates this information and provides only depth cues about the spatial position of things. For most people, I suspect, this creates a perception conflict in the brain. There is something perceptually wrong with the image above: it does not look like ice, because the light coming off the two perspectives looks the same. A depth map provides no information about binocular rivalry and creates errors as a result, errors that can't be fixed. Herein lies the flaw in using a depth map: it throws away all of the binocular rivalry information. In other words, it throws away the information that differs between perspectives.

In my opinion, depth maps take the life out of an image. They remove important texture information which, I believe, is gleaned from how light shifts, changes, appears and disappears as you alter perspective.

This is the secret, fundamental flaw of depth maps. You can subjectively look at the image above and deem it cool and otherwise amazing. That is all well and good, but the truth is that, compared with looking at the real ice, it is fundamentally lacking; it does not depict what you see when you look at the ice in real life.

So people ask themselves whether this is important; some will say yes, some will say no, and there are many examples where you could argue both points of view. I don't have an argument with that. My position is only to point out that this flaw exists and should not be ignored.


1 Comment

Filed under 1, 3D, 3D Photography, autostereoscopic, S3D, stereopsis, stereovision

Why Stereoscopic 3D Fails


Why doesn’t the world embrace the illusion of S3D?

It is perhaps the most compelling visual effect that exists for cinema, and we have many examples of illusions becoming giant successes. So what is it about S3D that keeps holding it back and generating considerable negative press and reviews?

In my opinion there isn't a single, simple explanation, and that is probably why S3D fails to varying degrees in a world of elevator pitches and split-second decision making.

Two views, while sufficient to create a compelling illusion, do not fully resolve the confusion S3D creates by blurring referential imagery with what I call experiential imagery. We are used to seeing multiple perspectives and to being able to converge our eyes and focus on a specific point in space (unless, like this fellow, you are crossing your eyes).

We are used to having multiple confirming points of reference substantiating what we are looking at to be what we think we are looking at.

Some of the many simple explanations are:

–when the reason for looking at something isn't compelling enough, it is going to fail to get attention

–many people do not have normal stereovision

–the ability to suppress perception conflicts is not uniform across the population

A more complex explanation:

–requires education in the field of neuroscience.  The following questions must be addressed:
–How does the brain fuse the input from two eyes into a singular image that depicts space?
–Do we perceive the space between things or do we truly experience it with our vision system?

Many scientists argue that our entire visual system is an illusion that the brain creates, and doesn’t represent reality in the way we think that it does.

The mistake of picking a single simple explanation for success or failure is perhaps the reason S3D comes and goes. The saying "if you build it, they will come" does not always hold true.

So, why choose a career in 3D?

Given the above, why would I choose a career creating imagery that depicts the space between things? Because it is possible to create compelling images to view. It is possible to suppress perception conflicts. Many people DO have normal stereovision. And with knowledge from neuroscience, it is possible to take advantage of the brain's illusions.

But the main reason I go forward: art.

As a friend of mine is fond of saying: The Earth without art is simply “Eh”.

3 Comments

Filed under 1

That 3D Stuff Is Crap!


This is one of my favorite quotes because it very handily categorizes what I call the pet rock syndrome of 3D imagery. I tend to agree that 3D imagery presented as a gimmick is crap. To say that something is good solely on the basis that it can be perceived with depth is ridiculous.

Why do images need to be created with an illusion of depth?

That question rarely gets a thoughtful response. For many months, I asked myself "What has to be in 3D?" And upon reflection, the answer is "not much". Portraits don't have to be in 3D. We can infer depth from a traditional portrait very easily, whether a painting, sketch or photograph. The shadows and lighting depicted in a photograph can convey a sense of depth sufficient that a person looking at it doesn't feel the image is lacking. It is interesting how many 3D enthusiasts got that wrong, started a 3D portrait business, and were surprised when very few customers showed up. Hello! Business rule #1: identify the problem your customer has and how your product solves that problem. Having a 2D portrait isn't much of a problem for most people.

Do we really need to feel that the image we are looking at occupies the same space we are in? The answer is "not usually". If we did, would there be a bagillion 2D images on the internet? There are exceptions, though, and that is what I've been working on.

It became obvious to me, when I started looking at tattoos, that traditional photography was seriously lacking. Depicting a tattoo with traditional photography IS a problem. Shadows and lighting do not convey a correct sense of depth in images of tattoos, and a referential image with only depth clues is very problematic for depicting them. Tattoo art is unique in that it is experienced in the real world, within real-world space. To flatten it is to remove the essence of the art itself; what you are left with is an approximation, a reference missing context and the sense of the space it occupies. It wasn't until I started creating duplicate 2D versions of 3D prints that I realized just how shocking that difference is. The 2D photograph looks lifeless and abstract compared to the 3D image. But before I go on, I need to qualify that an AMPED 3D image is not just any old 3D image. Years of research and trial and error have shown me that getting a 3D image right is very complicated. Size matters. Lighting matters. Detail matters. Math, and learning how the brain fuses multiple perspectives into a single image with depth, matters. Get those things wrong, and a 3D image of a tattoo isn't very impressive.

Indeed, it has been several months and I feel I've only reached the baseline of what I need to know to make important AMPED 3D images of tattoos. Understanding the technical stuff, while difficult, was only part of the requirement. Understanding the storytelling and artistry is even more difficult, because it is not transferable from traditional photography's dogma and methods. When you employ immersion and space sharing, you are entering completely different territory from traditional referential photography. Traditional photography has no space and is removed from any sense of realness, which facilitates a much simpler set of rules.

What do I mean exactly?

Well, nobody ever gets confused about whether a photograph or painting is an actual window or a mirror reflecting real life. They know it is not a real mirror or a real person standing in the photograph. A photograph is a photograph, simple. Today, it is possible to create an AMPED 3D image that calls "realness" into question. People poke it and look at the back and sides, confused about where the space is coming from. An AMPED 3D image can engage emotionally in a completely different way from traditional imagery. This is a different experience that is not well understood. But as understanding grows, we will be able to transition from gimmick "crap" to a completely new art form with amazing possibilities. There are glimmers of quality out there, but quality is the exception and not the rule. In my opinion, Hollywood's obsession with gimmick 3D will impede its true potential for some time. 3D TV will take even longer, because there are simply too many shortcuts and a lack of willingness to thoroughly do the work of understanding the issues.

To sum it up, 3D is not going to replace traditional imagery. It can't, precisely because of what makes it desirable in the first place: a sense of realness. A referential 2D image is easy to interpret because it doesn't look "real". Conversely, a 3D image will be easier to experience (which is different from interpretation) the more it matches what we perceive in reality. This isn't a hard and fast rule, as there will always be exceptions. But generally, there is a bias toward instantly knowing what one is looking at, so as to easily and quickly make a determination about it.

Do we interpret the image, or experience it as something real? That is the question to which our brain wants an immediate answer, and it is a big problem for 3D imagery because we can't instantly categorize it; we have to get past accepting the illusion, which takes time. The key is the same as for all imagery: present something that people will want to look at in the first place. Where the bias is to interpret imagery in a referential way, traditional 2D imagery is always going to win out. Where the bias is to experience and share space with the image, 3D imagery will (potentially) be far more desirable. But the quality of the illusion is a rate-limiting piece of the puzzle.

Trust me, it won't take long for people to tire of the gimmick of stuff flying out of the screen or appearing to poke them in the eye. If something is going to be in 3D, there needs to be a compelling reason for it to be in 3D. Where a sense of immersion and sharing space with the imagery is integral to the experience, 3D is a giant benefit. Where thought, inference and referential interpretation are important, people are going to prefer traditional flat imagery for the most part. I say "for the most part" because novelty does play a role, and in many cases novelty can be fun.

It wouldn't surprise me to see future films feature both 3D and traditional 2D within the same film, along with black and white and different treatments of color and lighting. The sophistication of the general public is growing. The scope of our experience is changing at a much faster rate, and people are much more attuned to all sorts of image treatments, styles and levels of sophistication.

Ok, enough rambling… I'd love to hear other people weigh in on this! Is novelty enough to drive 3D into the mainstream? Do people really want to see stuff flying at them out of the screen?

Leave a comment

Filed under 1

3D – Present A Different Image To Each Eye – Simple, Right? Wrong!


Those of us with “normal” vision see the world with our two eyes in 3D all of the time. We see the space between things and perceive distance, size, texture, etc.

It's different when you go to a 3D movie or look at one of my autostereoscopic photographs, because you are looking at a flat surface and perceive 3D by way of an optical illusion. There are many subtle differences (and many not so subtle) between normal "seeing" and watching a 3D movie that are important to understand and consider. The biggest is that in normal viewing the 3D is "real": you see one object in front of another and perceive the distance, and when you reach out to touch the object, that perception is verified. A 3D movie, on the other hand, is created with an optical illusion. It is made possible because our eye/brain system has an amazing capability to decouple focus from convergence and perceive the illusion as if it were real.

Some of you are asking, “What does decouple focus from convergence mean?“.

For normal viewing, our eyes focus and converge on the same point in space (the thing we are looking at). Just like separate camera lenses, each eye focuses on the object independently; then our brain processes the two retinal images into a single image with depth. Because the focusing and the merging are done separately, we can perform a trick whereby we fool the brain into processing retinal images that converge at a point different from the focus distance. The only things the brain cares about are the alignment, size and similarity of the images on each retina; the focusing distance doesn't matter. Our brain just totally disregards this disparity… or does it? Since most people seem able to perceive 3D in 3D movies, we just make that assumption. My guess is that our brain does register focus distance, but that we suppress the conflict a 3D movie creates in the same way we suppress other perception conflicts (think of flying in an airplane, where our inner ear conflicts with what we see). Perhaps some of us can't suppress this conflict as easily and manifest some discomfort with the experience, such as a headache or nausea.
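To put rough numbers on the decoupling, here is a small sketch (my own illustration, with an assumed 63 mm eye spacing) that computes the vergence angle, the angle between the two eyes' lines of sight, for a cinema screen versus an object floating in front of it:

```python
import math

def vergence_angle_deg(ipd_m, distance_m):
    """Angle between the two lines of sight when both eyes are
    converged on a point `distance_m` away."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

IPD = 0.063   # assumed average adult interpupillary distance, ~63 mm

# Eyes FOCUS on the screen 10 m away but CONVERGE on an illusory
# object 2.5 m away -- two different angles held at the same time.
print(vergence_angle_deg(IPD, 10.0))   # ~0.36 degrees (screen)
print(vergence_angle_deg(IPD, 2.5))    # ~1.44 degrees (object)
```

The angles are tiny, which is part of why the trick works at all; the mismatch grows fast as objects are pushed closer to the viewer.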

However, it is generally thought of as a “given” that this isn’t a big deal and that perception conflicts occur in nature and the brain just “handles it”. I think that is probably true – but I’m not a scientist or doctor and it would be nice to read that my opinion has some basis in true science.  Perhaps it does and someone will comment?

You still might be wondering how a 3D movie decouples focus from convergence. The fact is that it has to. The motion picture screen must be the point of focus at all times: it is the source of the reflected light and the focus point of the projector at the back of the theater. The left-eye and right-eye images are offset from each other depending on whether the objects depicted should appear in front of the screen or behind it. The eyes converge more or less to align the objects, but the point of focus is always the screen surface. At this point, I just have to jump up and down and say THAT IS PRETTY AMAZING!!! Our brain is really incredible and adaptable in a way that we don't even have to think about.
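The offsets themselves follow from similar triangles. The sketch below is a simplified geometric model I'm adding for illustration (one viewer, seated on-axis, 63 mm eye spacing assumed), not a production formula:

```python
def screen_parallax_m(eye_sep_m, screen_dist_m, perceived_dist_m):
    """On-screen horizontal offset between the left- and right-eye
    images needed to make an object appear `perceived_dist_m` from
    the viewer. Zero at the screen plane, negative (crossed) in front
    of it, approaching the full eye separation at infinity."""
    return eye_sep_m * (perceived_dist_m - screen_dist_m) / perceived_dist_m

EYES, SCREEN = 0.063, 10.0   # 63 mm eyes, 10 m to the screen

for depth_m in (5.0, 10.0, 40.0):
    offset_mm = 1000 * screen_parallax_m(EYES, SCREEN, depth_m)
    print(f"object at {depth_m:>4} m -> image offset {offset_mm:+.1f} mm")
```

Note the asymmetry: depth behind the screen is capped at an eye-width of offset, while depth in front of the screen demands ever larger crossed offsets, which is why aggressive negative parallax is so hard on the eyes.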

Stay tuned for more… coming soon ;^)

3 Comments

Filed under 3D, 3D Motion Picture, S3D, stereovision

We will be exhibiting at the “It’s a baby & family expo” in Boston March 20 & 21st


Come visit us at the “It’s a baby & family expo” at the Boston Bayside Expo Center this weekend (10AM – 5PM on Saturday and 11AM – 5PM on Sunday March 20-21, 2010) where we will be showing our latest multiperspective 3D life size photographs of adorable babies. We will have many show specials and special discounts for attendees. Info on the show is available at www.itsababyexpo.com

Yesterday, we did a very small baby show event at a hotel as a trial run to gauge the level of enthusiasm for our photographic technology. We were very pleased with how people reacted to our special photographs. Indeed, the level of excitement was so high that I was convinced we needed to exhibit at a larger show and therefore booked the expo in Boston. 

As part of the expo in Boston, we will be asking people to explain what they see on our webcam. We will edit and upload the video to our website after the show for people to watch and vote for the best explanation of what they see. Surprisingly, I’ve found a broad range of reaction to the baby photos and think it will be very interesting to evaluate all of the responses at a tradeshow setting.

If you have ideas for other questions please comment below. So far, I plan to only ask people to describe what they see when they look at the photograph.  Here is an example of what someone said at the show event yesterday:

I expect that not everyone will have a glowing reaction to the photographs. I’m really keen to hear what stereoblind people have to say about them and also people who perhaps have some vision difficulties with regards to fusion. Perhaps some just might not like them because they are 3D and don’t think photos should be in 3D. Reminds me of people who argued that movies shouldn’t have sound. Somewhere there is always a critic with a different point of view.  Surprisingly, to date everyone has had positive things to say.

I’m looking forward to your comments regarding questions I should ask. Thanks!

Leave a comment

Filed under 1, 3D Baby Photography

News? “Science Proves 3-D Movies Hurt Your Brain”


I came across a link ("3D Hurts Your Brain") to a "news" article titled "Science Proves 3-D Movies Hurt Your Brain" and am compelled to comment. How they go from the UC Berkeley study, which says 3D movies CAN cause eyestrain and headaches, to a headline that shouts 3D MOVIES HURT YOUR BRAIN is nutty journalism. It is as if "3D movies" were all one thing. That just isn't the case at all.

If I told you to hold a pencil six inches away from your nose and stare at the eraser for an hour — guess what? That could cause eyestrain. And if a 3D movie has copious amounts of negative parallax (stuff coming out of the screen) and your eyes are trying to focus on the screen plane but converge much closer, then YES it causes many people to experience eyestrain.  If the scene has a lot of camera shake for those crash and burn scenes your brain will sense motion with your binocular vision but your inner ear will tell the brain you are sitting still. Conflicts of this nature create motion sickness. It is no different than spinning on a merry-go-round where your inner ear has conflicting information to other perceptions as you spin around.
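The pencil example can be put in rough numbers. Optometrists measure focusing demand in diopters (the reciprocal of the distance in meters); the sketch below is my own simplification, with distances chosen only for illustration:

```python
def diopters(distance_m):
    """Accommodative or vergence demand expressed in diopters."""
    return 1.0 / distance_m

def conflict(focus_dist_m, converge_dist_m):
    """Mismatch (in diopters) between where the eyes focus and where
    they converge -- zero in natural viewing."""
    return abs(diopters(focus_dist_m) - diopters(converge_dist_m))

# Pencil eraser six inches (~0.15 m) from the nose: focus and vergence
# agree, so there is no conflict -- just an extreme ~6.7 D near demand.
print(conflict(0.15, 0.15))   # 0.0

# 3D cinema: focus on a screen 10 m away, converge 2 m away.
print(conflict(10.0, 2.0))    # ~0.4 D of mismatch
```

Sitting closer to the screen, or pushing objects farther out of it, both increase the mismatch, which matches the article's own caveat about seating position.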

It would have been more productive for the article to state that “poorly implemented 3D camera work can cause eyestrain and motion sickness in the cinema – especially if you sit close to the screen”. Of course, that wouldn’t be a sensational headline. Accurate perhaps, but not sensational.

I think if any alarm bells should be going off, they shouldn't be about movies. They should be about video games, where kids spend hours and hours in front of a computer monitor, soon to be a 3D monitor, where perceptual learning and development can be affected. Again, use the stare-at-the-pencil-eraser example. Any atypical eye focus and direction held for prolonged periods can't be good; it is a kind of vision therapy, different from normal "view the world" seeing. Stare at a pencil eraser for hours at a time, day after day, and it doesn't take a brain surgeon to figure out that you might create problems looking at things in the distance. You are "teaching" your eye/brain to constantly focus on a pencil eraser 6″ from your nose, creating a preference for that type of viewing. The eye/brain connection in this case would merely be trying to adapt to a vision requirement.

In my opinion, 3D content will continue to improve and people will come to understand that there is a difference between quality 3D and poor 3D. Avatar was certainly much better technically than any other 3D movie to date. Can it be better? Yes! Could it be worse? Yes! There are many new 3D technologies currently in the lab that will be amazing when they eventually come to market in the next five to ten years. Unfortunately, right now there is a mad rush to capitalize on Avatar's success, and all sorts of companies are popping up claiming to do 2D-to-3D movie conversions. That means a lot of 3D crap is coming to market.

Buyer beware!

3D is art and science, not a commodity or ingredient to be added on demand.

Leave a comment

Filed under 1

10 Rules For Good Stereo


This information originates from:
http://fullres.blogspot.com/2009/07/bad-stereo-aka-brain-shear.html
My comments about it are at the end and relate to the words highlighted in red.

———————————————————–

Bad Stereo aka Brain Shear
 
The below is James Cameron’s viewpoint on good stereo as notated by Jon Landau. Many thanks to Chuck Comisky from Lightstorm Entertainment for bringing us this information.

Brain Shear: “the brain’s inability to reconcile the images received by the left and right eyes into a coherent stereo image, which causes it to send corrective messages to the eye muscles, which try to compensate but can’t fix the problems baked into the image on the screen, creating an uncomfortable feedback loop and physical fatigue of eye muscles, which causes the eye muscles to scream at the brain to f— off, at which point the brain decides to fuse the image the hard way, internally, which may take several seconds or not be possible at all — all of which leads to headache and sometimes nausea.”

People will not pay extra for this.

To prevent brain shear you should follow the New Rules of Stereo, also known as…

10 RULES FOR GOOD STEREO

1) THERE IS NO SCREEN. Whenever somebody starts talking about stuff coming “off the screen”, ignore them. They are charlatans. The brain does not think there’s a screen there at all. It is fooled into thinking there is a window there — a window looking through into an alternate reality. In fact, the brain is barely aware of the boundaries of that window, or of how far away that window is, which is why objects which break the frame edges may be shot at distances closer than the actual screen plane — which classical stereography texts will tell you won’t work. Not only does it work, it is ESSENTIAL to doing good narrative 3D that this old rule be broken as frequently as possible. The exception to the new rule is when doing an “eye-poker” gag. If you’re bringing something very close to the audience’s noses as a featured visual flourish, that object (or the nearer part of it) should not break frame.

2) Stereo is very subjective. No two people process it exactly the same. Dr. Jim of course has the reference eyes, also known as the Calibration Eyes. But it’s important to get a group consensus. We need to please the majority of eyes out there amongst the Great Unwashed.

3) Analyzing stereospace on freeze frames can be misleading. You can work this way, but the final judgment needs to be done with the shots flowing, ideally in the actual cut. Generally they look worse stopped than moving, because the eye gets depth cues from motion as well as parallax. However, excessive strobing caused by the 24P display rate may actually worsen the comfort factor in some shots.

4) Convergence CANNOT fix stereo-space problems. This is critical to remember. Correct convergence does two things and ONLY two things: it allows the eye to fuse very quickly (ideally instantaneously) when cutting from one shot to another. And it can be used to reduce ghosting caused by bleed through of the glasses on high-contrast subjects in the background depth planes. The eye will fuse a given object in frame in direct proportion to how closely converged it is — more converged, faster fusion. You can only converge to one image plane at a time — make sure it is the place the audience (or the majority of the audience) is looking. If it’s Tom Cruise smiling, you know with 99% certainty where they’re looking. If it’s a wide shot with a lot of characters on different depth-planes doing interesting things, your prediction rate goes down.

5) Convergence is almost always set on the subject of greatest interest, and follows the operating paradigm for focus — the eyes of the actor talking. If focus is racked during the shot to another subject, then convergence should rack. An exception to the rule of following focus exactly is a shot with a strongly spread foreground object which is NOT the center of interest (such as in an OTS), in which case a convergence-split may be used (easing the convergence forward slightly, to soften the effect). This should be combined with control of interocular to yield a pleasing result. Convergence splits are limited by high contrast edges at the plane of interest, which may cause ghosting in passive viewing systems.

6) Interocular distance varies in direct proportion to subject distance from the lens: The closer the subject, the smaller the interocular. The farther the larger. A shot of the Grand Canyon from half a mile away may have a 5′ interocular. A shot of a bug from a few inches away may have a 1/4″ interocular. Interocular tolerance is subjective, but there is a constant value of background split which cannot be exceeded.

7) Interocular and convergence should both vary dynamically throughout moving shots.

8) In a composite, the foreground and background may want to have different interoculars. For example, in an OTS, the stereo-space between the two foreground characters may be compressed, and the stereospace in the background not. Conversely, in a problematic greenscreen comp where the interocular was baked in too wide, the background may be brought closer to some extent by shifting one eye horizontally relative to the other. These fixes only work in shots with an empty mid-ground between the foreground elements and the nearest objects in the background. This technique can be used or abused.

9) When stereo looks bad to the eye (visual cortex) it is important to eliminate the possible problems sequentially:

– Synch — the number one killer of young eyeballs.

– Reverse-Stereo — this will look equally egregious. Some shots may actually appear to almost work as stereo, but foreground objects will look “cut out”, as if you are looking through a window. Turning the glasses upside down is the test. If it improves, it’s reverse stereo.
NOTE: when a shot is FLOPPED editorially, the L and R eyes must be reversed, or you’ll get reverse stereo.

– Zoom Mismatch (technically it’s focal-length mismatch) — characterized by a radial interference pattern when L-R images are viewed overlaid. This can be a vexing source of brain shear.

– Vertical Alignment. The eye can tolerate a lot of horizontal alignment mismatch (this is equivalent to incorrect convergence) but very little vertical misalignment.

– Color or Density Mismatch. The brain is more sensitive to density mismatch than color, but both should be matched.
NOTE: with linear polarization, there will always be a slight magenta/cyan shift between the eyes. This should NOT be corrected in the color timing of the master, because some systems use circular polarization, which doesn’t have this shift.

– Render Errors or element drop-outs between eyes — some actual thing, object, shadow or lighting artifact is missing from one eye.

– Specular Highlights — because the angle of reflection is different for glossy or mirror surfaces as viewed from left or right eyes, highlights may exist in one eye but not the other.

– Lens Flare, matte box shadows — these may strike one lens, not the other.

– Image Warping — this can happen at the edges of frame with certain lenses, and can happen with warped beamsplitters.

– Movement or vibration which is different in L-R. This shows up in some camera systems (not ours). It takes a lot of jiggle between eyes to become apparent.

ONLY when all these possible sources of brain-shear have been eliminated, should inter-ocular be re-examined.

10) Some shots just can’t be fixed. If they are photographic shots with the interocular baked in, they must be re-done or they must be left in the film as non-stereo shots (L-L). If they are CG shots, the interocular can be reduced to a very low value, to give a sense of some stereospace, even though it is inconsistent with the rest of the sequence — in the dramatic flow it will work.

My Comments:

First, let's deal with the words I highlighted in RED. The comment "Stereo is subjective", I believe, only relates to imagery that is not what we experience in real life. We don't see giant heads, and we don't instantly see things from different perspectives. When things are artificial in size, and imagery is presented that would never be experienced in real life, then I totally agree that the extent to which it is effective is subjective.

Convergence, the crossing of the eyes to triangulate on a represented point in space, is achieved by mimicking the movement of the eyes with the cameras, sometimes referred to as toe-in. There is a lot of disagreement about its use, and my take is that we don't understand all of the ramifications yet. A big issue is that as we triangulate our eyes on objects coming toward us, the background doubles and blurs; our brains block this out and we don't even notice. When you do the same thing in a motion picture, however, it looks weird and unnatural, because you are not in control of the triangulation, the cameras are, so whenever you look at the background it all falls apart. Therefore, the background is shot so that it does not double and go out of focus. This doesn't exactly simulate a real-life visual experience. Here I think the comment about stereo being subjective couldn't be more true.

Interocular distance is the spacing between the cameras; in real life, it is the distance between your eyes. We don't normally think about it or notice it. The brain is built to accept varying interocular distance because it varies in real life: when we are born, our eyes are much closer together than they are when we are adults. Did you ever return to your grammar school and wonder why the rooms looked so small when you remembered them being big? When you looked at those rooms as a child, the spacing between your eyes was narrower, and the rooms did in fact look bigger as a result. We adapt to this easily.
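For anyone curious how practitioners pick a camera spacing in the first place, a common starting point (a rule of thumb from general stereography practice, not one of Cameron's rules) is the "1/30 rule": an interaxial of roughly one thirtieth of the distance to the nearest subject. As the 5-foot Grand Canyon figure in the rules above shows, real productions routinely deviate from it:

```python
def interaxial_rule_of_thumb(nearest_subject_m, divisor=30.0):
    """The '1/30 rule': camera separation ~= distance to the nearest
    subject divided by 30. `divisor` is often raised for long lenses
    and distant scenes, lowered for wide-angle close work."""
    return nearest_subject_m / divisor

# Half a mile (~800 m) to the canyon: the rule suggests ~27 m, far
# wider than the conservative 5-foot figure quoted above.
print(interaxial_rule_of_thumb(800.0))
# A bug a few inches (~0.15 m) away: ~5 mm.
print(interaxial_rule_of_thumb(0.15))
```

Either way, the qualitative point stands: baseline scales with subject distance, and the brain copes with the variation.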

OTS = an "over the shoulder" shot, where the subject is framed by a foreground character's shoulder and head, which typically sit well in front of the screen plane (negative parallax).

Synch = time displacement where the image in the left eye doesn’t match the same moment in time as the image seen in the right eye. This is the equivalent of 3D heroin. It really messes you up.

Focal length mismatch is where the apparent distance is foreshortened in one eye as compared to the other eye. This is impossible in real life and therefore unpleasant to experience because your brain does not know how to process the imagery.

Vertical alignment can happen both in the production of the movie and at the theater if the theater is using a dual lens projection system. Indeed, I experienced it at an IMAX theater and it gave me eye strain. Your eyes are aligned vertically and they move together as you move your head. It is rare that one eye jumps up onto your eyebrow and changes alignment as compared to the other eye.

Color should be matched? Hmm. Someone should tell the anaglyph fans. The brain can process different colors in each eye but I think logically it makes sense to match the colors presented to each eye since that is what occurs in real life more or less. There ARE slight color shifts due to the interocular distance. Again, nobody seems to have fully studied this. I have been reading about it with regards to plenoptic cameras but the info is pretty sparse.

In general, I am impressed that these items are being discussed by Mr. Cameron and are being considered as part and parcel of his movie Avatar.  But only 10 Rules? I think not. In my opinion this doesn’t even scratch the surface. We have much to learn.

2 Comments

Filed under 3D Photography