Tag Archives: multi perspective imagery

Is there a secret problem with depth maps?

It sounds like a great idea to use a depth map to extract information and control the depth depicted in an image or series of images. It sounds great for converting a 2D image into a 3D image. It sounds like a great tool for plenoptic cameras to interpolate their data into imagery with depth. Alpha channels are great for transparency mapping – so a depth map should be equally useful, shouldn't it?

Take a look at this depth map:

This is a depth map created from a plenoptic camera shot of a bunch of ice bits. It is a grayscale image with 256 shades of gray indicating which parts of the ice are closer to the camera and which are farther away. This information is used to adjust the apparent depth by stretching or compressing pixels.
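Pixel stretching of this kind can be sketched in a few lines. What follows is a minimal, illustrative depth-image-based warp, not the software behind any particular camera; the function name and the max_shift parameter are my own inventions:

```python
import numpy as np

def synthesize_view(image, depth, max_shift=8):
    """Warp a 2D image into a new perspective using an 8-bit depth map.

    image:     (H, W, 3) uint8 array
    depth:     (H, W) uint8 array, 255 = nearest, 0 = farthest
    max_shift: horizontal shift in pixels for the nearest plane
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    # Per-pixel disparity: nearer pixels travel farther between views.
    shift = (depth.astype(np.float32) / 255.0 * max_shift).astype(np.int32)
    cols = np.arange(w)
    for y in range(h):
        x_new = np.clip(cols + shift[y], 0, w - 1)
        # Forward-warp each row; collisions simply overwrite, and the
        # vacated pixels stay black. A real renderer would resolve
        # occlusions by depth ordering and inpaint the holes.
        out[y, x_new] = image[y, cols]
    return out
```

Note that all this sketch ever does is move existing pixels sideways; no new light information is created, which is exactly the limitation discussed below.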

Now check out a rocking animation that uses motion parallax to depict depth (items closer to you appear to move differently than items farther away).


Right away you can notice a few errors in the depth map. For complex images this is typical, and the errors can be edited and “corrected”. But there is something else. Take a close look at the parts of the image where the depth map is seemingly correct. Sure, you can see the depth, but does it really look like ice? If you are like me, the answer is no.

Ice reflects and scatters light in a way that is unique to each perspective. Indeed, there IS binocular rivalry: one eye sees light reflection and distortion that is not present in the other eye’s perspective. This disparity tells us something about the texture and makeup of what we are looking at. Stretching or compressing pixels eliminates this information and only provides depth cues relating to the spatial position of things. For most people, I suspect, this creates a perception conflict in the brain.

There is something perceptually wrong with the image above. It does not look like ice because the light coming off the two perspectives looks the same. A depth map carries no information about binocular rivalry and creates errors as a result – errors that can’t be fixed. Herein lies the flaw in using a depth map: it throws away all of the binocular rivalry information. In other words, it throws away the information that differs between perspectives.

In my opinion, depth maps take the life out of an image. They remove important texture information which, I believe, is gleaned from how light shifts and changes and appears and disappears as you alter perspective.

This is the secret fundamental flaw with depth maps. Now you can subjectively look at the image above and deem it to be cool and otherwise amazing. That is all well and good, but the truth is that, compared with looking at the real ice, it is fundamentally lacking and does not depict what you see when you look at the ice in real life.

So, people ask themselves if this is important and some will say yes and some will say no. And there are many examples where you could argue both points of view. I don’t have an argument with that. My position is only to point out that this flaw exists and it should not be ignored.



Filed under 1, 3D, 3D Photography, autostereoscopic, S3D, stereopsis, stereovision

AMPED 3D / AMPED360 & Almont Studios – Coming Soon!

Everything always takes longer than you plan. Close to seven years ago, I believed I would be where I am now in about six to eight months. Being years off schedule is certainly a reason for companies to go out of business, and I’ve been close a few times. But what keeps me going is the incredible results and potential of autostereoscopic multi perspective extended (high) dynamics imagery (aka AMPED 3D).

The recently added rotational aspect of my photography has added yet another exciting component to what is possible. So has the introduction of nanotechnology manufacturing techniques in the creation of integral lens material. We are now within striking distance of large high-resolution displays with directional lens overlays that offer stunning multi perspective imagery with projected light. A knob that gives the viewer control over rotation is simple to operate, yet complex in terms of the amount of information a viewer can attain in a very short time. But there is an unexpected twist: the smartphone. Going smaller can be just as compelling as going larger, in ways that I am only starting to understand.

But let’s back up a second.

Part of the reason this integration can be so easy is that experiential imagery is intuitive. We do it every day in real life. But we have always kept referential imagery (traditional paintings and photographs) separate from experiential imagery. The reasons are easy to understand: it is insanely difficult to recreate what we see in real life in an artificial way. The good news is that it is becoming easier and more affordable. These facts have forced me not to give up and to keep pushing, because there is a tremendous future ahead in terms of setting new standards for imagery and viewing possibilities.

A recent breakthrough: the Oculus Rift (http://www.oculusvr.com) is blowing people away in terms of what is starting to be possible. Experiential imagery is coming on the scene like a tsunami, and artists are not prepared. The “rules” for referential imagery simply don’t work for experiential imagery. It isn’t a visual effect; it is completely new and different in ways that branch out like limbs on a tree. Experiential imagery is processed in different parts of the brain, much of it in the subconscious, with instinctual (emotional) interpretation. The closest metaphor or comparison I can think of is watching a Broadway play as compared to watching a movie. You can truly experience a play with live actors on a stage in ways that are impossible within the confines of a motion picture. However, you are constrained. You aren’t part of the play; you are in the audience. New technology changes that in ways that are huge to think about.

The integral lens I mentioned above has slowed me down on the project I’m working on with Alex Grey. However, stay tuned as what I am exploring will certainly be worth the wait. The New York City Tattoo Convention also has given me a huge boost in terms of testing the rotational imaging model. The results met my expectations and now that business can move forward.

These are exciting times. Prepare to enjoy!


Filed under 1

Two Perspective 3D Vs. Multi Perspective

Do we really see only one perspective in each eye? The answer is NO, and that is one of the many flaws inherent in 3D cinema and 3D televisions. We see multiple perspectives because we move our head and body as we breathe. Watching 3D content with only two perspectives inherently creates perception conflicts as you move your head, because the perspectives do not change with that movement. It is very weird and unnatural to see objects in 3D space move as your head moves. In real life, the world does not move with your head. You are able to look around objects and use motion parallax to clarify ambiguities that can’t be resolved with stereovision alone.

I read a great deal about the concern regarding focus and convergence being separated, but very little with regard to the lack of multiple perspectives.

Last night while walking the dog and looking at the trees I saw a perfect example of what motion parallax and multiple perspectives provide. As it got darker, the tree branches became less detailed and more like dark silhouettes. They would have appeared as cutouts against the dark sky were it not for motion parallax, which provided the added depth cues my brain needed to latch on to and interpret the space between the branches.

The above is certainly good reason for 3D cinema photographers to include dolly shots whenever possible, as some motion parallax, even if separated from head movement, is better than a static shot, which forfeits the benefits of motion parallax entirely.

I am now shooting 36 perspectives whenever possible to provide, with my AMPED 3D process, the ability to see a different perspective with subtle movements of the head. I’d like to have even more perspectives, as I have discovered that more is better! However, artificially constructed “tweened” perspectives are NOT the solution, in my humble opinion. Too much information is lost, and the imagery is simply artificial. The current solutions are more cameras or lens array (plenoptic) cameras. We are still a ways away from an elegant solution.


Filed under 1

Perspective Interpolation – Specularity and Refraction Problems

So, how about converting 2D to 3D or converting two perspective 3D into multi-perspective autostereoscopic… Technology certainly should easily make that possible, right?

The answer is a bit complicated, because for some images it is quite possible to achieve excellent results. Unfortunately, for many images and scenes it truly is impossible to create accurate 3D from 2D and/or to interpolate additional perspectives for autostereoscopic displays.

Case in point? Look at the animation below:

In the background painting there are tiny bits of highly reflective particles embedded in oil paint. These particles reflect bright points of light depending upon the perspective. They “come on” quickly as you change perspective because the paint occludes them: you see them in one eye but not the other. Any program that interpolates views would not know what to do with a picture like this. Morph the dots of light? In real life they don’t morph; they pop on, the light brightening as the perspective angle changes.
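To see why, consider what a naive pixel-blending interpolator does to such a glint. This sketch is purely illustrative (the function and values are my own, not taken from any conversion product): a highlight that is dark in one view and fully lit in the next gets blended into a half-bright ghost in every synthesized in-between view, instead of popping on at some angle the way a real reflection does.

```python
import numpy as np

def tween(view_a, view_b, t):
    # Naive "in-between" view: a per-pixel linear blend of two
    # captured perspectives, with t running from 0 (view A) to 1 (view B).
    return (1 - t) * view_a + t * view_b

# A specular glint that is absent in view A (0) and fully lit in view B (255).
a = np.array([0.0])
b = np.array([255.0])

mid = tween(a, b, 0.5)
# The interpolated view contains a half-bright smear (127.5), something
# no real perspective between A and B ever shows: in reality the glint
# is either occluded or fully visible.
```

The blend is smooth where reality is discontinuous, which is exactly the binocular-rivalry information a warp-based interpolator cannot invent.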

Now take a close look at the glass gems. Notice how their specularity is influenced by their position relative to the background? Notice the refraction as you see the background through the transparent glass. Unless you modeled the gems in a 3D program and rendered them, there would be no way for a pixel-warping program to interpolate what is going on with the look and texture of these gems as they change perspective.

What typically happens with a conversion is an abysmal mess for items with specularity and refraction. It looks 3D for sure – but it is in no way representative of reality. And this is the conundrum: there is no uniformity or consistency with regard to 2D-to-3D conversions or two-perspective-to-multi-perspective conversions. It is completely content based, and the results depend upon the subject matter.

Binocular disparity and, as this example demonstrates, binocular rivalry – where one perspective contains elements not visible in the other – create monumental problems for conversion.

The solution? Shoot multiple perspectives. And this is the path I have been forced to take to create consistent and uniform results. Indeed, fewer than 10 perspectives does not yield quality, uniform results in my humble opinion. Can fewer than 10 perspectives work? Yes, if what you are photographing has no specularity or refraction properties and the texture is smooth and uniform. But as an artist, I find that restriction way too limiting, and I live in a world that consists mostly of refractive material (water) and glass and gems and metals. Indeed, just look around: the world is filled with specular and refractive content.

Even portraits pose a problem, because unless the person has extremely dry eyes, their eyes glisten: the moisture that coats the eye creates specularity and refraction. Of course, if you don’t shoot a close-up, or if you reduce the resolution, it isn’t that noticeable. But here again, as an artist I find that too limiting.

I do not understand the willingness of people to ignore these problems. It is true that in many cases specularity and refraction are subtle and nuanced. But given that 3D mimics the way we see real life, shouldn’t 3D be subtle and nuanced? Perhaps the grossly overemphasized poke-you-in-the-eye effects are doing the potential of 3D a disservice?

That’s my view. But what do I know?


Filed under 3D, 3D Photography, autostereoscopic, S3D, stereopsis

Why use more than 2 perspectives for a 3D image?

Every time I go out with my camera rig I get asked the same question: “Why do you have so many cameras?”

The answer is actually somewhat complicated. A 3D motion picture is created with two cameras, and each camera’s image is presented discretely to each eye by the use of polarized filters – or, in the case of home 3D-capable TVs, usually shutter glasses.

If you want glasses-free 3D viewing, then the ability to show a discrete image to each eye is more difficult to achieve. And if you can move while looking at the image, something weird happens. Try this at your next 3D movie! Get up and walk around while looking at the screen. The images follow you around because your perspective in space is fixed. It is a very weird sensation.

If you aren’t fixed to a chair and are walking around, then you have a constantly changing perspective in real life. Holograms provide a pure, seamless array of perspectives – a fantastic feature, but as I have mentioned elsewhere on this site, holograms are very limited in terms of the pictures you can take and do not have real-world colors or any color accuracy at this point in their development.

A lenticular lens array approach makes it possible to direct different images to each eye (that’s what the lens on the print overlay does). The smoothness of transition from one perspective to the next is a direct function of the number of cameras you have: more cameras provide a smoother transition from perspective to perspective. As you look at a lenticular print and move, you see a change in perspective that compares to what you see in real life. (In my photographs, I take great pains to make sure that is the case – but practically everyone else does not consider this important.) If you are going for an effect, then I suppose it isn’t important. But if you are trying to emulate reality, it is extremely important.
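The geometry behind that smoothness is easy to sketch. Assuming an illustrative 40-degree viewing fan (real lenticules vary, and the function here is my own, not from any lenticular tool), each lenticule quantizes the viewer’s angle into one of N interleaved views, so the angular step between adjacent perspectives shrinks as the camera count grows:

```python
def view_index(angle_deg, num_views, fan_deg=40.0):
    """Map a viewing angle to one of num_views interleaved images.

    angle_deg: viewer's angle relative to the print normal
    fan_deg:   total angle the lenticule spreads the views across
               (an illustrative value here; real lenses differ)
    """
    # Normalize the angle into [0, 1) across the fan, then quantize
    # down to a view slot.
    t = (angle_deg + fan_deg / 2) / fan_deg
    t = min(max(t, 0.0), 1.0 - 1e-9)
    return int(t * num_views)

# With 2 views the step between perspectives is 20 degrees; with 36
# views it is about 1.1 degrees, so a small head movement lands on a
# genuinely different photograph instead of an abrupt jump.
```

Under this model, walking past the print sweeps the index smoothly through the camera array, which is why more cameras read as more natural motion parallax.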

Providing the experience of multiple perspectives greatly enhances the realism of a 3D image, and that is why I believe the two-perspective approach used at the movies will eventually go away: it is simply more limiting.

Want to know more? Comment with questions and I’ll be glad to tell you what I have found out and experienced through trial and error.


Filed under 3D Photography