Tag Archives: 2D to 3D conversion

Is there a secret problem with depth maps?


It sounds like a great idea to use a depth map to extract depth information and control the depth depicted in an image or series of images. It sounds great for converting a 2D image into a 3D image. It sounds like a great tool for plenoptic cameras to interpolate their data into imagery with depth. Alpha channels are great for transparency mapping – so a depth map should be equally useful, shouldn’t it?

Take a look at this depth map:

[Image: ice depth map]

This is a depth map created from a plenoptic camera shot of a bunch of ice bits. It is a grayscale image with 256 shades of gray indicating which parts of the ice are closer to the camera and which parts are farther away. That information is used to adjust the depth of the closer and farther bits by stretching or compressing pixels.
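For readers who want to see the mechanics, here is a minimal sketch of that stretching-and-compressing step in Python. The function name, the default shift amount, and the brighter-means-closer convention are my own illustrative assumptions, not anything taken from the camera’s software; the only point is that each pixel is displaced in proportion to its depth value, and nothing else about it changes.

```python
import numpy as np

def shift_view(image, depth_map, max_shift=8.0):
    """Synthesize a shifted view by displacing pixels horizontally.

    image:     H x W x 3 array (the original photograph)
    depth_map: H x W array of values 0-255, brighter assumed to mean closer
    max_shift: horizontal displacement, in pixels, for the nearest points
    """
    h, w = depth_map.shape
    out = np.zeros_like(image)
    # Displacement grows with "closeness"; the farthest pixels barely move.
    disparity = (depth_map.astype(np.float32) / 255.0) * max_shift
    for y in range(h):
        for x in range(w):
            new_x = int(round(x + disparity[y, x]))
            if 0 <= new_x < w:
                out[y, new_x] = image[y, x]
    # The gaps left behind are the "stretching": they have to be filled by
    # guessing (inpainting), because no second viewpoint was ever recorded.
    return out
```

Note that the warp only moves existing pixels around; it never adds the view-dependent light that a second lens would have recorded.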

Now check out a rocking animation that uses motion parallax to depict the depth (items closer to you appear to move differently than items that are farther away).

[Animation: ice]
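The rocking effect above is essentially the same warp applied over and over while the virtual viewpoint swings back and forth. Here is a sketch of that loop, building on the hypothetical shift_view() above; the frame count and swing amplitude are arbitrary choices for illustration.

```python
import numpy as np

def rocking_frames(image, depth_map, n_frames=16, max_shift=8.0):
    """Build a back-and-forth motion-parallax loop from one image + depth map.

    The virtual viewpoint swings left and right, so nearby pixels slide
    farther per frame than distant ones -- that differential motion is the
    only depth cue the animation carries.
    """
    frames = []
    for i in range(n_frames):
        # Sinusoidal swing: 0 -> +max_shift -> 0 -> -max_shift -> 0.
        amount = max_shift * np.sin(2.0 * np.pi * i / n_frames)
        frames.append(shift_view(image, depth_map, max_shift=amount))
    return frames
```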

Right away you can notice a few errors in the depth map; for complex images this is typical, and it can be edited and “corrected”. But there is something else. Take a close look at the parts of the image where the depth map is seemingly correct. Sure, you can see the depth, but does it really look like ice? If you are like me, the answer is no. Ice reflects and scatters light in a way that is unique to each perspective. Indeed, there IS binocular rivalry: one eye sees reflections and distortions that are not present in the other eye’s view. This disparity tells us something about the texture and makeup of what we are looking at.

Stretching or compressing pixels eliminates that information and provides only depth cues about the spatial position of things. I suspect that for most people this creates a perception conflict in the brain. There is something perceptually wrong with the image above. It does not look like ice, because the light coming off the two perspectives looks the same. A depth map carries no information about binocular rivalry and creates errors as a result. Errors that can’t be fixed. Herein lies the flaw in using a depth map: it throws away all of the binocular rivalry information. In other words, it throws away exactly what is different between the two perspectives.

In my opinion, depth maps take the life out of an image. They remove important texture information which, I believe, is gleaned from the way light shifts, changes, appears, and disappears as you alter your perspective.

This is the secret fundamental flaw with depth maps. You can subjectively look at the image above and deem it cool and otherwise amazing. That is all well and good, but the truth is that, compared with looking at the real ice, it is fundamentally lacking and does not depict what you see when you look at ice in real life.

So, people ask themselves if this is important and some will say yes and some will say no. And there are many examples where you could argue both points of view. I don’t have an argument with that. My position is only to point out that this flaw exists and it should not be ignored.


Filed under 1, 3D, 3D Photography, autostereoscopic, S3D, stereopsis, stereovision

A Copy Of A Recent Post To The CML-3D Newsgroup


If you don’t subscribe to the CML-3D newsgroup, you might find this post I made there interesting:

_________________________________________

If people are of the mindset that 3D is a visual effect, then it is easy to accept that conversion is a viable process, just as any other visual effect can be “created” with software and processing hardware. It is absolutely true that content can be processed and made viewable with the perception of depth, and that within the context of a visual effect the results can even be quite impressive and evoke ahhs and wows. 3D as a visual effect or add-on seems to be the primary interpretation as it is being used and implemented. It is viewed as a value-add and something which enhances the viewing experience. Marketing even fosters this idea about 3D, and attempts have been made to jack it up even more with 4D and 5D. Pretty soon you might see D-Max and Awesome-D. Maybe Beyond-D and infinity-D.

In my opinion, motion pictures and still photographs are interpreted referentially. Since the beginning, I am not aware of anyone who has looked at a photograph or motion picture and become confused about whether or not what they are looking at is real. We all understand that the images we are looking at are references to reality. They occupy a different space than we do and require interpretation. Storytelling, by definition, refers to something in the past. A story is something to interpret and relate to based upon life’s experiences. The more we relate to a story or image, typically, the better we like it. Perhaps it embodies our fantasies or teaches us something that we value. Sometimes there are surprises as the story unfolds, and certainly we enjoy humor.

My point is that, given the status quo, 3D is something that producers are trying to fit into the existing paradigm as an enhancement. But stereovision is something that most of us use every day to perceive reality. To experience life as it happens, to be in the moment, and to occupy the same space as the things that we are looking at. This physicality or realness has a much tighter relationship with our emotional self, because real things can affect us directly. The potential of 3D imagery to be transcendent and blur the boundary between referential and experiential is something that I find intriguing. As long as we have a one-size-fits-all approach to 3D, I think it will be very difficult to make this transition. I don’t think artificial stereoscopic constructs can do this effectively, because there are too many perception conflicts and specular errors. Binocular rivalry is something that gives our brain information it needs to construct texture information, among other things. And make no mistake, it is our brain that creates an image with the depiction of depth and space, based upon our human experience up to that point.

Creating dimensional space from flat imagery is not the same as capturing two unique perspectives and the way the light enters each lens and imaging sensor through separate and distinct pathways. In many cases the creation can be quite good – but in many cases it can be quite bad. Take snow on a sunny day, for example, or a waterfall with its infinitely complex optical distortions. How about heat rising from a fire? You can create imagery with the perception of depth from a single-perspective image, but it will not be an exact match to capturing two distinct perspectives. Having the attitude that it is “good enough” doesn’t resonate with me, in the same way that ophthalmologists saying “amblyopia is no big deal” is absolutely ridiculous. It is a very big deal.

As we continue to monkey around with how we depict depth and space, it is appropriate to think beyond the limits of referential storytelling. Depicting space and depth can take us to new places and experiences that engage emotionally in completely new ways. How we perceive the world and our place in it can change. As content producers, we can do better and, in doing better, make a profound difference. Take a peek outside the box of referential imagery and you’ll see a whole new world open up.

-Almont Green

G. Almont Green
Multi-Perspective Artist
Almont Studios
5 Grapevine Way
Medway, MA 02053
t. 508-533-0333 / c. 978-853-0084
AlmontGreen.com
Amped360.com
Amped3D.com
almontgreen.wordpress.com


Filed under 1

Perspective Interpolation – Specularity and Refraction Problems


So, how about converting 2D to 3D, or converting two-perspective 3D into multi-perspective autostereoscopic imagery? Technology certainly should easily make that possible, right?

The answer is a bit complicated, because for some images it is quite possible to achieve excellent results. Unfortunately, for many images and scenes it truly is impossible to create accurate 3D from 2D and/or to interpolate additional perspectives for autostereoscopic displays.

Case in point? Look at the animation below:

In the background painting there are tiny bits of highly reflective particles embedded in the oil paint. These particles reflect bright points of light depending upon the perspective. They “come on” quickly as you change perspective because of paint occlusion: you see them in one eye but not the other. Any program that interpolates views would not know what to do with a picture like this. Morph the dots of light? In real life they don’t morph; they pop on, the light brightening as the perspective angle changes.
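To make the morph-versus-pop-on distinction concrete, here is a toy numerical comparison in Python. The intensity values, onset position, and transition width are invented for illustration only; what matters is the shape of the two curves.

```python
import numpy as np

# A glint that is occluded in the left view (intensity 0) and fully
# visible in the right view (intensity 255).
left, right = 0.0, 255.0

views = np.linspace(0.0, 1.0, 9)        # interpolated viewing positions

# What a pixel-morphing interpolator produces: a smooth cross-fade.
morphed = (1.0 - views) * left + views * right

# A crude stand-in for what actually happens: the glint stays hidden until
# the occluding paint edge clears, then switches on over a narrow range.
onset, width = 0.6, 0.1                 # invented occlusion geometry
popped = np.clip((views - onset) / width, 0.0, 1.0) * right

for v, m, p in zip(views, morphed, popped):
    print(f"view {v:.2f}:  morph {m:6.1f}   pop-on {p:6.1f}")
```

A view interpolator that only warps or cross-fades produces the smooth “morph” column; the real glint behaves more like the abrupt “pop-on” column.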

Now, take a close look at the glass gems. Notice how their specularity is influenced by the perspective position relative to the background? Notice the refraction as you see the background through the transparent glass? Unless you modeled the gems in a 3D program and rendered them, there would be no way for a pixel-warping program to interpolate what is going on with the look and texture of these gems as the perspective changes.

What typically happens with a conversion is an abysmal mess for items with specularity and refraction. It looks 3D, for sure – but it is in no way representative of reality. And this is the conundrum: there is no uniformity or consistency with regard to 2D-to-3D conversions or two-perspective-to-multi-perspective conversions. The results are completely content-based and depend on the subject matter.

Binocular disparity and, as this example demonstrates, binocular rivalry – where one perspective contains elements not visible in the other – create monumental problems for conversion.

The solution? Shoot multiple perspectives. This is the path I have been forced to take to create consistent and uniform results. Indeed, in my humble opinion, fewer than 10 perspectives does not yield quality, uniform results. Can fewer than 10 perspectives work? The answer is yes, if what you are photographing has no specular or refractive properties and the texture is smooth and uniform. But as an artist I find that restriction way too limiting, and I live in a world that consists mostly of refractive material (water), glass, gems, and metals. Indeed, just look around – the world is filled with specular and refractive content.

Even portraits pose a problem, because unless the person has extremely dry eyes, their eyes glisten as the moisture that coats them creates specularity and refraction. Of course, if you don’t shoot a close-up, or if you reduce the resolution, it isn’t that noticeable. But here again, as an artist I find that too limiting.

I do not understand people’s willingness to ignore these problems. It is true that in many cases specularity and refraction are subtle and nuanced. But given that 3D mimics the way we see real life, shouldn’t 3D be subtle and nuanced too? Perhaps the gross, over-emphasized, poke-you-in-the-eye effects are doing the potential of 3D a disservice?

That’s my view. But what do I know?


Filed under 3D, 3D Photography, autostereoscopic, S3D, stereopsis