It sounds like a great idea to use a depth map to extract and control the depth depicted in an image or series of images. It sounds great for converting a 2D image into a 3D image. It sounds like a great tool for turning plenoptic camera data into imagery with depth. Alpha channels work well for transparency mapping, so a depth map should be equally useful, shouldn't it?
Take a look at this depth map:
This is a depth map created from a plenoptic camera shot of a pile of ice bits. It is a grayscale image whose 256 shades of gray encode which parts of the ice are closer to the camera and which are farther away. That information is used to convey depth by stretching or compressing pixels according to their distance.
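The stretching and compressing can be sketched as a simple forward warp: each pixel is shifted horizontally by a disparity derived from its depth value, so nearer pixels move more than farther ones. Here is a minimal sketch in Python with NumPy; the function name, the 8-bit "255 means nearest" convention, and the `max_shift` parameter are illustrative assumptions, not details from any particular plenoptic pipeline.

```python
import numpy as np

def shift_by_depth(image, depth, max_shift=4):
    """Forward-warp a grayscale image horizontally, with the shift of
    each pixel proportional to its 8-bit depth value (255 = nearest).
    Nearer pixels shift farther, producing motion parallax; positions
    no source pixel lands on are left as 0 (holes)."""
    h, w = image.shape
    out = np.zeros_like(image)
    # Scale 0..255 depth into 0..max_shift pixel disparities.
    disparity = (depth.astype(np.float32) / 255.0 * max_shift).round().astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

Rendering a sequence of such warps with the shift direction oscillating over time is one way to produce a "rocking" parallax animation from a single image and its depth map.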
Now check out a rocking animation that uses motion parallax to depict depth (items closer to you appear to move differently than items farther away).
Right away you can notice a few errors in the depth map; for complex images this is typical, and they can be edited and "corrected." But there is something else. Take a close look at the parts of the image where the depth map is seemingly correct. Sure, you can see the depth, but does it really look like ice? If you are like me, the answer is no.

Ice reflects and scatters light in a way that is unique to each perspective. Indeed, there IS binocular rivalry: one eye sees reflections and distortions that are not present in the other eye's view. This disparity tells us something about the texture and makeup of what we are looking at. Stretching or compressing pixels eliminates that information and provides only depth cues about the spatial position of things.

For most people, I suspect, this creates a perception conflict in the brain. Something is perceptually wrong with the image above: it does not look like ice, because the light coming off the two perspectives looks the same. A depth map carries no binocular rivalry information, and the errors that result cannot be fixed. Herein lies the flaw in using a depth map. It throws away all of the binocular rivalry information; in other words, it throws away everything that differs between the two perspectives.
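The information loss can be made concrete with a small experiment. The sketch below (hypothetical, in Python with NumPy) synthesizes a "left" and "right" view from one image plus a depth map by shifting pixels in opposite directions, then verifies that neither view contains any intensity absent from the source image. A genuine second perspective of ice would contain new, view-dependent highlights; a depth-map warp, by construction, cannot introduce them.

```python
import numpy as np

def warp(img, dep, direction, max_shift=3):
    """Shift each pixel horizontally by a depth-derived disparity.
    direction is -1 for the left-eye view, +1 for the right-eye view;
    unfilled positions (holes) are left as 0."""
    out = np.zeros_like(img)
    disp = (dep.astype(np.float32) / 255.0 * max_shift).round().astype(int)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            nx = x + direction * disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = img[y, x]
    return out

# Random stand-ins for a real image and depth map (values 1..255,
# so 0 only ever marks a hole).
rng = np.random.default_rng(0)
image = rng.integers(1, 256, size=(32, 32)).astype(np.uint8)
depth = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)

left, right = warp(image, depth, -1), warp(image, depth, +1)

# Every non-hole pixel in both views was copied from the source:
# no view-dependent intensity exists anywhere in either eye's image.
assert set(left.ravel()) - {0} <= set(image.ravel())
assert set(right.ravel()) - {0} <= set(image.ravel())
```

Both assertions pass for any image and depth map, because the warp only relocates existing pixels. That is the point: whatever differs between your two eyes when you look at real ice is exactly what this process can never reproduce.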
In my opinion, depth maps take the life out of an image. They remove important texture information which, I believe, is gleaned from how light shifts, changes, appears, and disappears as you alter perspective.
This is the hidden fundamental flaw of depth maps. You can subjectively look at the image above and deem it cool and otherwise amazing. That is all well and good, but the truth is that, compared with looking at the real ice, it is fundamentally lacking: it does not depict what you see when you look at the ice in real life.
So people will ask whether this matters; some will say yes and some will say no, and there are many examples where you could argue either point of view. I have no quarrel with that. My position is only to point out that this flaw exists and should not be ignored.