Recently, while watching people at tradeshows look at my multiperspective photographs, I noticed that a couple of people (two out of a few hundred) complained and appeared confused by what they saw; they didn’t want to get too close to the photographs. I was able to talk with one person about it, and she said that the picture moved around and she couldn’t “see” it right. She said, “I just can’t look at 3D stuff.” Further discussion revealed that she didn’t think there was anything wrong with her vision – there was just something inherently wrong with “those 3D movies and pictures”. She dismissed the fact that others could easily see 3D, implying that there was something wrong with them. I wanted to videotape her response to my questions but she adamantly refused that request!
I was fascinated. And I thought back to discussions I’ve had with various professionals about the power and bias of human perception. I felt this woman could be an example of how rooted or vested a person can be in how they perceive the world — to the exclusion of contradictory evidence. She had no desire to even consider that something could be wrong with her – it was the 3D stuff that was wrong.
I wonder why? Was it just her personality or something more primal? Is what we see and how we see given such prominence by the brain that it controls our thoughts about the world around us? If that is true, then virtual reality, 3D immersive games, and extended viewing of stereoscopic content could gradually change how people who engage in those activities for extended periods perceive the world. It also implies that there could be a lot of people out there with vision problems who never seek treatment because they don’t believe anything is wrong with them.
I’ve come to learn that stereovision tests are not given a very high priority by pediatricians and ophthalmologists. If it is true that stereovision is a major contributor to how humans perceive the world they live in, then that is not a trivial medical omission. Perhaps simple autostereoscopic photographs might become a useful tool for medical people to become aware of a “red flag” in terms of a lack of good stereopsis or poor convergence or…? I’m very keen to learn more and I’ve asked Dominick M. Maino, OD, MEd, FAAO, FCOVD-A to consider some of my ideas and evaluate and help me experiment with autostereoscopic multiperspective photographs for this purpose. I hope to have some samples for him in May and also some additional anecdotal observations I continue to make while watching people look at my 3D photographs. With their multiple perspectives and dynamic attributes (when you move closer to or farther away from them), it seems to me that they might have great potential for providing evaluation information in an informal exam setting. I have zero medical background and will rely on the advice and guidance of professionals.
I hope that many vision professionals out there will help me to understand this better and to perhaps create some useful and cost effective tools for vision evaluation.
This footage was shot with my iPhone and composited into a high-definition movie using Adobe After Effects. No special image processing or enhancements were done, and I am very impressed with the quality of the iPhone! I just captured what people said with no preparation – just their off-the-cuff remarks.
Come visit us at the “It’s a baby & family expo” at the Boston Bayside Expo Center this weekend (10AM – 5PM on Saturday and 11AM – 5PM on Sunday, March 20-21, 2010), where we will be showing our latest multiperspective 3D life-size photographs of adorable babies. We will have many show specials and special discounts for attendees. Info on the show is available at www.itsababyexpo.com.
Yesterday, we did a very small baby show event at a hotel as a trial run to gauge the level of enthusiasm for our photographic technology. We were very pleased with how people reacted to our special photographs. Indeed, the level of excitement was so high that I was convinced we needed to exhibit at a larger show and therefore booked the expo in Boston.
As part of the expo in Boston, we will be asking people to explain what they see on our webcam. We will edit and upload the video to our website after the show for people to watch and vote for the best explanation of what they see. Surprisingly, I’ve found a broad range of reactions to the baby photos and think it will be very interesting to evaluate all of the responses in a tradeshow setting.
If you have ideas for other questions please comment below. So far, I plan to only ask people to describe what they see when they look at the photograph. Here is an example of what someone said at the show event yesterday:
I expect that not everyone will have a glowing reaction to the photographs. I’m really keen to hear what stereoblind people have to say about them and also people who perhaps have some vision difficulties with regards to fusion. Perhaps some just might not like them because they are 3D and don’t think photos should be in 3D. Reminds me of people who argued that movies shouldn’t have sound. Somewhere there is always a critic with a different point of view. Surprisingly, to date everyone has had positive things to say.
I’m looking forward to your comments regarding questions I should ask. Thanks!
I came across this link:
3D Hurts Your Brain
to a “news” article titled “Science Proves 3-D Movies Hurt Your Brain” and am compelled to comment. How they go from the UC Berkeley study, which says 3D movies CAN cause eyestrain along with headaches, to a headline that shouts 3D MOVIES HURT YOUR BRAIN is nutty journalism. It is as if “3D movies” were all one thing. That just isn’t the case at all.
If I told you to hold a pencil six inches away from your nose and stare at the eraser for an hour — guess what? That could cause eyestrain. And if a 3D movie has copious amounts of negative parallax (stuff coming out of the screen) and your eyes are trying to focus on the screen plane but converge much closer, then YES, it causes many people to experience eyestrain. If the scene has a lot of camera shake for those crash-and-burn scenes, your binocular vision will tell your brain you are moving but your inner ear will tell it you are sitting still. Conflicts of this nature create motion sickness. It is no different from spinning on a merry-go-round, where your inner ear receives information that conflicts with your other senses as you spin around.
It would have been more productive for the article to state that “poorly implemented 3D camera work can cause eyestrain and motion sickness in the cinema – especially if you sit close to the screen”. Of course, that wouldn’t be a sensational headline. Accurate perhaps, but not sensational.
I think if any alarm bells should be going off, it shouldn’t be about movies. It should be about video games, where kids spend hours and hours in front of a computer monitor – soon to be a 3D monitor – where perceptual learning and development can be affected. Again, use the stare-at-the-pencil-eraser example. Any atypical eye focus and direction for prolonged periods can’t be good. It is a type of vision therapy which is different from normal “view the world” seeing. Stare at a pencil eraser for hours at a time, day after day, and it doesn’t take a brain surgeon to figure out that you might create problems looking at things in the distance. You are “teaching” your eye/brain to constantly focus on a pencil eraser 6″ from your nose, creating a preference for that type of viewing. The eye/brain connection in this case would merely be trying to adapt to a vision requirement.
In my opinion, 3D content will continue to improve and people will come to understand there is a difference between quality 3D and poor 3D. Avatar was certainly much better technically than any other 3D movie to date. Can it be better? Yes! Could it be worse? Yes! There are many new 3D technologies currently in the lab that will be amazing when they eventually come to market in the next five to ten years. Unfortunately, right now there is a mad rush to capitalize on Avatar’s success, and all sorts of companies are popping up claiming to do 2D-to-3D movie conversions. That means there is going to be a lot of 3D crap coming to market.
3D is art and science, not a commodity or ingredient to be added on demand.
High dynamic range, or HDR, photography extends the number of luminance values in a photograph. What this means in layman’s terms is that dark parts of an image still retain detail instead of turning into a black blob of ink, and bright parts of an image don’t blast to white. This is essential for 3D multiperspective photography because the most compelling photographs present to the eyes the same way real life does. Our eyes adapt instantly to changing light conditions, and we see an amazing tonal range that extends far beyond a regular photograph with standard dynamic range. We see many subtle shades of darkness, and white sand in the bright sun still has visible grain detail, although we might have to squint. The dynamic range of our eyes greatly exceeds the capability of cameras – both film and digital.
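The clipping behind those blown highlights and crushed shadows can be sketched in a few lines of Python. This is a toy model, not how any particular camera works: scene luminances spanning a wide range get scaled by one exposure setting and then forced into the 0–255 range of an 8-bit pixel, destroying detail at both ends.

```python
# Toy illustration of why standard dynamic range clips detail.
# The scene values below are made-up relative luminances spanning
# a far wider range than an 8-bit pixel can hold.

scene = [0.2, 1.5, 40.0, 180.0, 900.0, 3000.0]

def to_8bit(lum, exposure):
    """Scale by an exposure factor, then clip to the 0-255 range of an 8-bit pixel."""
    v = round(lum * exposure)
    return max(0, min(255, v))

# With one fixed exposure, shadows crush toward 0 and highlights blast to 255:
print([to_8bit(l, 0.5) for l in scene])  # -> [0, 1, 20, 90, 255, 255]
```

The two darkest scene values become nearly indistinguishable and the two brightest become identical, which is exactly the “black blob” and “blast to white” effect described above.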
Take a look at this photograph as an example. Note how, in the standard-dynamic-range photo on the right, the black fabric turns to solid black and the hair ribbon blasts to white. A considerable amount of detail is lost, making it difficult or even impossible for some parts of the image to have clearly defined perspectives. (See the blue circle blow-up for detail.)
One might argue that HDR is better for flat single-perspective photographs as well. For some subjects that is true; however, oftentimes a photographer wants to simplify the photograph or add dramatic lighting, and these subtle changes in luminance (brightness) are less important. Also, with a flat photograph, more detail in the background can conflict with the main area of interest. Without dimensional depth to set it apart, too much detail can be undesirable in a flat single-perspective photo. With the example shown here, one could argue that the detail of the black fabric takes away slightly from the baby. But in 3D there is a world of difference that has to be seen to be appreciated. That extra dynamic range provides the detail in 3D to clearly position the baby in its space, and the stereovision perception is greatly enhanced. It looks far more realistic.
So, how do you get more dynamic range out of a camera? In my case, I have imaging sensors and special processing that extend the image data captured to 18 bits per color (red, green and blue)**. A regular camera with JPEG output is limited to 8 bits per color. Those extra bits I am able to obtain with my custom rig contain subtle changes in luminance levels that can be processed and printed to appear similar to the way the eyes would see them in real life.
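As a rough back-of-the-envelope on what those extra bits buy, here is a small sketch. The 8-bit JPEG figure is standard; the 16-bit RAW and 18-bit numbers are the ones cited in this post; the arithmetic is just powers of two.

```python
# How many distinct luminance levels each bit depth can represent per color channel.

def levels(bits):
    """Number of distinct values an unsigned integer of `bits` bits can hold."""
    return 2 ** bits

print(levels(8))    # 256 levels per color in a JPEG
print(levels(16))   # the regular Canon RAW figure cited below
print(levels(18))   # the extended capture described in this post

# Each extra bit doubles the number of distinguishable luminance steps,
# so an 18-bit capture resolves 2**10 = 1024 times more finely than 8-bit JPEG.
print(levels(18) // levels(8))  # -> 1024
```

Those finer steps are where the “subtle changes in luminance levels” live: gradations that an 8-bit pipeline would quantize into a single value.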
Another way (actually the way most people do it) to make an HDR photograph is to take three or more photographs in quick succession, using a bracketing camera option where the shutter speed is different for each photograph. Then, using special HDR software, you combine the different exposures into a single photograph with a broad tonal range. Sometimes this processing is very effective. For example, room interiors with windows look much more natural. Like anything, though, it can be overdone and create very unnatural-looking images. A big problem with this approach is that you are limited to shots that have no movement during the multiple exposures. In order for me to take action shots, I had to create a system that captures all of the data at once, at shutter speeds in the hundredths-of-a-second range.
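For the curious, the bracket-and-merge idea can be sketched like this. It is a deliberately minimal model assuming a linear camera response; real HDR software (for instance, tools implementing the Debevec-Malik method) also recovers the camera's actual response curve. The pixel values and shutter speeds below are made up for illustration.

```python
# Minimal sketch of merging bracketed exposures into one HDR value per pixel.
# Assumes a linear camera response, which real cameras do not have.

def weight(p):
    """Trust mid-tone pixels the most and clipped shadows/highlights the least
    (a simple hat function over the 0-255 range)."""
    return min(p, 255 - p) / 127.5

def merge_pixel(pixels, shutter_speeds):
    """Combine the same pixel from several exposures into one radiance estimate."""
    num = den = 0.0
    for p, t in zip(pixels, shutter_speeds):
        w = weight(p)
        num += w * (p / t)   # divide out exposure time to estimate relative radiance
        den += w
    return num / den if den else 0.0

# One pixel captured at three bracketed shutter speeds (1/200s, 1/50s, 1/12.5s):
print(merge_pixel([12, 48, 190], [0.005, 0.02, 0.08]))
```

This also makes the motion limitation obvious: `merge_pixel` assumes the same scene point landed on the same pixel in every exposure, which is exactly what a moving subject breaks.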
** A regular Canon camera RAW image has 16 bits per color. I do use Canon sensors, but I have been able to tweak things to get an extra couple of bits per color at the expense of error correction, which I must perform in a separate process on a computer. How I do it will remain a secret unless and until I am able to obtain a patent for the process. If you really have sharp eyes, you might also be able to detect that I’ve reduced color fringing and ringing around sharp luminance transitions. That’s another benefit of extra bits.