3D: Who is doing what?

Technology information and links will be posted here on topics such as plenoptic camera technology; micro-dot lenticular lenses (lenticular in this case is a misnomer); neuroscience research on 3D perception and what goes on in the brain; psychological research on reactions to 3D imagery; testing and evaluation of 3D photographic techniques and image enhancement/alteration; the effect of high dynamic range photography on 3D autostereoscopic imagery; and other new and interesting information related to the explosive growth in multidimensional imagery and presentation media.

Some people have asked me about holograms and whether that is what I’m doing. I am not doing holograms. They have far more limitations in how they can be created; live-action photography on location just wouldn’t be feasible. Check out this video from YouTube on how a hologram is made:

With regard to others taking professional live-action 3D photographs, there is someone in Australia with whom I have exchanged a brief series of emails. His name is Mark Ruff, and his work (based upon what I have seen on his website) appears likely to be impressive. He takes a different approach from mine, choosing to interpolate in-between image perspectives with fewer cameras; he typically uses a lower line-per-inch lenticular lens (do not interpret these comments as positive or negative, just different) and shoots portraits on green screen so as to composite backgrounds (again, not meant to be positive or negative, just different). His timeslice camera array is particularly impressive, as he has developed very sophisticated camera synchronization technology. Check out his website at http://www.timesplice.com.au

So far, he is the only other person besides myself that I have found doing serious professional multi-perspective live-action autostereoscopic photography. There are a few people using a single camera on a rail to take multi-perspective shots, and a few people doing some things with consumer-grade cameras, but I haven’t found any others doing live-action professional autostereoscopic work. If you know of them – or you are doing it – please let me know!

51 responses to “3D: Who is doing what?”

  1. You’d think there would be more live-action shooters out there. This software from Breeze Systems has been around for some time: http://www.breezesys.com/MultiCamera/index.htm.

  2. The Breeze Systems software does not provide high enough shutter-synchronization accuracy for multi-perspective photographs. It is designed for video capture effects à la the Matrix bullet-time effect.

    There are a few 5-camera and 9-camera people out there. But I am the only one with 12 cameras at 21 megapixels per camera. This is a very hard thing to achieve and to make work reliably and accurately. And the photography is only half the work. Matching it to a suitable high-resolution lenticular lens that results in a photographic-quality look (up close, and not from 20′ as most require) is extraordinarily difficult, and something that I have yet to see from anyone other than myself. It might be out there, but I haven’t seen it yet. It is very difficult to lay down 20 interlaced images onto a 150 line-per-inch lenticular lens. If someone has found an easy solution, I’d be keen to learn about it.
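The interlacing step mentioned here has a simple core, even if doing it at photographic quality is the hard part. Below is only a minimal illustrative sketch, not the author's production pipeline; the function name and parameters are invented for the example, and it assumes the print resolution divides evenly by the lens pitch (in practice it usually doesn't, which is exactly where the difficulty lies):

```python
import numpy as np

def interlace_views(views, lens_lpi, print_dpi, width_in):
    """Interleave N same-sized grayscale views (H x W arrays) into one
    strip for printing under a lenticular lens.

    Each lenticule covers print_dpi / lens_lpi printed columns; banding
    appears when that ratio is not an integer multiple of len(views).
    """
    n = len(views)
    cols_per_lenticule = print_dpi / lens_lpi   # e.g. 3000 dpi / 150 lpi = 20
    out_w = int(round(width_in * print_dpi))
    h, src_w = views[0].shape
    out = np.zeros((h, out_w), dtype=views[0].dtype)
    for x in range(out_w):
        # fractional position of this printed column under its lenticule
        frac = (x % cols_per_lenticule) / cols_per_lenticule
        v = n - 1 - min(int(frac * n), n - 1)   # strips laid down in reverse order
        # sample the chosen view at the matching horizontal position
        src_x = min(int(x / out_w * src_w), src_w - 1)
        out[:, x] = views[v][:, src_x]
    return out
```

A 150 lpi lens at a 3000 dpi print resolution leaves exactly 20 printed columns per lenticule, which is why 20 views is the natural count at that pitch.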

    • Jack Fleming


      I am Technical Director of LPC-World (which sells lenticular lens materials worldwide). I just read several of your blog posts and find your pursuit interesting, and your 3D artistic perceptions well thought out. There are perhaps a dozen people on earth who think deeply about such elements of 3D photography for the purpose of developing equipment and processes to capture realistic parallax frame series for lenticular prints with a ‘natural’ feeling (like looking through a window).

      I have been shooting 3D for over 24 years, beginning with a Nishika camera (hobby), then progressing to a 70mm Bronica mounted on an Autotrac (purchased from Lentec in the early 1990s), then 12-megapixel digital on the track system. I worked in the mid 90s with Dr. Shougun Pan using multi-lens cameras he designed and built (up to 24 lenses exposing a single contiguous film strip). Cameras were built both with and without convergence features, and we built and tested matching multi-lens projectors and liquid luminescent projection screens, with incredible autostereoscopic results. The problem with those systems was that convergence was required, which limited everything to studio sets with a fixed key plane. We also performed lenticular testing using my Hell scanner and proprietary interlacing software commissioned by Pan. We worked with scanning cameras that Pan designed and built based on Maurice Bonnet’s cameras, and one built by Mike Kay’s father (Bonnet’s used adjustable convergence and scan distance with a synchronized barrier grid on the film plane; the latter was a fixed mount with a horizontally moving aperture synchronized with a moving barrier grid on the film plane). The Bonnet camera could shoot any scene, while the Kay camera had limitations suiting it only to studio work.

      I have a friend in London who owns 3 of the 6 Bonnet prism cameras ever built (a single lens about 16 inches in diameter with 24 erecting prisms in between the lens elements, exposing film through a lenticular lens in vacuum contact with E6 transparency film). That camera produced the most spectacular lenticular images I’ve ever seen, but again the camera was built with fixed focus and vergence, so it is truly limited to studio work at a fixed distance to the ‘key’.

      The Bonnet scanning cameras were used by Bonnet Studios in Paris, Harvey Prever, Hesse Studios (and many others), and are capable of shooting any kind of scene, but camera setup and scan/exposure time present challenges. Dr. Pan built a new scanning camera system in 1996, and I worked with it for over a year trying to bring together the materials and operational procedures that would have enabled commercial sales to portrait studios. It was abandoned when Pantech closed shop amidst financial arguments with Chinese investors.

      I have photos of several historical multi-lens cameras – including a 13-lens Topan camera, several other 35 and 70 mm cameras, and the lens from a Bonnet prism camera – which I’m willing to share with you if you are interested. Just send me an email.

      I also think you don’t need to use 150 LPI lens sheets to get a spectacular result. If you want to view beautiful 3D images as close as 12 inches, then 100 LPI 3D lens is more than sufficient for anyone except the 6 people alive who insist on something ‘more perfect’ (I know two of them). 75 LPI 3D lens is also exceptional, and provides greater viewing distance/range capabilities. The coarser the lens, the greater the viewing distance capability; unfortunately we cannot defeat the physics of this and settle on only a couple of lenses to choose from. The desired viewing distance dictates the lens choice.
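The distance/pitch trade-off described here has a simple geometric core. As an illustration only (the viewing-angle and view-count numbers below are assumptions for the example, not taken from any particular lens datasheet), one can estimate how many interlaced views fall between a viewer's two eyes at a given distance; that number is what makes a lens work or fail at a given range:

```python
import math

def views_between_eyes(viewing_angle_deg, n_views, distance_mm, eye_sep_mm=65.0):
    """Estimate how many interlaced views separate a viewer's two eyes.

    Each view occupies viewing_angle_deg / n_views degrees of the lens's
    angular fan-out; the eyes subtend 2*atan(eye_sep / (2*distance)) at
    the print. Stereo needs at least ~1 view between the eyes; many more
    increases apparent depth but also cross-talk.
    """
    per_view_deg = viewing_angle_deg / n_views
    eye_angle_deg = math.degrees(2 * math.atan(eye_sep_mm / (2.0 * distance_mm)))
    return eye_angle_deg / per_view_deg

# illustrative: a 25-degree lens carrying 20 views, viewed from 600 mm,
# puts roughly 5 views between the eyes
```

Moving further away shrinks the angle the eyes subtend, which is why a coarser, wider-angle lens tolerates a larger range of viewing distances.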

      I’m willing to share some of my experience stitching together parallax frames and interlacing, to help you with the difficulties of using 36 or more frames. By the way, I assume you mount your 12-camera rig on a track to generate sequential tweeners? There is also software available from HumanEyes, which has essentially an ‘optical character recognition’-type algorithm built in to automatically align and stitch together frames, then provides adjustability of the convergence and parallax disparity (if you shoot enough frames to use the ‘depth’ adjustment feature). It used to be very expensive, but I think they now sell a module for that for under $400. I’ve also got a great free interlacing program available for anyone who needs one, without pixel width or size limitations (SuperFlip).

      Write any time, I’ve got a lot to share.

    • Dear Almont, I have been doing this for the last 18 years now. Synchronised cameras via film, as it’s still the easiest way to achieve it, unless you invest in a hugely expensive rig like you’re doing. Hence I still run a Dainippon drum scanner and have my own E6 processing line. I use Kodak LVTs and a LightJet 5000 for output. I photographed the dam sluice gates opening at Cardiff Bay in 2001 on a multi-lens film camera, and a live rugby match in the Millennium Stadium. My statues at the Stadio Olimpico were done with a single Hasselblad taken at 3 positions, then morphing techniques gave accurate perspective in-betweeners to 30 frames. The technique I developed took a year to get really good quality, but I seldom use it now. Most of my work is now printing other people’s work – animations and 3D, artists’ editions – and I use the excellent system by SoliDDD, which is equal, if not superior, to any of Harvey Prever’s Bonnet shots, assuming you can get the digital capture information into the interlace. Do you know of Les Nakashima? He had a Bonnet camera built from scratch, which he sold/gave to me. Sadly, it all came during the financial crash of 2008 and I wasn’t able to bring it to the UK, nor, due to its size, to find space in my limited studio at the time. Now I can! But there you go. 11 x 14 inch film has around 4 gigabytes of useful capture resolution, so that is one of the reasons why the Bonnet camera system was so good – give or take a bit on the optics, of course. Nakashima does some excellent lenticular photography. Yes, I have seen David Burder’s Bonnet lenses. They are beautiful things.

  3. David Burder, of 3DImages.com, London here. Yes – Jack Fleming certainly is a true expert who can help you. I still own those prism lenticular cameras. They take 8×10 inch sheet film; I have loads of that to rehome too. I also have other multi-lens lenticular cameras that I will be advertising for sale. These include 8-, 9-, 11- and 24-lens cameras! All film, not digital. And a couple of other 8×10 inch scanning cameras! I also have half a tonne of lenticular lenses to clear out. Keep in touch. David

    • Patty Ludwig

      Does anyone have any E6 transparency film, colour, in 8 x 10 or 11 x 14? We are also interested in any lenticular lenses to use with prints shot by Harvey Prever, of Paul Hesse Studios, with the Bonnet camera.

      • Hi Patty,

        I was in LA about a year ago and tried to contact you, but could not. If you have a moment I’d be happy to hear from you.


      • Hi Patty, I work with John VL and can possibly help with your pursuit also. The first thing to know is that there are several types of Bonnet camera, including scanning cameras and the prism lens camera. Some of the scanning cameras use a moving barrier grid synchronized with the horizontal camera movement, and I believe he also built some that exposed film through a lenticular lens. The prism camera exposes through a lenticular lens at about 62 lpi pitch. The challenge is to develop a method of reproducing the Prever/Bonnet images that allows scaling to match the pitch of currently available lenses, because no lens matches the pitch of the images exactly except the lens used in the exposure itself.

        I have a Linotype-Hell S-3900 scanner in storage, which will possibly be moved and re-activated in early 2014 for the purpose of testing and developing the capability required to do such reproduction. I have two original Bonnet prism camera 8 x 10 transparencies, plus another from a scanning camera via barrier screen, for testing, so if you stay in touch I’ll keep you informed of progress.

    • Hi, anyone who wants to get into playing with lenticulars: I am clearing out my old warehouse after over 25 years of lenticular making – about one tonne of lenticular lenses, new but maybe dirty, dusty, some scratched, thin and thick, plus lots of spiders. £1,500 the lot, as is. No offers. You take everything. You’ll need a lorry, and a day to load up.
      You’ll need a small lock-up garage to store it! Or break it up and sell it on.
      It includes a sealed pallet of 1,000 brand-new wrapped thin lenses, about 20×24 inches, about 80 lpi, about half a mm thick. Buyer to collect from a London lock-up garage. DavidBurder@googlemail.com.

  4. I recently purchased a Bonnet prism lens that was used by Paul Hesse and Harvey Prever, built I believe in 1953, and wish to correct an inaccuracy in my previous post. The lens elements measure 12.625″ in width, which would have been about 13 or 14 inches in diameter before being cut rectangular to fit into the camera housing, and there are 20 prisms (not 24). I’m now working to engineer a method and apparatus to condense the image at the film plane for capture with a 70 mm digital camera back with a micro-lenticular screen. Loads of fun and a real challenge!

    • Any updates, Jack?
      I recently purchased and am modifying 36 24.3-megapixel cameras with a precision shutter-release system accurate to 1/2000th of a second, to shoot large-format lenticulars printed at a true 2800 dpi for interlaced-line accuracy at 1400 interlaced lines per inch (verified under microscope), laminated to 39+ lpi cast lens material, finished size 24″ x 36″. I am working in the computer at 1:1 size, pixel to inkjet dot(s), with quite interesting results. Huge files! I’m using a proprietary vector-based scaling algorithm to scale the image and interlace at a 1:1 ratio. Currently experimenting with interlacing ink colors (CMYK+) and interpolating 1 in-between view per camera for 72 views. Much experimenting going on… I showed one image to Tom Saville at Big 3D in Fresno, CA and he said: “Your work is amazing.” My goal is 144 perspectives!
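The numbers quoted here are internally consistent, and the bound is easy to check: at one printed dot per view, the printer dots available under a single lenticule cap the number of distinct views. A quick sanity check (the helper name is mine, the figures are the ones quoted above):

```python
def max_views_per_lenticule(print_dpi, lens_lpi):
    """Printer dots under one lenticule: the upper bound on the number
    of distinct views at one printed dot per view."""
    return print_dpi / lens_lpi

# 2800 dpi over a ~39 lpi lens leaves ~72 dots per lenticule,
# matching the 72 views (36 cameras + 1 tween each) mentioned above
```

By the same arithmetic, the stated goal of 144 perspectives would need either roughly double the print resolution or a lens of roughly half the pitch.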

  5. Almont,
    You are certainly pushing the envelope beyond what anyone in the industry is doing, and I admire your dedication to ‘supreme quality’. Henry Clement in Paris has been producing large format for many years, typically from 40 parallax frames, output on a Durst Lambda and a Cymbolic Sciences system, and his quality keeps him busy selling high-profile jobs. Perhaps he’s changed output equipment since our last communication. He’s also doing some interesting software/hardware development; his website is worth investigating, and perhaps email him directly.

    I’m interested to learn how you generate ‘tweeners’.

    Early this year I advised Richard Kendall as he was working on some projects in China, shooting large lenticular scenes for Range Rover using a huge time-slice rig with 96 cameras. The obvious problem is the distance between cameras and the lens selection needed to capture proper parallax, based on the scene and the distance from the film plane. He bought 96 telephoto lenses to enable the camera distance and depth of field required for the job. He wanted to buy zoom lenses to allow variability, which I “strongly” recommended against, because there is no way to set all the lenses to exactly the same scale so that all frames fit each other perfectly, and scaling in Photoshop would be a nightmare. The final results were reported to be very good, but I believe the juice wasn’t worth the squeeze.

    Some folks in Russia developed interesting software to generate tweeners, but it seemed very difficult to use, and I have not had time to investigate it more deeply. They also developed software called Multi-Stereobase, which seems useful for determining parallax separation when using a camera track: you input camera/lens and scene parameters and it recommends proper settings.

    I’ve pondered a marriage of both worlds: construct a very large camera track to hold multiple high-speed cameras, and use it to shoot perhaps 4 or 5 frames per camera within 0.5 second as the cameras move, thereby creating the number of frames desired with proper separation. Perhaps this would be your best solution?

    • I use proprietary software to assign vectors and then move each region half the distance between the two point locations. Of course, tens of thousands of vectors are identified and it is time-intensive. This method is not stellar, and there are always errors which slow things down. Regarding Russian software, are you talking about StereoMorpher? That is a pretty good program that I have experimented with. Still, generating more than one view per perspective is not something I’m very impressed with for many types of subject matter, like water and other optically transforming elements (reflective metals and other specular reflections that generate binocular rivalry – think snow on a sunny day). Given the quality of the 36 cameras spaced three inches apart with telephoto lenses, I get pretty smooth perspective-to-perspective shift up to about 9″ of true perceived depth before crosstalk becomes noticeable. With approximately 40 LPI lens material I think 36 perspectives is a good compromise for most things I am doing. The biggest thing I’m excited about is the shutter synchronization I have achieved: 1/2000th of a second across 36 cameras! This really offers amazing creative possibilities to suspend particulates within an image and have very true, authentic realism, with proper size and depth depicted in a high-resolution scene (up to about 9 inches, anyway). Plus, I’ve streamlined a lot of things and hope to dramatically lower costs over the coming days. That is going to be the key to my success, and I’m getting much closer to a very big announcement.
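The half-distance vector idea reads, in outline, like a forward warp along correspondence vectors. The sketch below is only the naive core of that step, not the proprietary software described above; it assumes a dense correspondence field is already available, and it reproduces the characteristic failure mode mentioned: holes at occlusions that then need manual cleanup.

```python
import numpy as np

def midpoint_view(view_a, flow):
    """Naive forward warp: move each pixel of view_a half the distance
    along its correspondence vector toward the neighbouring view.

    view_a: H x W image (grayscale for simplicity)
    flow:   H x W x 2 array of (dy, dx) vectors from view_a to view_b
    Unfilled pixels stay zero -- the occlusion 'errors' that make this
    method slow to clean up in practice.
    """
    h, w = view_a.shape[:2]
    mid = np.zeros_like(view_a)
    ys, xs = np.mgrid[0:h, 0:w]
    # target coordinates after moving half of each correspondence vector
    ty = np.clip(np.round(ys + flow[..., 0] / 2).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + flow[..., 1] / 2).astype(int), 0, w - 1)
    mid[ty, tx] = view_a[ys, xs]
    return mid
```

With many vectors per region rather than one per pixel, the bookkeeping grows into the "tens of thousands of vectors" described, but the half-distance displacement is the same.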

      Thanks for taking the time to post here. By the way, I looked at some Martin Hausler Raytrix camera images at Big 3D. Very interesting! Not for me, but I see how the tech has possibilities once more resolution becomes available (think 400 megapixel sensor).


  7. Pingback: 4 best 3D photo blog pages | World of The X

  8. One of the many great articles here! I always wondered which 3D technology is the best one, and I still think it is very hard to decide. There are many great new 3D technologies, but at the end of the day many people are just not able to use them, or they just don’t own the right piece of hardware. So maybe good old anaglyph is a good choice, as almost anybody can see it.

  9. I also take multi-cam glasses-free photos to be printed glasses-free. See Illuminessence.co.UK & 3dfineartnude.com. As I do motorsport, wildlife & fine-art nude I can’t use a rail, & all 7 cameras are synched to <1 millisec.

    Glad to find another advocate!


    • Nice! Thanks for posting the info. What are you using for cameras? Are you interpolating views from the seven “prime” perspectives? What size prints are you offering? In other words, tell us more – and leave off the hyperbole ;^) I encourage honesty and sharing the difficulties involved in producing quality multi-perspective work. Have you done any work with integral lens material yet? I’m close to making the plunge into plenoptic (with the next sensor upgrade, to 200 megapixels). Hopefully sometime next year.

      • 7x Canon SX95is, tested to an average of 0.29 msec between all 7. I shoot all 7 to ensure the parallax is enough, then find the 1/30 pair, e.g. 1 & 6, then interpolate as many ‘tweens’ as I need using Twixtor Pro (RE:Vision) http://www.revisionfx.com/products/twixtor/ in After Effects. I’ve tried YUVSoft, Timewarp, Kronos (The Foundry, in NukeX), etc. RE:Vision also do auto colour- & exposure-matching plug-ins, so I end up with nearly perfect results. I had an 8x 450D rig but it was so heavy & unwieldy I sold it. I need to be able to pick mine up with one hand & walk around, as I use it for wildlife, etc. Twixtor’s ability to create a vector for each pixel, track points for fiddly subjects (sharp edges with occlusion, etc.) & use mattes gives it 10/10 for me, as it can even cope with the detail round the feathers of a hawk in flight! If you want to see & hear my Royal Photo Soc talk (attended by the great David Burder) then click here: http://www.holography.co.uk/WW/WillWatling.htm

        I print the finished images up to 1 m x 0.7 m at 20 LPI 3D, but most are A3+ at 40 LPI 3D Microlens.

        Plenoptic cameras are definitely the future for portable rigs, & Adobe’s already completed a plenoptic-format file reader & post plug-in – see http://youtu.be/lcwm4yaom4w The Lytro’s quite fun, but hopeless to use with its small touch screen. However, even the mighty http://www.raytrix.de/index.php/r29-273.html doesn’t solve the problem of unlimited disparity, as they still have limited parallax. My rig has thumb-tightened brackets for each camera, so I can change the IOD from 6.5 cm to 15 cm between each shot if I want. I.e. I photograph a beautiful classic car from 5 m away with a relatively shallow background using 6.5 cm IOD, then the next shot might be a steam train coming into a platform, e.g. http://www.illuminessence.co.uk/page60.html with 10 cm IOD.

        I therefore find the 7x cam rig more than adequate for 90% of occasions & choose the # of views & disparity settings afterwards to fit the LPI & size of screen. I also format 8-tile multiview files for my 42″ Newsight autostereoscopic screen, which still amazes people. It’s great for computer-controlled slide shows in galleries, as people can see lots of different photos in glasses-free 3D without me needing to be there, then order the ones they want as lenticulars 🙂 I got lucky & picked it up for £99!

        Hope that helps.

  10. I have plenty of 8×10 and 11×14 inch Bonnet lenses here, plus 3 Bonnet cameras and images. Also a freezer full of 8×10 inch E6 transparency film, frozen, outdated. I still have a lock-up garage containing over one ton of lenticular screens for sale as a job lot. DavidBurder@googlemail.com

  11. Patty

    Sandy, Harvey Prever’s daughter, can be contacted at harveyprever@icloud.com. We are interested in getting Harvey’s transparencies out of storage and putting together a collection of his 3D work. We have much of his 3D studio equipment in storage. We would be interested in E6 transparency film, the lenticular screens, new ways to scan his original images, etc. Patty

    • I own a Hell S-3900 drum scanner capable of up to 18,540 LPI scan resolution, which I calculate is sufficient to scan originals and capture all of the parallax detail. The challenge is to devise a repeatable process for scanning, scaling, and output to fit the pitch of various available lens materials, as I hope to be able to enlarge to fit 10 and 20 LPI 3D lens for poster size reproductions, as well as a close pitch match to the original Bonnet lens (perhaps 60 LPI 3D). I plan to get the scanner out of storage before the snow flies this year, and begin testing using a few originals I have from a Bonnet Prism camera and another from a barrier screen scanning camera.
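The sufficiency claim is easy to sanity-check: what matters is how many scanner samples land under each lenticule of the original. A quick back-of-envelope (the ~62 lpi figure for the Bonnet prism lens comes from the comments above; the helper name is mine):

```python
def samples_per_lenticule(scan_ppi, original_lpi):
    """Scanner samples captured under one lenticule of the original;
    every parallax strip inside the lenticule needs several samples."""
    return scan_ppi / original_lpi

# 18,540 ppi over a ~62 lpi Bonnet original -> ~299 samples per
# lenticule, i.e. ample resolution for each embedded parallax strip
```

The harder problem, as described, is not the sampling but the resampling: rescaling the scanned interlace to the pitch of a modern target lens without introducing moiré.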

      • Good luck with that! By the way, I’m putting together a traveling road show for next year of themed multi-perspective imagery (large format, high end). Let me know if you have any interest in participating with content. Images would be on display AND there would be images for sale. I’m really looking for “edgy”, “creepy”, “scary”, etc. For the most part, “real” as opposed to abstract. All persons interested should email me for more information at ag [at] almontgreen [dot] com

  12. Albert Maly-Motta

    I just had a fascinating hour with this blog! I’m far from having read all of it, but here I find people who have serious thoughts about 3D.
    I started 3D photography as a youngster in 1974. Since then I have done 3D sequential photography with one camera, then progressed to the Stereo Realist and a custom-made full-35mm-format 3D still camera.
    Lenticular has always been a fascination, and I have developed a stereo adapter system using a lenticular screen in front of an LCD monitor. This led me to shoot lenticular material with a slider system moving the camera in a circular path.
    I was spellbound by Maurice Bonnet’s lenticular images, which are the finest I have seen so far. The photo museum in Chalon-sur-Saône in France has the original Bonnet cameras, and I have been allowed to examine them in the archives, as well as to see some pictures in their original form with and without the lens screen attached.
    A similar quality has simply not been achieved in the digital era. Bonnet used super-high-resolution Ektachrome flat film and a CONTINUOUS process, where he moved the camera with the shutter open and displaced or tilted the master lenticular array in front of the film. This gave him an almost unlimited number of views. And he used the original reversal film, so he wasn’t sending the images through the bottleneck of a printer!
    His other camera, for taking instantaneous pictures, used a prism array to reverse the perspectives so that he would not wind up with a pseudoscopic image. All these things were thought out with incredible precision, as were his engraving machines for his custom lens sheets. I think his work needs a lot of research, as there might be hidden gems to be discovered here.
    Yesterday I went to a 3D congress here in Munich, and everybody lamented the lack of punch, or the decline that seems to indicate 3D is coming to a low point again. What struck me was the fact that there has been no fundamental progress in display and recording technology in recent years. Screens get higher resolution, but we still have the basic tricks that were developed a hundred years ago.
    Your AMPED process has rediscovered the fact that lenticular works best in close-ups with little or no magnification and a limited depth range. For landscape shots, or anything with extended depth, lenticular just doesn’t make it. This is because your infinity points are too close together, and you can’t move the images further apart before you start overwriting the other images or going under neighbouring lenses. No matter what you do, ALL your images are on the same substrate and will start to get in each other’s way.
    With a 2-view slide you have different substrates for each eye, and you can freely move your close and infinity points. I agree it’s certainly a great process for tattoos, because it shows the roundness of the curved body parts the images sit on.
    I have tried interpolation to get in-between images – for instance, to get 18 views out of a 9-view shot. For this I used MotionPerfect, a slow-motion tool. I simply made an AVI movie of my 9 views, which could then be stabilized in After Effects to get a different convergence point, and set the slow motion to a factor of 2, which gave me 18 views. In direct comparison, this shows that the increased number of views gives a much smoother transition between views and also an anti-aliasing effect. The perceived resolution of the image does not change on the monitor. On a printed view this might be different.
    There is a lot to discuss and I would like to participate!
    All the best

    Albert Maly-Motta

    • Wow, thanks for that contribution. Yes, there is a lot to discuss, and thanks for participating. I was about to write an article about different mindsets with regard to imaging – for example, trying to accurately document something vs. creating an expressive storytelling image. These are all well and good, but the heart of things is much deeper. It starts with a fundamental examination of the perception of space and the space between things. We see, and take it for granted that we see, the space between things (most of us, anyway). But the truth is we don’t see it in the way that we think we do. Photons in the real world interact with our receptors, and the receptors convert that into bits of data that travel through nerves and electrical signal pathways into processing units within the brain, which figure out ways to interpret the data and stream some of it to our unconscious mind, some of it to our conscious mind, and some to other places that go beyond my pay grade to describe. It is an interpretation that develops and changes as we age and gain experience. As our physiology changes. As our environment changes. As the things that we look at change. The interpretation of space as we perceive it has plasticity. It isn’t a fixed thing, as so many would want us to believe.

      My point here is to examine your last point about interpolating perspectives. This is essentially creating something that doesn’t exist and presenting it in a familiar way, so as to be rendered by our brain in a predictable way: tricking or fooling our receptors into seeing something familiar enough to be interpreted in a similar way to the real thing. This one issue – a single paragraph or sentence in a marketing piece – could represent a decade of study to determine all of the issues related to the perception of derived imagery and how it interacts with learned perceptions of real imagery. It raises the question “what is real?”, since what we see in our brain isn’t real. We use our perceptions as part of living our lives. If living our lives changes – say we end up living on a spaceship traveling for most of our lives – then an argument could be made that it is highly desirable to alter the data we send to our brain to perceive the world around us.

      The people who make proclamations about the right and wrong way to do things need to lighten up and think about imagery as art. Our visual perceptions are experiential, referential, emotional, fearful, joyful, anticipatory, etc., etc. There are as many definitions as there are ways to perceive the world around us. There is room for many approaches. And all imagery is an illusion that invokes a perception by the brain as to how it is interpreted. Everyone’s brain is separate, but also connected. We have tremendous empathy capabilities through mirror neurons. We can interpret what other people are thinking and even feel their pain under some circumstances.

      So, yes, there is a lot to discuss and to participate in. It is all so fascinating, and it presents a world of discovery that goes to places we can’t yet imagine. During a PBS documentary about Lewis and Clark, someone said the days of exploration are over. I couldn’t believe my ears. “Over?” They have barely begun! We have but scratched the surface in the slightest of ways. Multi-perspective imagery is a gateway to greater understanding of ourselves. Each time I look at a multi-perspective image I see something I didn’t see before. That’s because a multi-perspective image can be experienced in addition to being interpreted. That’s pretty magical.

  13. Dear Albert & Almont, the comments on parallax interpolation above are interesting, as I’ve also been developing techniques & using various tools to deliver controlled disparity. See ‘Timetravel’ on my website: http://www.illuminessence.co.uk/page57.html I’ve tried various tools – YUVSoft, NukeX from The Foundry – but eventually settled on Twixtor Pro in After Effects CS6. Its control with both tracking points & mattes is incredible, & I get consistently great results. I can now wander around with my 7x cam rig & photograph almost anything, as they’re all synched to <1 msec, then adjust disparity in post. I gave a lecture on it to the Royal Photo Soc in London in May. If you’d like, you can see the video online here: http://www.holography.co.uk/WW/WillWatling.htm

    I agree with Almont's comment that we've only just started to explore, & with 4K autostereoscopic displays now readily available, I'm hopeful we'll see another surge forward in the popularity of 3D. There's a lot more content around now than 3 years ago, when the old active/passive-glasses TVs came out, as, luckily, the movie studios have continued to produce 3D. Fingers crossed.


  14. Albert Maly-Motta

    Hi Almont,
    I have always felt that 3D is a frontier …something to be explored. Seeing the first auto stereo 3d images form on my banal laptop screen when I overlaid the lenticular, made me feel like a pioneer….something the first guys who saw a tv image form in front of their rotating disc scanners must have felt.
    I think we must somehow think backwards from the way the eye is built.
    For instance, the human eye has no problem with convergence because our retina is a SPHERICAL surface. If you converge a photo or video camera you get trapezoidal (keystone) distortion because you project the images onto a FLAT sensor. Imagine all the processing that must be going on in our brains to turn the “raw” image the eye delivers into what we perceive as “visual sense.”
    3D to me is all about immersion. I will never forget my first look into a stereoscope. You were exactly where the photographer stood a hundred and twenty years ago. Even though the image was black & white it made you go back in time much more than a flat image.
    Like so many techniques, 3D should not call attention to the fact that it is there. Right now we are in the phase of early sound or early color movies. Screaming Technicolor sunsets!
    Remember the early days of the Steadicam? People being followed up and down stairs, endless tracking shots of the little boy going through the corridors of the Overlook Hotel in Kubrick’s THE SHINING. You mused, “how the hell did they do it?” Today, Steadicam is everywhere and nobody thinks about how they moved the camera in this shot…
    3D is the same. There has been some progress toward “soft” 3D which is not glaringly obvious. When it’s well done, 3D makes the images from a movie stick in your brain a lot longer than a flat movie does. AVATAR images kept coming back to me for days after I had seen the film. Initially I was not too overwhelmed by it, but the images kept coming back to me. I attributed this to the 3D technique.
    Lenticular has a great appeal to me because it does not “rape” the eyes the way 2-channel 3D does. In 2-channel systems your eyes are at the mercy of the stereographer. If there is an error in height or convergence, your eyes are FORCED to follow. In auto stereo 3D the visual sense is stimulated in a different way: the filter is in front of the image, not in front of the eyes, and so you have more of a choice….
    Nonetheless I have almost abandoned auto stereo because I find it so limited. Depth is limited, resolution is a big problem and the number of people in front of a screen is limited.
    This is why I think we must go back to Maurice Bonnet and his analog process. Somehow we must either expose the image through the lenticular, or we must find a way to scan it to a light-sensitive surface through the lenticular. Have you done experiments with a flatbed scanner and a lenticular? I laid a 40 lpi lens on the flatbed scanner, put an object onto it and let it scan. If you print that in reverse, you get an image when you put the same lenticular onto the paper printout. If it’s not reversed, you get a pseudoscopic image.
    Somehow this experiment gives me the idea that the old Bonnet cameras could be revived by combining them with a flatbed scanner in place of the Ektachrome film. If the interlacing is done optically, we can overcome the limitations of the digital processing….. lemme know what you think!
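    For contrast with the optical interlacing described above, the digital version it would replace is simple to sketch. This is a hypothetical, minimal example (it assumes one printed pixel column per view, with N views cycling under each lenticule; a real workflow would first resample the interlaced image to the measured pitch of the lens sheet):

    ```python
    import numpy as np

    def interlace(views):
        """Interlace N equal-sized views into one lenticular print image,
        one pixel column per view, cycling under each lenticule.
        The view order is reversed so the result is orthoscopic rather
        than pseudoscopic (the 'print in reverse' effect noted above)."""
        n = len(views)
        stack = np.stack(views[::-1])        # shape (N, H, W, C)
        h, w, c = views[0].shape
        cols = np.arange(w) % n              # which view feeds each printed column
        out = stack[cols, :, np.arange(w)]   # fancy indexing -> shape (W, H, C)
        return out.transpose(1, 0, 2)        # back to (H, W, C)
    ```

    As a usage note: with a 60 lpi lens printed at 720 dpi, each view would first be resampled so that exactly 720 / 60 = 12 printed columns sit under each lenticule.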

  15. Greetings everyone,

    I own a Bonnet prism lens and have the plans of the original camera. My objective is to eventually marry it with a large digital camera back (8 x 10, or 11 x 14 would be better) when the technology exists and price is not out of reach. A scanner type capture is only appropriate for still life, but film was capable of stopping motion, so we have some challenges ahead before this will be possible. Properly converging the Bonnet prisms is also something of a challenge, but I figured that out. I also reverse engineered the prism lens, so building new will be possible when all the technologies converge.

    Here’s a challenge for all of us. Let’s all work together to find a highly profitable use for such a system, and we’ll build the optimum system. I know the most brilliant physicist in this field (Oystien Mikkelsen), and if there is proper funding provided by investors we can bring the quality of Bonnet’s system to the current era.

    • You have my attention, but I’m at a loss to know what the “highly profitable use for such a system” would be. That should probably be the first sentence of the paragraph. There are so many new technologies at play now that doing this on a small scale is an incredibly risky enterprise, and finding investor funding would be extraordinarily difficult. High precision tiny lens manufacture is now possible with nanotechnology. Computers are now fast enough to bring computational photography into the mainstream. Have you been to Barnes & Noble lately to see that you can buy an HD Nook for $125 running Android OS, and an Android MiniPC for under $100? That’s a game changer.

      The other thing is the gigantic push towards do-it-yourself quality photography. Computers can now focus, control depth of field and even make compositional adjustments for the “stupid” user.

      Having said that, I am like you and believe that there is a lot more to be contributed. In the end, I don’t believe it is the equipment that creates art. A true artist can use the crudest of tools and create amazing works. I have ceased to be impressed reading a photographer’s portfolio of equipment. I judge based upon the work created. It is rare to see an excellent technical achievement that coincides with incredible artistic expression. Of course, beauty is in the eye of the beholder – but there are certain universalities to both.

      1,000 megapixel imaging sensors coupled to [fill in the blank] are amazing tools that present the opportunity to create amazing things. But that isn’t guaranteed. Amazing things that are profitable? Even more remote. With the advancements in tech going so quickly, we will soon see much better solutions to immersive imagery and illusions of reality.

      I do have some ideas. I am very interested to see examples and learn more – always! But showing something new and amazing now has the life span of an adult mayfly (about a day?). To have it catch on to the extent it would be profitable? That is very difficult albeit not impossible.

  16. Yes- I also have 2 of these cameras, since 1990s. Great Fun, great results, but yet to show an overall profit. David G Burder, FRPS.

    • Perhaps, as multi perspective photographers, we need to think about how the viewer can experience the photograph and through that experience desire it. It isn’t enough to be novel or thought provoking. A rock can accomplish that (remember the pet rock?). To present an image that shares the same space and can be experienced… I think that might hold the key. Unlock the emotion and let the brain freely perceive the image as real in some way. It is very difficult. Profit comes from desirability, want and need. Define who really needs multi perspective imagery and you are part way there. Present it as an immersive experience and you get closer. Make an emotional connection and get even closer. etc. Writing this makes me depressed because I am far from achieving these goals. But I am undaunted in my quest! The more I try, the better I believe I will get. At least, I’ve finally come off my high horse and have begun to realize how stupid I am and how much I have to learn. We let ego get in the way far too often. One thing the great masters had in common was wanting to try harder with the next work and to never be satisfied because it can always be done better.

  17. Albert Maly-Motta

    I can only second your thoughts about artistic perfection. An artist who is in love with his own work soon ceases to be one.
    Perhaps one of the ways to use multiview auto stereo would be abstract but three dimensional imagery. If you frame one of your tattoo shots in such a way that it is not immediately obvious as such, it becomes an abstract image of something two dimensional mapped to a 3D surface: the body.
    Everyone sees 3D as hyper realistic, so perhaps an interesting idea would be to brush it against the grain and do abstract images. I have seen beautiful tests where someone took a flashlight in a dark room, opened the camera shutter and drew patterns in space. These show up as glowing traces in 3D. A bit like the later slit scan experiments. Another friend of mine projected macro images of butterfly wings on a nude female body and photographed the resulting image in 3D. Very interesting semi abstract pictures, where the third dimension gives you a totally different aspect from the 2D print.
    I would like to encourage you lucky owners of a Bonnet camera, or parts of one, to tell us more about how you came by these parts. As far as I know, Bonnet sold only a few of the big cameras and probably none of the prism devices to aspiring 3D photographers. Do your devices come from the Bonnet family? Michele Bonnet has or had a fascinating website about “la réliefographie” and the Bonnet company. In French.
    I don’t know if the museum in Chalon-sur-Saône has a Bonnet camera in its exhibition by now. Years ago they were very open about showing me the remains they had, but they have sort of clamped down on that, at least that is my impression. Probably because they do not want to disturb the restoration process, or want to publish about the system themselves.
    I don’t believe research into the Bonnet process will result in a “commercially profitable” camera. Bonnet’s cameras were huge by necessity because they used these big Ektachrome plan film formats and a much larger stereo base than the usual 65 to 75 mm. I believe the prism block I had in my hands was at least 30 to 40 cm long. Since one never sees the extreme left and right images as stereo pairs in the lenticular process, there is no hyper or hypo stereo distortion here.
    But I think that more research into the functioning of the scanning system might lead to a much better understanding of what happens under the lenticular raster if you use it to form the image. It might not work for instantaneous shots, but it might be a first step toward reconstructing the other system. And we won’t have “raw” sensor surfaces approaching the size of an A4 page for a long time yet, so a scanner is the only way out.

    • Before I moved on to more abstract imagery, I was compelled to see how accurate and real I could be with my imagery. Like music, I think it is important to understand the quality of the pure instrument before you add reverb, distortion, tonal adjustment and effects. There is something about simple purity that is enlightening to understand – but I would be the first to agree that it is less marketable. We are biased against imperfection and asymmetry and have a preference for the way things look that isn’t natural. Going abstract helps to mitigate those biases but gets into the realm of a more referential image to be interpreted as opposed to being experienced. Why else would we spend time on hair styles, makeup, body art/modification and so forth? We are biased to enhance reality and shape it to our mind’s eye image. I suppose one could argue that abstract is real. Never thought about that until just now.

  18. The highly profitable business is for drivers licenses and secure photo IDs, which would require building a lot of camera systems. Covert and Overt elements can be combined in the interlaced graphics also. When you have such an enterprise building cameras and consumables, it’s easy to build cameras for artistic purposes such as portraiture. The Bonnet prism cameras have a fixed key plane distance, and I do not find it possible to change that so use is somewhat limited. Quality however is exceptional, and I believe I have identified the business model for its new life.

    • Government work? Yikes. Very tricky, especially today where decisions have very little to do with cost/benefit/efficacy analysis. If you have inside advocates, and lobbyist friends, you have a slim chance. I certainly have benefited from some government work – but nothing I could count on, it just fell into my lap. Given that DMV “professionals” have difficulty with even the simplest webcams, I would be skeptical of introducing a more complicated system. On the other hand, look at the ridiculous waste of money perpetrated by the TSA with sniffers and body scanners in the name of national security. Money flowed like water. All that in the name of security to protect us which is simply the craziest approach imaginable. Israel laughs at the United States for good reason. So, I guess anything is possible. It is not inspiring to read stuff like this:

      • Hello to all,

        I came across this discussion, and simply reading it brought me more info than all my previous searches!
        Happy to see known faces too ( hello Will).
        I have set up my own capture system: a moving camera on a rail for still subjects/sceneries, and a 7-camera rig for live subjects, interpolated to 96 frames using StereoTracer from Triaxes.
        I print my own lenticulars on an Epson 9900 printer and hope to find a sharper printer or paper.
        I also promote my work through a website using high end techs to convert my picture series to interactive images, you can check them out at http://www.3dgh.fr.
        As a professional photographer I spend a lot of time promoting those pics, and results are slow to arise, but one step after another they are gaining more and more interest. If anyone has anything on display somewhere in France, I would love to know of it and see it!

        Best regards,

        Guillaume d’Hubert

  19. Hi Guillaume, I have found it useful to think in terms of “what subject matter must be in 3D?” A lot of imagery simply does not need to be 3D as it is intended to evoke referential thought and not to experience emotionally. On the other hand, 3D can make it possible to experience an image in an emotional way that traditional photography can’t match. Good luck with frame interpolation – I have not had consistent results and the errors are difficult to understand in terms of their impact on human perception.

    Also, I can appreciate what you are trying to do with your website – but don’t expect anyone else to understand or make sense of it. My other piece of advice, should you be interested, is to consider focusing on one subject matter and work to make it the best it can be.

    Thanks for checking out the blog. Feel free to post questions as I have an opinion on just about everything (and it is worth what you pay for it ;^)

    If you want to make the most of your Epson printer, get a RIP. Ergosoft is quite good, but there are others.

  20. Guillaume, thanks for the note. I had great fun helping my daughter look like Audrey Hepburn recently so we could recreate the Breakfast at Tiffany’s photos. You can see some on my Illuminessence.co.uk website.

    I have StereoTracer but didn’t think you could use it to interpolate 7 frames to X more. Is that what you meant? I’ve only used it for interpolating a stereo pair to X. I use Twixtor in After Effects to interpolate 7+ originals to however many I need. See examples on my website under ‘Timetravel’.

    Keep up the good work.


  21. Hi Will,
    I had a look at your website under “Timetravel” and I understand what you are trying to do, but there are a tremendous number of errors in the interpolated frames. These create perception conflicts in the subconscious that are problematic in my humble opinion. Is it good enough? Well, that’s a question you’ll have to answer for yourself but I would respectfully submit that you look closely at the errors and spend some time thinking about what you are looking at. We all need to get over the “wow look it is in 3D” and start looking at the image as we look at any other image. What is good about it and what is bad about it. Forget about it being in 3D, 4D or any “D”. I like to think of multi perspective imagery as experiential imagery. When you experience the image, what do you take away? If it is only “wow” that’s in 3D then you don’t have much. A “good” image will evoke an emotional response with meaning and a sense of story and examination. I’m always asking myself: “why does this image need to be in 3D?” and often times the answer is that it doesn’t need to be in 3D. It is tough to look at something you are passionate about and analyze it in a detached critical way. It is easy to get caught up in how much effort was expended creating the image and miss the reality of it not really being a very good image. I do that all of the time and it is tough to admit, after working many hours on an image, that it just isn’t very good.

  22. Almont, I understand your point, but all the new glasses free TVs etc use interpolation to some extent. I’m helping to develop some of the software for them & have a test 42″ unit here. Shooting with multi view rigs means I can create the 8 or 9 view frames needed to view them on the new screens & I can assure you they look awesome. They’re so much better than the 2D + depth map ones that only give the feeling of depth but not the multi angle ‘look around’ effect you get with 8 or 9 views.

    So, whilst the frames might not be perfect, when you print them they look great, the clients are happy & the more the technique & technology evolves, the better they get.

    The key thing for me though is that it means my portable 7 cam rig can be taken anywhere, shoot anything & doesn’t need tethering to a PC/laptop! Therefore, the overall solution & printed results are more than good enough.

    • One can’t argue with “the clients are happy” so long as they are paying you money!

      In a world where mp3 is good enough I can certainly allow for people thinking interpolated 3D is good enough. For me, the mp3 audio format is terrible as compared to uncompressed pristine audio through very high quality speakers. I think interpolated multi perspective imagery is the same – so long as there isn’t something of quality to compare it to, it will likely be successful just like the terrible mp3 audio format is successful.

  23. Hello Will,

    The secret for “near” perfect image interpolation is:
    1/ alignment: images must be perfectly parallel. Stm does it well; the last 3mk does it even better.
    2/ pictures have to match in colour/brightness quite well.
    3/ and that is the problem: the subject must have a very near background, or no background. Interpolation cannot cope with a background farther away than a hand’s breadth, unless you use wide angles, something I haven’t done YET.
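    Condition 2 can be roughed in before interpolation with something as simple as per-channel gain matching. This is a hypothetical, minimal sketch (the function name and approach are mine; a real pipeline would use proper colour calibration or histogram matching across all frames of the rig):

    ```python
    import numpy as np

    def match_brightness(ref, img):
        """Scale each colour channel of img so its mean matches ref's mean:
        a crude first pass at matching colour/brightness between frames
        before handing them to an interpolator such as Twixtor."""
        ref_f = ref.astype(np.float64)
        img_f = img.astype(np.float64)
        # Per-channel gain; guard against division by zero on black frames
        gains = ref_f.mean(axis=(0, 1)) / np.maximum(img_f.mean(axis=(0, 1)), 1e-6)
        return np.clip(img_f * gains, 0, 255).astype(np.uint8)
    ```

    In practice one camera of the rig would serve as the reference and all other frames would be matched to it before interpolation.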

    That is for the technical aspect. I renew my compliments on your recreated Audrey Hepburn picture; the only serious flaw I noticed afterwards is the whites, which on my screen are a bit overexposed, but the tweening works very well, including the extreme end of the cigarette.

    Best regards from Paris tonight


  24. Hello,

    First of all, thanks for your previous notes on my work. I agree with nearly all of them, and look forward to sharing my experience with people who are on the same path as I am.

    I personally don’t ask myself whether there are good subjects to take in 3D; I am in the process of changing my entire practice of photography from 2D to 3D, just like when cinema turned from silent to sound: it was the natural evolution, and if cinema could have had sound from the start, it would have. Keep in mind that stereo came just a few years after photography was invented, but it took 170 more years to get rid of the viewing apparatus. We are at the very dawn of this era, and my point is that depth has to be present in every picture, just like colour and sharpness, because we see the world this way.

    It doesn’t mean that no one will continue 2D photography, but the regular, “normal”, everyday photography will be photography that includes some depth, optically obtained or software calculated, but in every case DEPTH.

    I 100% agree with what you said on the necessity to produce meaningful images first, because the wow effect quickly disappears once you are used to it.

    A last statement for me: I personally love to think that final interpolated images are made of 95% non-existing pictures, and yet there they are. It is a bit magical for me … emotional, in fact.

    Best regards


  25. Guillaume, I absolutely agree. I also find it ‘magical’ that I can take a photo, process it in a few mins, see it glasses free on my Newsight 42″ screen, then give a print of it to friends. Even with my little Fuji W3 it’s possible, as I can interpolate a pair into 12, etc., if I’m careful with the composition, max depth, etc. I recently took a photo of a friend & her daughter when we were out walking, then gave her a lenticular print of it a few days later. They were amazed & her daughter’s taken it to school to show friends.

    If you’re near Paris, I recommend visiting Monsieur Alio, who owns Alioscopy, & arranging for him to show you some of your own photos on one of his Alioscopy glasses free screens. Then you’ll see how impressive 3D photography can be. If you have one of the screens in your studio to show clients the photos before they buy a print, they can see them for real in 3D.

    Don’t forget that After Effects can colour & exposure balance multiple frames, so the end result can look really good. It’s very difficult to judge the effect properly: when you look at the frames as a GIF you see each one individually, with its imperfections, but when seeing them as a lenticular or on a glasses free 3D TV, your brain accommodates for any imperfections, just like it accommodates for the differences between our eyes.

    I also agree that now that I’ve got used to 3D with depth, I find normal 2D photos flat & boring. Keep up the good work.


  26. Albert Maly-Motta

    Hi everyone,

    I just came back from France and have been to see the museum in Chalon-sur-Saône, where they have the Bonnet cameras and many of his images. They have restored one of Bonnet’s huge OP 3000 scanning cameras and it is in the exhibition.
    They are still working on the restoration of the prism camera. Mr Burder, if you have any of the old cameras, probably in better shape, I would like to put you in touch with them. Next summer I will go back there and return to the archives.
    The problem of reproducing some of the original images to work with today’s lenticular screens is also one they are working on. With a high-res scan of the original plan films, one should be able to down- or upscale the image to fit one of the modern lens pitches, even if the original is a 62 lpi lens.
    The curator there told me that Bonnet’s original engraving machines and cylinders for lens production have all been sold to China or gone to the wreckers.
    I wonder what he used as “master” lenticular screens in the original cameras. Were they made of glass? The one in the OP 3000 was all warped and very probably not the one originally in there.

  27. Hello Albert,

    David Burder is definitely the man for Bonnet info and advice, as he has (I believe) two remaining prism cameras and a supply of original exposure lenses (which are cast or heat embossed plastic). I believe you are correct in assuming originals can be scanned and scaled, but you need to ascertain what pitch lens the originals were exposed through (which can be slightly different for each camera or original contact lens plate). There is some pitch tolerance for 3D imagery, however (which exhibits as “the point of perfect viewing distance”), so once the exposure pitch is determined very closely, the next task is to experiment with scaling the scans. I recommend not attempting to scan parallel or perpendicular to the “line-formed imagery”, but rather at about 45 degrees. I also recommend using a Hell S3900 scanner, which is 18,540 dpi input capable, but the small drum may not accommodate 8 x 10 or 11 x 14″ originals. You can scan on the medium size drum and exceed 12,000 dpi (I believe).

    • Yes, it’s possible. I have done it many times. But it’s not easy, as it takes quite a long time to figure out the exact pitch of the taking lens. A drum scanner is mandatory.
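    Before committing drum scanner time, the arithmetic described above can be roughed out. This is a hedged sketch of the bookkeeping only (the function and parameter names are hypothetical, and the real exposure pitch still has to be determined empirically, as noted above):

    ```python
    def rescan_plan(original_lpi, target_lpi, samples_per_line, width_inches):
        """Back-of-envelope numbers for re-purposing a Bonnet original:
        the minimum scan resolution needed to resolve the line structure,
        the scale factor that maps the exposure pitch onto a modern lens,
        and the resulting print width after scaling."""
        # Oversample each original image line by samples_per_line scan pixels
        min_dpi = original_lpi * samples_per_line
        # Stretching a 62 lpi structure onto a 60 lpi lens enlarges by 62/60
        scale = original_lpi / target_lpi
        return min_dpi, scale, width_inches * scale
    ```

    For example, a 10-inch-wide original exposed under a 62 lpi screen, destined for a 60 lpi modern lens with 20 scan samples per line, would call for at least a 1240 dpi scan and an enlargement of about 3.3%.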

  28. Folks, Re:Vision have just posted a great tutorial on how to take 4 (of my) original frames, remove a raindrop mark from one using another as reference, colour balance them all, keystone adjust using splines, then interpolate 50 new ones, all using their Rebalance & Twixtor Pro plugins for After Effects.

    They’re now working on another showing auto alignment in Mocha. A real master class.


  29. Michael Brown

    For those interested in the Bonnet OP3000 camera, here is a nice web page and animated movie:


  30. This system does not look cost effective. 12 cameras must be extremely expensive. But there are easier and more cost-effective approaches using camera slider systems.

