The Looking Glass is the first desktop holographic display designed for 3D creators. No VR or AR headgear required.
1,301 backers pledged $844,621 to help bring this project to life.

Comments

Only backers can post comments.
    1. Evan Kahn on

      @Kelly Luck Very cool -- I wonder if they have an SDK

    2. Kelly Luck (Superbacker) on

      Very interesting. I backed Muwi a while back, and it occurs to me that that would be an ideal setup for capturing images for this.

      https://www.kickstarter.com/projects/muwi/muwi

    3. Martin Castillo on

      Ahh shoot, sorry! I didn’t even catch that detail when reading through the post. Sweet, that’s very exciting, and thank you for that sample link -- that is exactly what I was thinking about. Easier said than done, but I’m trying to keep the movement in each frame as clean and minimal as possible, to find the balance between fluidity and the extent of the movement being captured. Thanks again! Can we get these already?! Hahaha

    4. Evan Kahn on

      @Tobias Drewry Hi, you do need a multi-camera rig for lightfield video capture, so it's quite a bit more expensive than the static implementation. Rest assured we're thinking about it and may or may not have more info soon.
      @Derik I wouldn't be shocked -- we don't have one of those around the office, but maybe I'll pick one up on eBay for giggles. :)

    5. Evan Kahn on

      @Martin Castillo Definitely! You can see this effect in the lightfield capture of Oliver. The LED cube he is holding was displaying an animated pattern as the camera moved past him; thus, as we change our viewing angle, we see a different animation frame on the cube.
      Surreptitiously taking lightfield photos of people in motion leads to a really compelling temporal/spatial mapping when you see it in the Looking Glass.
      https://giphy.com/gifs/g0gCrJKFH9qnBiEFu4

    6. Evan Kahn on

      @Tim: Yes, Dez is preparing an app to automate constructing and displaying multiview images containing arbitrary image data per view.
      If you don't want to use the "Holoplay Camera" object in our Unity SDK, which automates multiview rendering of a scene, it is possible to disable it and blit directly to the texture it generates, which is what ultimately gets displayed on the Looking Glass.
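      For anyone curious what "arbitrary image data per view" might look like in practice, here is a minimal Python sketch of one common way to pack views into a single tiled texture (often called a quilt). The function name, grid layout, and tile dimensions are illustrative assumptions, not the official Holoplay SDK API:

      ```python
      # Hypothetical sketch: lay out N views in a tiled "quilt" texture.
      # The column-major, bottom-left-origin layout and the 9x5 grid of
      # 512x288 tiles are assumptions for illustration only.

      def quilt_tile_origin(view_index, cols, rows, view_w, view_h):
          """Return the (x, y) pixel origin of a view's tile in the quilt.

          Views are numbered left-to-right, bottom-to-top, so view 0 sits
          at the bottom-left corner of the quilt texture.
          """
          col = view_index % cols
          row = view_index // cols
          return col * view_w, row * view_h

      # Example: where view 10 of a 45-view (9x5) quilt would be blitted.
      origin = quilt_tile_origin(10, cols=9, rows=5, view_w=512, view_h=288)
      ```

      With a helper like this, each view's image data can be copied into its tile independently, whether it comes from a renderer or from hand-painted frames.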

    7. Martin Castillo on

      Sweet! Just a random question: with this capture method, could I introduce movement to a static object by changing it slightly in each view capture? Similar to the effect used in those cheap plastic lenticular printed toys that simulate movement as you move your head left and right. Something as simple as capturing a subject with their arms down when the capture begins on the far left side, then, as the 45 captures proceed toward the right, they lift their arms slightly until the capture completes at the far right -- so that when this is viewed, as you move left and right, so will their arms?

    8. Tim G on

      Just curious: will the SDK allow us to paint the 45 views ourselves, if we want to?

      Also, I stopped by your booth at SIGGRAPH to check out the displays. I really can't wait to get mine ;)

    9. Evan Kahn on

      I'll repost Dez's note from the previous update here in case anyone has the same question, as he answered it more thoroughly than I did:

      "
      Hey Greg!
      Thanks for the word about Maya 2014. I am testing up to Maya 2011 for that exact reason.

      As far as the camera configuration, arcing cameras are called toe-in cameras, which give incorrect results for a device like the Looking Glass. Hence, the cameras are set up in parallel, with the horizontal film offset of each camera modified accordingly.
      The number of cameras is irrelevant to the tools (you can choose whatever you want in the settings). Anything less than about 15 views, I would say, becomes noticeably bad when viewed in the Looking Glass; 32 is a good number to have, and numbers above that give you very smooth, pretty much imperceptible transitions between views. Higher than 45 becomes mechanically irrelevant (you physically cannot see any complete view, so you are wasting resolution and render time).

      Hopefully this answers your questions! Also, for the record, the specs above are my own judgement, and not official claims by Looking Glass.
      "

    10. Evan Kahn on

      @Greg Our SDK typically runs at 45 views, but the lightfield processing app can stitch together an arbitrary number of images. The lightfields shown in the post are built from 32 images but it's perfectly possible to build one out of 45.
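      The 32-versus-45 question is largely a resolution-budget tradeoff: the display's texture size is fixed, so every extra view shrinks the pixels available per view. A minimal sketch of that arithmetic, assuming a hypothetical 4096x4096 quilt texture (an illustrative number, not an official spec):

      ```python
      # Illustrative sketch of the view-count vs. resolution tradeoff:
      # num_views tiles share one fixed-size texture, so more views
      # means fewer pixels per view. The 4096x4096 quilt size is an
      # assumed example, not a Looking Glass specification.

      def pixels_per_view(quilt_w, quilt_h, num_views):
          """Pixels each view gets when num_views tiles share one texture."""
          return (quilt_w * quilt_h) // num_views

      budget_32 = pixels_per_view(4096, 4096, 32)
      budget_45 = pixels_per_view(4096, 4096, 45)
      ```

      This is why pushing far past 45 views mostly wastes resolution and render time: each additional view costs every other view pixels.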

    11. Tobias Drewry on

      (found the SDK when I was droo... re-reading the original project post)

    12. greg downing on

      Daniel answered my post, from the previous update, thanks! Sounds like additional views are optional.

    13. greg downing on

      Thanks for the additional detail! Can you clarify, is it 32 or 45 images? In most of your material you say there are 45 views, in your video you say 32 images and in your last post on Maya you show 32 rendered views. Are the 7 additional views optional?

    14. Derik (Superbacker) on

      I have an old Lytro. I think one of the original ones. Could that be made to display static images in the Looking Glass?

    15. Tobias Drewry on

      Looking forward to this awesome display and the instructable. Personally, I'm very interested in the SDK for full-motion applications -- is this something that'll be available pre-release?