Funded! This project was successfully funded on September 10, 2011.

Virtual reality hardware and software for x-ray vision

Voxel Vision will be a real life superpower:

The ability to see within and behind solid objects within any virtual world.

The problem: Humans only see in 2D. 

The augmentation: See inside and behind objects. Actual 3D vision.

Voxel Vision is a proof of concept of a new type of interface into virtual reality worlds.
It will enable humans to see in 3 dimensions for the first time in our evolutionary history.

Use cases

  • A ubiquitous computer can appear whenever you focus your eyes out to infinity - a screen behind the real life room you are in!
  • Surgeons can see the entire interior volume of a patient
  • Police officers can see within rooms and behind walls to avoid ambush
  • Cell biologists can visualize the entire interior of a cell
  • Chemists can see the interiors of proteins and other large biomolecules
  • Topologists can project 4D spaces onto their 3D vision, analogously to how today's humans project 3D spaces onto their 2D vision
  • Windowing systems can display windows in front of and behind each other, so you can multitask with minimal eye movement
  • and more!

The tech comes in three pieces

  • Track the viewer's gaze in spherical coordinates, crucially including radius
  • Render, with translucency
  • Project the rendering sharply onto the user's retina

Elaboration below, and at this Quora answer.

(1) Eye tracking in spherical coordinates must include radius

Spherical coordinates address voxels using three coordinates:
horizontal angle (phi), vertical angle (theta), and radius.

We need to track the viewer's gaze in all three coordinates, including radius.

Anatomically speaking, there are two ways to detect the radius of the user's gaze:

  • With cameras, detect how much the eyes are crossed
  • With electromyography, detect the contraction of the ciliary muscle (eye focus muscle)
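The camera-based approach reduces to basic trigonometry. Here is a minimal sketch, assuming an average interpupillary distance of 63 mm (that figure and all names below are illustrative, not from the project):

```python
import math

IPD_M = 0.063  # assumed average interpupillary distance, in meters

def gaze_radius(vergence_deg: float) -> float:
    """Estimate the distance to the fixation point from how crossed
    the eyes are. The two lines of sight plus the interpupillary
    baseline form an isosceles triangle, so
    radius = (IPD / 2) / tan(vergence / 2)."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (IPD_M / 2.0) / math.tan(half_angle)

# Nearly parallel eyes mean a distant fixation point; strongly
# crossed eyes mean a close one.
print(round(gaze_radius(1.0), 2))   # ~3.61 m away
print(round(gaze_radius(10.0), 2))  # ~0.36 m away
```

Note how fast the triangle flattens out: beyond a few meters the vergence angle changes very little, which is why the radius coordinate is the hard part of the tracking problem.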

This video depicts how the ciliary muscle "accommodates" (adjusts) in correlation with gaze radius. The yellow oval is the eye's lens changing its shape.

(2) Rendering with translucency

Once you know the user's gaze radius, you can:

  • render with very low opacity (~5%) for objects < gaze radius
  • render with high opacity (~80%) for objects >= gaze radius

In this example, the user is choosing to look beyond the word "HI", instead selecting the gaze radius of the distant pyramids.
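As a minimal sketch (the 5% and 80% thresholds are the ones quoted above; the function name is ours):

```python
def voxel_opacity(depth_m: float, gaze_radius_m: float) -> float:
    """Two-band opacity rule: voxels nearer than the fixation depth
    become faint ghosts; voxels at or beyond it render mostly solid."""
    return 0.05 if depth_m < gaze_radius_m else 0.80

# Looking "through" the word HI (say, 2 m away) at pyramids 100 m out:
print(voxel_opacity(2.0, 100.0))    # 0.05 -> HI fades to a ghost
print(voxel_opacity(100.0, 100.0))  # 0.8  -> the pyramids stay solid
```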

The rendering engine should also apply camera-style blur so that objects appear in or out of focus, depending on the object's position and the gaze radius.
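One plausible way to drive that blur, sketched below: defocus grows roughly with the difference in diopters (inverse distance) between a voxel and the gaze radius. The gain and cap constants here are invented for illustration, not taken from the project.

```python
BLUR_GAIN_PX_PER_DIOPTER = 4.0  # illustrative tuning constant
MAX_BLUR_PX = 12.0              # cap so extreme depths stay bounded

def blur_radius_px(depth_m: float, gaze_radius_m: float) -> float:
    """Blur kernel radius for a voxel, given where the user focuses."""
    diopter_error = abs(1.0 / depth_m - 1.0 / gaze_radius_m)
    return min(MAX_BLUR_PX, BLUR_GAIN_PX_PER_DIOPTER * diopter_error)

print(blur_radius_px(100.0, 100.0))  # 0.0 -> sharp at the gaze radius
print(blur_radius_px(2.0, 100.0))    # ~1.96 -> nearby objects blur out
```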

(3) Virtual Retinal Display (projector into eye)

The final challenge is to project the rendering into the viewer's eye.

The difficulty is that the user continually changes their ciliary accommodation, which means the light entering their eye must be at a different focus depth every time the gaze radius changes.

The solution is to use a software-controlled lens to "dance with" the ciliary muscle - counteract every change in the eye lens shape with the opposite change in the technological lens.
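In numbers (a sketch of the control law, not a hardware spec): to focus at radius r, the eye adds roughly 1/r diopters of power, so the software lens must track the same 1/r relative to its infinity-focus setting.

```python
def lens_accommodation_diopters(gaze_radius_m: float) -> float:
    """Dioptric adjustment needed so the projected image appears to
    come from the user's current gaze radius (relative to infinity
    focus, where the adjustment approaches 0)."""
    return 1.0 / gaze_radius_m

# Gaze jumps from 2 m out to 50 m: the software-controlled lens must
# relax by 0.5 - 0.02 = 0.48 diopters to keep the retinal image sharp.
delta = lens_accommodation_diopters(2.0) - lens_accommodation_diopters(50.0)
print(round(delta, 2))  # 0.48
```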

This screen + lens combo is called a Virtual Retinal Display. There are two versions:

Conventional screen + lens:

Laser + lens + scanning across the retinal plane:

We will attempt to use the AirScouter display by Brother.

Join our team!

Want to collaborate with us?

We will be building our unit at the Stanford campus, with a target completion date of September 2012.

We also encourage you to fork our project (build a similar device on your own).

Whether you have suggestions or criticism or an offer to collaborate, please message us!

Jon, Ben, Deniz, and you.    =)

FAQ

  • The uplifted head picture is from the cover art of the album "Transhuman" by the band Believer:
    http://en.wikipedia.org/wiki/Transhuman_%28Believer_album%29

    http://Lytro.com makes a "plenoptic" camera that can record some of the "light field" data necessary to show real life scenes as focusable voxels: http://lytro.com/renng-thesis.pdf

  • Sensors -> 3D model -> Rendering

    This is how sight can be abstracted. A 3D model can be built using a combination of sensors from MRI to ultrasound to Googled architectural blueprints.

    These can be built into a 3D model of the room around the user...

    ...which can be rendered whichever way the user wants, including Voxel Vision.

  • The end goal is to put this hardware + software combo on sale at around $1,000 per unit.

    Here is the plan:

    September:
    Apply for independent research units, allowing us to take reduced courseloads.
    Apply to BASES Forge, a year-long startup accelerator for Stanford students. They could fund us up to $10k.
    Apply for an ASSU Executive Action Grant intended to boost student entrepreneurship.
    Apply for a Student Research Grant for up to $1.5k.

    October:
    Recruit a team of about 2 graphics coders, 1 3D graphic artist, 1 lens specialist, 1 screen specialist, and 1 laser specialist.
    Pursue mid-level publicity on the Stanford homepage, the Stanford Daily, the MIT and Stanford Entrepreneurship Reviews, the Startup Digest, etc. Goal here is to get angels to care.

    January:
    Apply to BASES E Challenge and Social-E Challenge

    March:
    Apply to demo at BASES Product Showcase

    Summer 2012:
    Any startup accelerator

    By September 2012, hopefully we will have a product demo ready, and we will then easily meet an angel who likes our prototype and believes in human augmentation enough to support us in bringing the mechanism to market.

27 backers
$5,528 pledged of $5,000 goal
0 seconds to go
  • Pledge $20 or more
    15 backers

    Give $20 and get a fitted athletic t-shirt with a schematic of how the prototype works!

  • Pledge $100 or more
    2 backers

    A white lab coat, with the URL of the project stitched on the back across the shoulders! Look awesome while you do science. =)

  • Pledge $200 or more
    3 backers

    Contribute $200 and you win a private demo of the prototype! This will be at Stanford, CA during the 2011-2012 school year. You may assign your slot to a friend.

  • Pledge $3,000 or more
    1 backer (Limited: 4 left of 5)

    == THE FUTURIST == For $3,000, we will custom-build a second prototype just for you! Have the Voxel Vision hardware months or years before it goes on the general market!

Funding period

- (30 days)