Voxel Vision will be a real life superpower:
The ability to see within and behind solid objects within any virtual world.
The problem: Humans only see in 2D.
The augmentation: See inside and behind objects. Actual 3D vision.
Voxel Vision is a proof of concept of a new type of interface into virtual reality worlds.
It will enable humans to see in 3 dimensions for the first time in our evolutionary history.
- A ubiquitous computer can appear whenever you focus your eyes out to infinity - a screen behind the real life room you are in!
- Surgeons can see the entire interior volume of a patient
- Police officers can see within rooms and behind walls to avoid ambushes
- Cell biologists can visualize the entire interior of a cell
- Chemists can see the interiors of proteins and other large biomolecules
- Topologists can project 4D spaces onto their 3D vision, analogously to how today's humans project 3D spaces onto their 2D vision
- Windowing systems can display windows in front of and behind each other, so you can multitask with minimal eye movement
- and more!
The tech has three pieces:
- Track the viewer's gaze in spherical coordinates, crucially including radius
- Render, with translucency
- Project the rendering sharply onto the user's retina
Elaboration below, and at this Quora answer.
(1) Eye tracking in spherical coordinates must include radius
Spherical coordinates address voxels using three coordinates:
horizontal angle (phi), vertical angle (theta), and radius.
We need to track the viewer's gaze in all three coordinates, including radius.
Anatomically speaking, there are two ways to detect the radius of the user's gaze:
- With cameras, detect how much the eyes are crossed
- With electromyography, detect the contraction of the ciliary muscle (eye focus muscle)
This video depicts how the ciliary muscle "accommodates" (adjusts) in correlation with gaze radius. The yellow oval is the eye's lens changing its shape.
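As a minimal sketch of the camera-based approach: if you know the interpupillary distance and how far inward the eyes have rotated (the vergence angle), simple triangulation gives the gaze radius. The function name and numbers below are illustrative, not part of our actual tracker.

```python
import math

def gaze_radius_from_vergence(ipd_m, vergence_rad):
    """Estimate the distance (in metres) at which the two gaze rays cross.

    Assumes symmetric fixation straight ahead: each eye rotates inward by
    vergence_rad / 2, so the fixation point sits at
    (ipd / 2) / tan(vergence / 2) from the midpoint between the eyes.
    """
    if vergence_rad <= 0:
        return float("inf")  # parallel gaze rays: focused at infinity
    return (ipd_m / 2) / math.tan(vergence_rad / 2)

# With a 65 mm interpupillary distance, about 3.7 degrees of total
# vergence corresponds to a gaze radius of roughly one metre.
r = gaze_radius_from_vergence(0.065, math.radians(3.7))
```

Note how quickly vergence flattens out with distance: beyond a few metres the angle change per metre is tiny, which is why radius tracking is the hard part of the eye-tracking problem.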
(2) Rendering with translucency
Once you know the user's gaze radius, you can:
- render with very low opacity (~5%) for objects < gaze radius
- render with high opacity (~80%) for objects >= gaze radius
In this example, the user is choosing to look beyond the word "HI", instead selecting the gaze radius of the distant pyramids.
The rendering engine should also appropriately camera-blur objects to seem in focus and out of focus, depending on the object position and the gaze radius.
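The two rendering rules above (opacity thresholding at the gaze radius, plus camera-style defocus blur) can be sketched as follows. The ~5% / ~80% opacities come from the text; the blur formula, which grows with the dioptre difference between object distance and gaze radius, is our illustrative assumption.

```python
def voxel_opacity(object_distance, gaze_radius):
    """Opacity rule from the text: objects nearer than the gaze radius
    become nearly transparent (~5%); objects at or beyond it stay
    mostly opaque (~80%)."""
    return 0.05 if object_distance < gaze_radius else 0.80

def defocus_blur(object_distance, gaze_radius, strength=1.0):
    """Hypothetical blur amount: proportional to the difference in
    dioptres (1/distance) between the object and the gaze radius,
    mimicking a camera's depth of field. Zero blur at the gaze radius."""
    return strength * abs(1.0 / object_distance - 1.0 / gaze_radius)
```

So in the pyramid example, the word "HI" (nearer than the gaze radius) would render at 5% opacity and noticeably blurred, while the distant pyramids render at 80% opacity and sharp.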
(3) Virtual Retinal Display (projector into eye)
The final challenge is to project the rendering into the viewer's eye.
The difficulty is that the user is continually changing their ciliary accommodation, which means the light entering their eye must arrive at a different focus depth every time the user changes their gaze radius.
The solution is to use a software-controlled lens to "dance with" the ciliary muscle, counteracting every change in the eye's lens shape with the opposite change in the technological lens.
This screen + lens combo is called a Virtual Retinal Display. There are two versions:
Conventional screen + lens:
Laser + lens + scanning across the retinal plane:
We will attempt to use the AirScouter display by Brother.
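The "dance" in step (3) can be stated in dioptre terms: an eye accommodated to distance r expects incoming light with vergence -1/r dioptres (diverging as if from a point r metres away), so the software-controlled lens must continually re-form the virtual image at the current gaze radius. This is a back-of-the-envelope sketch, not the AirScouter's actual control scheme.

```python
def required_image_vergence(gaze_radius_m):
    """Vergence (in dioptres) the display optics must give the light
    so it appears to diverge from the user's current gaze radius.
    Light from a point r metres away has vergence -1/r dioptres;
    at infinity the rays are parallel (0 dioptres)."""
    if gaze_radius_m == float("inf"):
        return 0.0
    return -1.0 / gaze_radius_m

# As the user refocuses from 1 m out to 4 m, the lens must step the
# image vergence from -1.0 D toward -0.25 D in lockstep.
```

The control loop is then: read the gaze radius from the eye tracker, compute the required vergence, and drive the tunable lens to match, fast enough that the user never perceives the display going out of focus.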
Join our team!
Want to collaborate with us?
We will be building our unit at the Stanford campus, with a target completion date of September 2012.
We also encourage you to fork our project (build a similar device on your own).
Whether you have suggestions or criticism or an offer to collaborate, please message us!
Jon, Ben, Deniz, and you. =)
Support this project