For now, typing only supports English sentences with basic punctuation: commas, periods, exclamation marks, and question marks. We're working on adding more symbols and numbers, but until then we wouldn't recommend using it for programming or for typing in a language other than English.
While Gest can track your hand's rotation and finger position (what the hand is doing) with a high degree of precision, it cannot track your hand's absolute position in space (where your hand is), so you can't use it for VR games where the positions of your virtual hands play an important role.
No! While our end goal is to create an interface that does everything a keyboard and mouse can do (but better!), we think the best way to use Gest is in conjunction with them. There are certain things that the mouse and keyboard still fail to do well, and that's where Gest truly shines.
There will be four main types of data you can access. The first is raw sensor data coming directly from our IMUs; this gives you the most control, but it will take non-trivial programming to make the data useful. The second is motion-processed data, that is, finger and hand positions at discrete points in time, which makes it really easy to integrate with things like VR/AR devices and video games. Third, you'll have access to gesture data: you'll be able to train your own gestures and get notified when the user makes them. Finally, if you have two controllers you'll also have access to typing data, meaning you'll get notified of keypresses, word predictions, changes, and selections. As we work through our private beta early next year, we'll keep you in the loop about the specifics and how our API evolves.
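To give a rough sense of what working with those four layers could look like, here's a minimal sketch in Python. Every name in it (gest_sdk, on_raw_imu, and so on) is a made-up placeholder, since the real API is still taking shape:

```python
# Purely illustrative sketch -- gest_sdk and all the callback names below are
# placeholders, not the finalized API.
import gest_sdk

controller = gest_sdk.connect()  # hypothetical: pair with a controller over Bluetooth

# 1. Raw sensor data: accelerometer/gyroscope samples straight from the IMUs.
@controller.on_raw_imu
def handle_imu(sample):
    print(sample.accel, sample.gyro, sample.timestamp)

# 2. Motion-processed data: finger and hand positions at discrete points in time.
@controller.on_motion
def handle_motion(frame):
    print(frame.hand_rotation, frame.finger_positions)

# 3. Gesture data: get notified when a gesture you trained is recognized.
@controller.on_gesture("pinch")
def handle_pinch(event):
    print("pinch detected with confidence", event.confidence)

# 4. Typing data (requires two controllers): keypresses, word predictions,
#    changes, and selections.
@controller.on_typing
def handle_typing(event):
    print(event.kind, event.text)

controller.run()  # block and dispatch events
```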
Out of the box you’ll be able to use your Gest to control Adobe Photoshop on Mac and Windows computers that support Bluetooth 4.0. We’re also making it really easy to create your own custom gestures and map them to keyboard shortcuts and other actions through our own free software. This software will let you map gestures to actions just by dragging and dropping, so you don’t need to be a programmer. Finally, we’ll be adding support for typing on Windows, Mac, iOS, and Android.
If you want us to integrate with other software, please let us know in the comments section. While we’re only planning to support Photoshop on day one, developers will be able to use our SDK to integrate with whatever applications they’d like.
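For a flavor of what an SDK integration could look like, here's a rough sketch that fires a keyboard shortcut in whatever application has focus when a custom gesture is recognized. The gest_sdk interface is the same made-up placeholder as above; pynput is an existing Python library for simulating keypresses:

```python
# Illustrative only -- gest_sdk and its callbacks are hypothetical placeholders;
# pynput is a real library used here to send keystrokes to the focused app.
import gest_sdk
from pynput.keyboard import Controller, Key

keyboard = Controller()
controller = gest_sdk.connect()

@controller.on_gesture("flick_left")  # a gesture you trained yourself
def undo(event):
    # Send Ctrl+Z to whatever application currently has focus.
    with keyboard.pressed(Key.ctrl):
        keyboard.press('z')
        keyboard.release('z')

controller.run()
```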
You won't be able to choose two rewards on Kickstarter, so during the campaign your only option is to purchase the pair. After the campaign ends, you'll be able to add a single controller to your pledge for $175. If you do end up purchasing a second controller later, typing will still work even though the two were purchased separately.
We infer the thumb's movement from the rest of the hand's movement; there's no sensor directly on it. If you check out our typing video, you can see the swiping gesture we use to select words (spacebar). We can also figure out if you're doing gestures involving the thumb (like grabbing or pinching) by looking at what the other fingers are doing.
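As a toy illustration of the general idea (this isn't our actual algorithm), here's how you might guess that a grab or pinch is happening just from the other fingers' flexion:

```python
# Toy heuristic, not our real algorithm: guess whether a thumb-involving
# gesture (grab or pinch) is happening from the other fingers alone.
# Flexion values are assumed to run from 0.0 (fully extended) to 1.0 (fully curled).

def infer_thumb_gesture(index, middle, ring, pinky):
    """Return a coarse guess at a thumb-involving gesture, or None."""
    if all(f > 0.7 for f in (index, middle, ring, pinky)):
        return "grab"   # whole hand curled, so the thumb is almost certainly wrapped too
    if index > 0.5 and max(middle, ring, pinky) < 0.3:
        return "pinch"  # index curled toward the palm while the rest stay open
    return None

# Example: index finger curled, other fingers relaxed -> likely a pinch.
print(infer_thumb_gesture(0.6, 0.1, 0.1, 0.2))  # -> "pinch"
```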