version 1.1

VR Infinite Gesture is a plugin for the Unity game engine.
It uses neural networks to track gestures in VR.
It's smart, so you don't have to be.

/// GETTING STARTED ///


In this tutorial we'll get the plugin set up and try out recording, training, and detecting gestures with a new neural network.

You can also watch this tutorial as a video LINK

There's also an interactive version of this tutorial included with the plugin. 
You can find it by importing the tutorials package located here:
Assets/Edwon/VR/Gesture/Tutorials/Tutorials.package
1. Set up VR
2. Import VR Infinite Gesture from the Asset Store
3. Choose VR type
4. Make a new scene and set it up for VR
5. Create a new neural network
6. Add some gestures to the network
7. Hit play and enter VR
8. Record one "Line" gesture example
9. Record more "Line" gesture examples
10. Record some "Rainbow" gesture examples
11. Edit the gesture examples
12. Process the neural network
13. Check out the brain
14. Detect some gestures
15. Yay!

/// USING THE EXAMPLES ///


In this tutorial we'll set up the example scenes and example data and see how they work.

You can also watch this tutorial as a video HERE
Don't worry, it's the exact same content as this written tutorial!
1. Import the Examples package
2. Set up the "Example 1" scene for VR
3. Try out "Example 1" in VR
4. Take a look at the "Example 1 Player" script
5. Set up the "Example 2" scene for VR
6. Try out "Example 2" in VR
7. Take a look at the "Example 2 Player" script

/// HOW TO BUILD ///


In this tutorial we'll make a build of your game using VR Infinite Gesture.
1. Disable "Show VRGUI"
2. Turn on "Begin In Detect Mode"
3. Build and play!

/// INTEGRATION ///

There are two ways to integrate VR Infinite Gesture into your game.
1. Integrate using C#
2. Integrate using PlayMaker

/// EVENTS ///

These events are broadcast by the GestureRecognizer at key moments.
Subscribe to them in your scripts.
GestureDetectedEvent ( string gestureName, double confidence, Handedness hand, bool isDouble ) 
- gestureName - the name of the gesture, as you recorded and trained it into the network
- confidence - how confident the neural network is that the classification was correct
- hand - the hand that performed the gesture (ignore this if isDouble is true)
- isDouble - true if this was a double handed gesture done with both hands
When the user performs a gesture, the neural network attempts to classify it and the GestureRecognizer class broadcasts this event for you to subscribe to.

When the neural network isn't "confident" enough about the user's gesture, it will broadcast a GestureNull event. You can use this event to give negative feedback to the player that they didn't correctly perform a gesture.

Note that this event fires even when only one hand performs a gesture; in that case isDouble will be false. If you only want your code to react when both hands performed a double handed gesture, make sure to check that "isDouble" is true.
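Here's a rough sketch of a subscriber script. The event names and parameters come from this page, but the Edwon.VR.Gesture namespace and the exact subscription syntax (static events on GestureRecognizer) are assumptions; check the example scripts included with the plugin for the real wiring.

    using UnityEngine;
    using Edwon.VR.Gesture; // assumed namespace, adjust to match the plugin

    public class GestureListener : MonoBehaviour
    {
        void OnEnable()
        {
            // Subscription syntax assumed; see the included example scripts.
            GestureRecognizer.GestureDetectedEvent += OnGestureDetected;
        }

        void OnDisable()
        {
            GestureRecognizer.GestureDetectedEvent -= OnGestureDetected;
        }

        void OnGestureDetected(string gestureName, double confidence, Handedness hand, bool isDouble)
        {
            if (isDouble)
                Debug.Log("Both hands did: " + gestureName + " (" + confidence + ")");
            else
                Debug.Log(hand + " hand did: " + gestureName + " (" + confidence + ")");
        }
    }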
GestureNull ( string nullDebugInfo );
- nullDebugInfo - a debug string describing what the network thought the gesture was
To explain what's going on at a lower level here: 
The neural network never actually returns a null; instead, it always thinks you've done some gesture, even if it didn't resemble any of the previously recorded/trained gestures.

Therefore, we use the "Confidence Threshold" setting on the VRGestureManager as a way to detect null gestures. Any gesture that returns a lower confidence than the threshold will cause the VRGestureManager to broadcast a GestureNull event.

We also give you the string “nullDebugInfo” so that you can see what’s going on when a gesture is null. This can help you improve your tracking data.

For example, if a gesture is often returning null, you could take a look at its confidence in the nullDebugInfo and re-record / re-train the neural net with improved data to get the confidence above the "confidence threshold".
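Adding to the GestureListener sketch above, a handler for this event might look something like this (what feedback you give is up to you):

    // Hedged sketch: log the debug string and give the player
    // negative feedback that the gesture wasn't recognized.
    void OnGestureNull(string nullDebugInfo)
    {
        Debug.Log("Gesture not recognized: " + nullDebugInfo);
        // e.g. play a fizzle sound or particle effect here
    }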
StartCapture ();
This is called when the user first presses the record button down.
ContinueCapture ( Vector3 capturePoint );
- capturePoint - the point that was captured just now
This is called each time a point on the line is captured.
When the user holds down the record gesture button, we begin capturing points.
Each time a point is captured, it's added to the line that eventually becomes the gesture.
The Vector3 capturePoint is that point in world space.
StopCapture ();
This is called when the user lets go of the record button and the gesture has been captured.
This is always called at the end of a gesture capture, whether or not the gesture was recognized.
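Put together, the three capture events can drive visual feedback while the player records. This sketch draws the captured points with a LineRenderer; the wiring of the events to these methods is assumed, as above.

    using UnityEngine;

    public class GestureTrail : MonoBehaviour
    {
        public LineRenderer line; // assign in the inspector

        void OnStartCapture()
        {
            line.positionCount = 0; // clear the previous trail
        }

        void OnContinueCapture(Vector3 capturePoint)
        {
            // capturePoint is in world space (see ContinueCapture above)
            line.positionCount += 1;
            line.SetPosition(line.positionCount - 1, capturePoint);
        }

        void OnStopCapture()
        {
            // the full gesture line has been captured; fade it out, etc.
        }
    }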

/// IMPROVING DETECTION ///

One of the coolest things about using a neural network for gesture detection is the speed of iteration. You can quickly record more examples, edit them, train them into the network, and try them out! Repeat this loop a few times and you can very quickly reach highly reliable accuracy.

RECORD, EDIT, TRAIN, DETECT

That’s the loop! We designed the VR Gesture UI around this so that you can really quickly hop into VR and improve your gesture detection.

Here are some extra things to think about as you record gestures.
  • The gesture recognizer is aware of the position, orientation, and direction of the gesture, as compared to your head.

    For example, a line drawn on the left side of your head is different than a line drawn on the right side of your head. A straight line drawn in front of you is different than a straight line drawn in front of you while looking up.
  • The beginning and end of the gesture are important.

    For example, if you wanted the neural net to classify circles drawn clockwise and counter clockwise, you would need to give it an equal number of examples of both types. So, record 50 takes of clockwise and 50 takes of counter clockwise under the same gesture called "Circle".
  • Use multiple gestures with similar names to solve complex shapes.

    With some shapes, the number of possible ways to draw them can get very large, making the chance of error much higher. You can help prevent this by recording multiple gestures with similar names, e.g. Circle Clockwise, Circle Counter Clockwise, etc., but having your code react to all of them in the same way (see the sketch after this list).
  • Make sure each gesture has about the same number of recordings.

    For example, if you have 100 recordings of Circle, make sure there are about 100 recordings of each of the other gestures you want to detect as well. The counts don't have to match exactly; just keep in mind that the neural net can more easily recognize gestures that have more examples, so to increase a specific gesture's chances of being detected, record more examples of it.
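To make the similar-names tip concrete, here's a hypothetical handler (same signature as the GestureDetectedEvent handler above) that reacts to two similarly named gestures in the same way:

    // Hypothetical sketch: "Circle Clockwise" and "Circle Counter Clockwise"
    // are separate gestures in the network, but the game treats them
    // as one logical "Circle".
    void OnGestureDetected(string gestureName, double confidence, Handedness hand, bool isDouble)
    {
        switch (gestureName)
        {
            case "Circle Clockwise":
            case "Circle Counter Clockwise":
                CastCircleSpell(); // hypothetical game reaction
                break;
        }
    }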

VR GESTURE SETTINGS

The VRGestureSettings window can be opened from the Tools menu under:

Tools > VR Infinite Gesture > Settings

From the settings you can see all of your neural networks, rename gestures, process the neural networks, and adjust any of the plugin's preferences.

SETTINGS

Vr Type
enum
Choose OculusVR or SteamVR (same as OpenVR).
When you select a Vr Type, the plugin attempts to automatically set up your project for VR by adding a define to Player Settings.
The define will be either EDWON_VR_OCULUS or EDWON_VR_STEAM.
This ensures that only platform specific code is compiled.
Make sure you have either Oculus Utilities or the SteamVR plugin installed before choosing a Vr Type.
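You can use the same defines in your own scripts to keep platform specific code out of the other platform's build, for example:

    #if EDWON_VR_OCULUS
        // code that depends on Oculus Utilities goes here
    #elif EDWON_VR_STEAM
        // code that depends on the SteamVR plugin goes here
    #endif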

Begin In Detect Mode
bool
This is what you'll want to enable once you start using VRGestureManager in your game. It makes sure that the VRGestureManager enters detect mode automatically and fires events when it detects a gesture from the player.
Make sure to select the neural net you'd like to detect with as well using the popup that appears once "Begin in Detect Mode" is enabled.

Confidence Threshold
float (0-1)

On the VRGestureManager in your scene, there is a setting called "Confidence Threshold". If a gesture's confidence is above this threshold, the manager will broadcast the "GestureDetected" event; if its confidence is below the threshold, it will broadcast a "GestureNull" event instead. We've found that a confidence threshold of about 0.98 is the sweet spot, but it really depends on your gestures and how many you have; you may want to adjust it anywhere between 0.90 and 0.99.
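Conceptually (this is pseudocode based on the description above, not the plugin's actual source), the threshold gates which event fires:

    // the network always returns its best guess plus a confidence
    if (confidence >= confidenceThreshold)
        BroadcastGestureDetected(gestureName, confidence, hand, isDouble);
    else
        BroadcastGestureNull(nullDebugInfo); // below threshold, treat as null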

Minimum Gesture Size
float
In testing we found that if the neural net attempts to classify very small gestures, it will almost always classify them with very strong confidence, which results in it thinking the user is trying to shoot a fireball when they maybe just accidentally pressed the gesture button for a second. To prevent this, the Minimum Gesture Size setting stops the neural network from even trying to guess at a gesture that is below that size.

Gesture Sync Delay
int (milliseconds)
This has to do with the way double handed gestures work.
Double handed gestures are really just a left and a right hand gesture done at about the same time. This delay decides how close together (in time) the left and right hands must finish their halves of a double handed gesture for it to be recognized as one.
Lowering this value means players must finish the left and right hand gestures at almost the same time; increasing it means they can finish them farther apart (in time).

ROADMAP

These are the features we've thought about for future versions of VR Infinite Gesture.

None of these features are guaranteed; we just wanted to show you what we've been thinking about.

Please contact us if there's a specific feature you'd really love to have. We take requests seriously and each email counts as a vote towards that feature.

Strikethrough features have already been implemented, yay!
  • double handed gestures
    record and detect gestures with two hands
  • head orientation options
    track gestures relative to head or not relative to head
  • option for training to be spread out over time to prevent in-game performance slowdown
  • ability for users to call events that start and stop gesture capture/recording etc...
    this is currently handled through button presses on the controller only
  • system for players to record/train gestures in game
  • continuous gesture detection
    currently gestures can only be detected by holding down the button and releasing it when the gesture is finished
  • neural net fidelity options
    we currently always capture 11 points on a given gesture's line
    this number is currently difficult to change, as it's synced to the neural net's size
    but we could potentially provide an option for this if enough people want it
    for most gestures, 11 points are enough
  • neural network can listen to velocity, not just shape
    currently the neural network is only looking at the shape of the gesture

CONTACT US

Please contact us with any questions AFTER reading the tutorials and documentation carefully!

If you couldn't find the answer in the docs, please be specific in your message. Tell us exactly what you're having a problem with, copy/paste error messages, what's going on in your scene, etc... aka help us help you.