Previously, we saw how to set up hand tracking, and we learned about grab, poke, and ray interactions. If you haven't checked out the first part of this series, we highly recommend starting there. In this blog, we'll learn how to create our own hand pose and use the pose detection component to detect it. Upon detection, we'll perform some action.
Before we start:
You need to have a good understanding of how the hand pose is detected. You can check out the documentation here. To summarize:
There are different hand poses we can make, like thumbs up, thumbs down, and rock, paper, scissors. If you have tried out the sample scene that comes along with the SDK, you might have seen them already. But if you want to create your own, you need to understand the basics of how poses are detected. Hand pose detection happens in two steps: shape recognition and transform recognition.
The shape recognizer component considers the state of the finger joints, while the transform recognizer component considers the hand's orientation in 3D space. Each finger is configured by listing the desired state of one or more Finger Features: Curl, Flexion, Abduction, and Opposition. To learn more, you can check the documentation on shape recognition.
The transform recognizer component checks the position and orientation of the hand: for example, whether the palm is facing up, down, toward the face, or away from it. It can track other parts of the hand too, such as the wrist and fingers. To learn more, you can check the documentation on transform recognition.
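To make the flow concrete, here's a minimal sketch (not from the original post) of how code downstream of these recognizers can consume the result. Both recognizer active-state components expose their result through the SDK's IActiveState interface in the Oculus.Interaction namespace; the PoseStateLogger class itself is hypothetical, purely for illustration:

```csharp
using Oculus.Interaction; // IActiveState lives here in the Interaction SDK
using UnityEngine;

// Hypothetical debug script: polls a component that implements IActiveState
// (e.g. a shape or transform recognizer active state, or a group combining
// both) and logs whenever the pose starts or stops being detected.
public class PoseStateLogger : MonoBehaviour
{
    // Drag in any component that implements IActiveState.
    [SerializeField] private MonoBehaviour _activeStateSource;

    private IActiveState _activeState;
    private bool _wasActive;

    private void Awake()
    {
        _activeState = _activeStateSource as IActiveState;
    }

    private void Update()
    {
        if (_activeState == null) return;

        // Active is true only while the configured conditions are met.
        bool isActive = _activeState.Active;
        if (isActive != _wasActive)
        {
            Debug.Log(isActive ? "Pose detected" : "Pose lost");
            _wasActive = isActive;
        }
    }
}
```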
There are various hand poses we can create; for this blog, I'm choosing the Spiderman hand pose. Creating a pose is simple: it's a scriptable object that comes with the Oculus SDK, and all we need to do is configure it to match the pose we want. So, to create a pose:
With that, we have created a pose that can be detected. Next, we'll add the components required for pose detection.
For pose detection to work correctly, we have to make sure the right components are added and the correct parameters are referenced. So, make sure you follow the exact steps given below to add pose detection to the scene.
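The wiring boils down to one idea: the pose counts as detected only when the shape state and the transform state are active in the same frame. The SDK ships a component that combines active states for you, so you won't write this yourself; purely as an illustration of what the combination amounts to (hypothetical class, not the SDK's implementation), the logic is:

```csharp
using System.Collections.Generic;
using Oculus.Interaction; // assumption: IActiveState from the Interaction SDK
using UnityEngine;

// Illustration only: reports "active" when every referenced state
// (e.g. the shape recognizer state AND the transform recognizer state)
// is active at the same time.
public class AllStatesActive : MonoBehaviour, IActiveState
{
    // Drag in components that implement IActiveState.
    [SerializeField] private List<MonoBehaviour> _stateSources;

    public bool Active
    {
        get
        {
            foreach (var source in _stateSources)
            {
                var state = source as IActiveState;
                if (state == null || !state.Active)
                    return false; // any inactive condition fails the pose
            }
            return true;
        }
    }
}
```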
Before we use the events to carry out some tasks, let's add debug visualizers to see if the pose detection is happening correctly.
The SDK comes with a few debug visualizer prefabs we can use to visualize the hand pose. The visual cue gives us a better understanding of how the hand pose is being recognized, allowing us to quickly spot and debug any problems.
Now that we have verified that the hand pose works correctly, disable the PoseDebug GameObject and move on to the next section.
We can make use of the Unity events exposed by the Selector Unity Event Wrapper component to carry out tasks. For now, we'll disable the Cube GameObject when the pose is detected and enable it again when the pose is no longer detected: reference the Cube in the wrapper's When Selected event and call GameObject.SetActive with the checkbox cleared (false), then do the same in When Unselected with the checkbox ticked (true).
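If you'd rather route the events through a small script than call SetActive directly from the Inspector, a minimal sketch could look like this (WhenSelected/WhenUnselected are the wrapper's UnityEvents; the helper class and its names are hypothetical):

```csharp
using UnityEngine;

// Hypothetical helper: wire HideTarget to the wrapper's When Selected event
// and ShowTarget to When Unselected in the Inspector, so the Cube vanishes
// while the pose is held and reappears when it's released.
public class PoseToggleTarget : MonoBehaviour
{
    [SerializeField] private GameObject _target; // e.g. the Cube

    public void HideTarget() => _target.SetActive(false); // pose detected
    public void ShowTarget() => _target.SetActive(true);  // pose lost
}
```

Either approach does the same thing; the script version just keeps the Inspector wiring down to two method references.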
With that, we have finished setting up our scene with pose detection and the task to be carried out when the pose is detected.
Remember that we added the enable/disable task only for the left hand. So when the pose is detected on the left hand, the cube should disappear, and when the pose is no longer detected, it should reappear. Hit the play button and test it out.
In this blog post, we saw how to create our own hand pose and use events to perform tasks when it's detected. Pose detection can be used in many ways: opening menus, locomotion, simple games, learning sign language, and more. Make sure to create your own poses and build some amazing experiences.
If you've enjoyed the insights shared here, why not spread the word? Share the post with your friends and colleagues who might also find it valuable.
Your support means the world to us and helps us create more content you'll love.