Virtual reality is all about immersion. One way to achieve it is to offer experiences in your virtual world that users can relate to from the real world.
In this blog post, you will learn how to create the experience of taking regular pictures and selfies in VR. If you hand a phone and a selfie stick to someone, it is almost certain that they will mount the phone on the selfie stick and try to take pictures. This holds true in virtual reality as well! You can create the best experience in VR by making sure that the selfie stick, the phone, and their functions work just as they do in real life.
This is one of the simplest mechanics to implement: it requires a few components from the XR Interaction Toolkit and four simple scripts. But before we begin, let's look at the prerequisites.
You must have basic knowledge of installing the XR Interaction Toolkit, its components and properties, working with prefabs, and the fundamentals of C#.
You should know how to set up a scene with a ground plane and an XR Rig. If not, this tutorial will help you get started with the XR Interaction Toolkit. Furthermore, you can learn about prefabs in this YouTube video.
Note: This was built and tested in Unity version 2021.1.14 and in the XR Interaction Toolkit version 1.0.0-pre.8.
While developing this project, I used the XR Device Simulator for testing. It speeds up development, because putting the headset on and taking it off, reaching for your controllers, and staying aware of your surroundings is time-consuming. For someone like me who wears glasses, that process can also be inconvenient. In the end, though, once all the features are incorporated, everything should be tested with the headset.
In this section, we'll set up the model by adding a few components to it from the XR Interaction Toolkit. We'll also learn how to use a render texture to display the camera's output.
By the end of this section, we will have a selfie stick that can be grabbed and a "realistic" phone which can be attached to the selfie stick.
Let's start by importing the prefabs for the selfie stick. You can download the model from here.* Or you can use any asset of your choice.
*( CC: "Selfie Stick" (https://skfb.ly/6WT6V) by Mason is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/). )
Once you have imported the models, follow the steps below:
Note: The transform values depend on the scale and global position of your scene. The values shown in the images/gifs below might not match yours, so adjust the transforms to suit your scene.
Add the XR Grab Interactable component to both GameObjects, SelfieStick and Phone. A Rigidbody component is added automatically, since it's required by XR Grab Interactable.
When the phone is brought close to the holder of the selfie stick, it should snap onto the stick. To do that, we need to make use of the XR Socket Interactor component.
The GameObject Phone, in its current state, is just a 3D model. To make it work like a real phone, we need to add three things: two cameras and a quad. The two cameras act as the front and back cameras, while the quad forms the display for the camera's output.
Note: This can also be done in 3D modeling software like Blender. If you are comfortable with that, feel free to build it there and import it.
Let's implement this by creating two new GameObjects as children of the Phone and naming them "BackCamera" and "FrontCamera".
Note: The larger the render texture, the more processing it requires. This can cause momentary lag, so you need to find the optimal size.
<div class=callout><div class="callout-emoji">💡</div><p style="margin-bottom:0px;">Note: There is also another way of doing this. You can drag and drop the render texture onto the GameObject Display, and Unity will automatically create a Material for you.</p></div>
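If you'd rather wire the render texture up from code than through the Inspector, a minimal sketch (the field names and the 1024×512 size are illustrative, not from the original setup) might look like this:

```csharp
using UnityEngine;

public class DisplaySetup : MonoBehaviour
{
    public Camera backCamera;        // the phone's back camera
    public Renderer displayRenderer; // the quad that acts as the screen

    void Start()
    {
        // A modest resolution keeps the per-frame rendering cost low.
        RenderTexture rt = new RenderTexture(1024, 512, 16);
        backCamera.targetTexture = rt;
        displayRenderer.material.mainTexture = rt;
    }
}
```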
To make sure the phone and selfie stick are in the right orientation every time they're grabbed, we can make use of the Attach Transform feature of the XR Grab Interactable component.
Just like in the real world, the VR Phone should have the ability to click and save pictures. For that, we need to write a few scripts.
Let us start by creating the functionality that will save the picture to your local drive. For that, create a new C# script and name it SavePicture.
The following script takes the given camera and saves the output that is seen on the render texture.
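A minimal sketch of such a script, assuming the camera already renders into a render texture (the field and file names here are illustrative), could look like this:

```csharp
using System.IO;
using UnityEngine;

public class SavePicture : MonoBehaviour
{
    // The camera whose output should be saved (assumed to have a targetTexture assigned).
    public Camera pictureCamera;

    public void Save()
    {
        RenderTexture renderTexture = pictureCamera.targetTexture;

        // Make sure the camera has rendered a fresh frame into its render texture.
        pictureCamera.Render();

        // Copy the render texture's pixels into a readable Texture2D.
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = renderTexture;
        Texture2D picture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.RGB24, false);
        picture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
        picture.Apply();
        RenderTexture.active = previous;

        // Encode to PNG and write it to the local drive.
        byte[] bytes = picture.EncodeToPNG();
        string path = Path.Combine(Application.persistentDataPath, "Picture_" + System.DateTime.Now.Ticks + ".png");
        File.WriteAllBytes(path, bytes);
        Destroy(picture);
    }
}
```

Application.persistentDataPath is used because it's writable on every platform, including standalone VR headsets.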
Now let's create the functionality that will detect the controller's trigger press to click the picture. To do so, create a new C# script and name it ClickPicture.
The following script takes the input from the controller and calls the function from the SavePicture script to save the picture on the local drive.
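A sketch of this, using the XR input subsystem to detect a trigger press on the right-hand controller (the SavePicture reference and its Save() method name are assumptions carried over from the previous step), might look like this:

```csharp
using UnityEngine;
using UnityEngine.XR;

public class ClickPicture : MonoBehaviour
{
    // Reference to the SavePicture component (the Save() method name is assumed).
    public SavePicture savePicture;

    private bool wasPressed;

    void Update()
    {
        InputDevice device = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        // Save a picture only on the rising edge of the trigger press,
        // so holding the trigger doesn't spam pictures every frame.
        if (device.TryGetFeatureValue(CommonUsages.triggerButton, out bool isPressed))
        {
            if (isPressed && !wasPressed)
                savePicture.Save();
            wasPressed = isPressed;
        }
    }
}
```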
Alright, we have the SelfieStick, the Phone, and the additional scripts ready. Now it's time to stitch them all together.
To simplify the mechanics, the back camera is turned on and the front camera is turned off by default. When the Phone is placed onto the holder of the selfie stick, the front camera turns on and the back camera turns off, and vice versa when the Phone is removed.
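One way to sketch this is a small component whose methods are wired to the socket's Select Entered and Select Exited events in the Inspector (the component and field names here are illustrative):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

public class SwitchCamera : MonoBehaviour
{
    public Camera frontCamera;
    public Camera backCamera;

    // Wire this to the socket's Select Entered event in the Inspector.
    public void OnPhoneAttached(SelectEnterEventArgs args)
    {
        frontCamera.enabled = true;
        backCamera.enabled = false;
    }

    // Wire this to the socket's Select Exited event in the Inspector.
    public void OnPhoneRemoved(SelectExitEventArgs args)
    {
        frontCamera.enabled = false;
        backCamera.enabled = true;
    }
}
```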
With this, we have finished setting up our phone and selfie stick with all of their functionality in VR. Go ahead and test it now!
You might observe that while using the front camera, the display shows a mirrored image. Rather than flipping the camera itself, we can correct this by flipping the display.
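A minimal way to flip the display is to negate the X scale of the quad, for example whenever the cameras are switched (a sketch; the component name is illustrative):

```csharp
using UnityEngine;

public class FlipDisplay : MonoBehaviour
{
    // Attach to the display quad; call this each time the active camera switches.
    public void Flip()
    {
        Vector3 scale = transform.localScale;
        scale.x *= -1f; // mirror the quad horizontally
        transform.localScale = scale;
    }
}
```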
Now the display renders the front camera's image correctly.
There is another issue you might notice: when the selfie stick is moved, the phone passes through the holder. To fix this, the GameObject Phone has to be made a child of the GameObject SelfieStick. It's equally important to unparent the Phone when it's removed from the holder.
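This can be sketched with another pair of methods wired to the same socket events, using the interactable reported by the event arguments (the field and component names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

public class PhoneParenting : MonoBehaviour
{
    // The SelfieStick's transform, assigned in the Inspector.
    public Transform selfieStick;

    // Wire to the socket's Select Entered event: parent the phone to the stick.
    public void OnPhoneAttached(SelectEnterEventArgs args)
    {
        args.interactable.transform.SetParent(selfieStick);
    }

    // Wire to the socket's Select Exited event: detach the phone again.
    public void OnPhoneRemoved(SelectExitEventArgs args)
    {
        args.interactable.transform.SetParent(null);
    }
}
```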
Congrats! We have successfully fixed the issue.
This tutorial taught you not only how to take and save pictures in VR, but also how to use a RenderTexture to display a camera's output. So what can you do next?
You can extend this project by adding or modifying some elements. For example, you can add a timer for taking pictures, or change the size and orientation of the phone/selfie stick. You could also create a whole new project with a slightly different mechanism, for example a tablet with two UI buttons: one to take pictures and another to switch between the two cameras.
There are many other things you can do with the render texture and a camera as well, like creating a mirror to see your avatar or casting your phone on a big screen, etc.
One main concern with VR features is making them as smooth as possible. Anything that doesn't work perfectly or has hiccups breaks the immersion. But keep in mind that in VR, things can be completely different than in real life.
If you've enjoyed the insights shared here, why not spread the word? Share the post with your friends and colleagues who might also find it valuable.
Your support means the world to us and helps us create more content you'll love.