There’s been a lot of hype about game engines with VR editors in recent months. The first was Unreal Engine, showcasing a solution for the HTC Vive (because at that time it was the only headset with proper VR controllers), and then of course Unity decided to do something similar, announcing a VR editor with Vive and then Oculus support. By “VR editor” I mean the editor of the game engine that, instead of running on your flat screen with you interacting via mouse and keyboard, runs in your VR headset in 3D, with you manipulating objects with your VR controllers.
While everyone went crazy about this idea, I’ve always been very skeptical about this kind of interaction. The main reason is this: I use Unity for maybe 8 hours a day, comfortably seated at my desk, with my hand on the mouse, moving fast between menus and shortcuts. With this VR editor I would have to stand (for 8 hours?) while moving my hands in the air continuously (really?) to obtain exactly the same results. Furthermore, in VR we have no UX standards… to be more precise, we have no idea how to make an efficient UI, so at present we can only have bad UIs. Using a game editor in VR would leave me tired and frustrated. I agree that a VR mode is necessary, but more for fine tuning or for testing the game from the inside than for doing all the development. For the near future I envision something more like BigScreenVR (or Virtual Desktop), where you develop on a flat version of Unity inside VR (so you don’t have to keep putting your headset on and off) using mouse+keyboard, and then do the fine tuning in full 3D VR, using your controllers.
Anyway, I’ve always wanted to try it to see whether I was right. I was foolishly waiting for a VR update of Unity when I suddenly realized that UnityVR is a completely different thing. You will never have a VR mode built into your Unity 5.5… you have to find and download it. UnityVR can be found at this repository, where you can also find the link to the guide with all the installation steps (RTFM, please!).
To try UnityVR you have to install a special version of Unity 5.4 (Unity 5.4 has a special version for HoloLens, one for Daydream, and one for VR… you’d need a separate 1TB hard drive just for all the Unity 5.4 versions…) and then download the UnityVR package. This is the thing I found strangest: UnityVR IS A PACKAGE that you add to the projects where you want to use the VR mode. This seems like nonsense to me, even if the Unity folks surely had their reasons for this choice.
So, to try UnityVR, you have to:
- download Unity 5.4 special VR version;
- download the UnityVR package;
- create a new project (or load an existing one… but I strongly recommend trying this editor with a project you can throw away, because… well, you’ll understand);
- import in this project your headset plugin (e.g. Oculus Utilities in my case);
- import the UnityVR package;
- select Window -> EditorVR;
- put on your headset, grab your VR controllers, and enjoy.
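For clarity, the resulting project layout from the steps above can be sketched as a shell session. Everything here is a placeholder (project name, folder names are my own assumptions): in the real flow you run the Unity 5.4 VR installer and import the headset plugin and the UnityVR package from inside the Editor, which is what this sketch only simulates with local copies.

```shell
# Hedged sketch of the setup above, with placeholder paths.
# The Unity 5.4 VR build is installed by its own installer; the headset
# plugin and the EditorVR package are imported from inside the Editor.

PROJECT="EditorVRSandbox"   # a throwaway project, as recommended above

# Unity expects an Assets folder at the project root
mkdir -p "$PROJECT/Assets"

# pretend the downloaded EditorVR package was unpacked here...
mkdir -p EditorVR

# ...and dropped into the project (in practice: Assets -> Import Package)
cp -r EditorVR "$PROJECT/Assets/EditorVR"

ls "$PROJECT/Assets"        # the EditorVR folder now sits in the project
```

Once the package sits in the project’s Assets, the Window -> EditorVR menu entry appears and you can jump in.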
While you “play”, you can also activate a preview window so everyone can see what you’re doing. Unity opens this window for you, but you have to push the “Toggle device view” button to show the preview. My advice is to make it bigger and move it to your secondary screen, so you can work with it more comfortably.
I read all the docs before using the VR editor and then started my adventure. The first 15 seconds were “WOOOOOOAH, this is the future!”. Seeing your Touch controllers in the editor scene, with a ray cast from them, is amazing!
Then, as the guide suggests, I opened the menu on one of the controllers and selected that I wanted to draw a capsule, and then a cube. To open a menu, you select with one controller the little Unity icon on the other controller. This is a very straightforward way to open a contextual menu. On the “clicked” controller, a cube-shaped menu appears, and with your thumbstick you can switch between the different “tabs” (actually faces, since it’s 3D) of the menu. It’s all super-simple.
So I generated my first capsule by pulling the index trigger, and seeing it pop out I was like “WOOOOOOAH, this is the future!”. Really exciting. With the same finger I selected the just-generated capsule and moved/rotated it with my hand, just as if it were really there! “WOOOOOOAH… this is… ok, you got it”. Then I also generated a cube, and more enthusiasm flowed in my veins.
Ok, if you’re enthusiastic about the VR editor, stop reading here, so you keep your happy mood. Otherwise, read on to see the sad truth, i.e. that “this is the future, but not the present“.
Because everything that came after that was pure pain. Raycast selection never worked for me: I tried pointing at an object and then pressing every possible button, but this resulted in nothing apart from me looking like an idiot. So I could only select nearby objects. Furthermore, making the index trigger a “trigger for everything” is a poor choice. You select with the index trigger, but with the same trigger you also repeat the last selected action. So whenever I pulled the index trigger, if the editor was in doubt, it generated a cube. Either I was very precise in selecting the action/object, or I would generate a cube. In the end my scene was filled with cubes of every size, everywhere. I was overwhelmed by cubes. This is the scene as seen from the inside after 10 minutes of usage.
And this from the flat editor view.
When you manage to select an object, a contextual menu pops up on the controller that selected it. With your thumbstick you pick the action of interest… and this is not so comfortable, because the Touch thumbsticks are not ideal for this. Within this menu there are lots of actions you have no idea what they do, because there are no tooltips, only icons (the Unity folks know this and underline it very clearly in the docs). Even if, thanks to some kind of spiritual connection with the Unity devs, you manage to understand what the buttons do (or, more likely, you press random buttons and see what happens), the buttons don’t always do what they promise. I selected the scale handle, then tried to scale the object and… nothing happened. Sometimes the object deselected itself, other times it just didn’t scale. The same goes for rotation and other stuff. So, in the end, you can only generate cubes and never modify them 🙂
The editor’s performance is poor: I have a great CPU and GPU, yet after creating some cubes and windows it started having performance issues (it stopped running smoothly). Again, this is a known issue: in fact, the docs advise closing everything you don’t need.
Movement happens through standard locomotion (you move with the thumbsticks… welcome, motion sickness) or teleportation (I hate it… sometimes I pressed the B button by chance and was then forced to teleport somewhere). All movement works like in an FPS, on an ideal floor (but aren’t we in 3D?). There’s also a way to move up and down, but it requires a mechanic I only partially understood from the docs.
If you’re wondering how to use the environment without the Editor windows (Inspector, etc…), well, Unity has a surprise for you: Workspaces. Workspaces are like 2D panels representing the Editor windows, and you can put them anywhere in your 3D scene. They’re not exactly 2D: they’re thick quads, like big tablets that you can place anywhere near you to manipulate your scene. Here you can see a screenshot of the Profiler workspace.
You can interact with these panels as you do in the standard 2D editor: you can instantiate prefabs, change the object hierarchy, etc… There’s also a super-interesting new panel: the MiniWorld. The MiniWorld is a small representation of your complete Unity scene that lets you manipulate all its objects while moving inside just a little space. It looks like an incubator inside which you can see a miniature version of your objects. This is super-useful, since otherwise you would have to navigate to many different locations of your big scene to change different objects, which is tiresome and awkward. With the MiniWorld you can make all the modifications to all objects from a single place, as if you were in standard Unity manipulating the Scene window.
Workspaces are cool, except for one problem: you have to interact with them. This means pulling the index trigger… which results in generating bazillions of new cubes inside the scene.
Furthermore, there are still a lot of problems in interacting with them (sometimes it’s hard to trigger commands, or simply to understand what you’re supposed to do) and they have performance issues.
In the end I used UnityVR for something like 45 minutes, and my final verdict is negative. But I like this project a lot… so let me say something in defense of the Unity devs:
- Maybe it was made for the Vive controllers, and the port to Touch isn’t perfect yet;
- The documentation says everywhere that this is experimental; furthermore, in the docs they themselves highlight many of the problems I had;
- They’re inventing a completely new UX system, and this is super-difficult and of course requires time;
- I’ve only used it for a short time: maybe by reading the docs super-carefully and using it for days I would have learned to use it properly (but, honestly, as a wise man said: “UI is like a joke: if you have to explain it, then it’s bad”);
So, as with everything in VR, we have to be patient. UnityVR will come and will be super-awesome; we just have to give the developers the time to implement it. What they’re trying to do is incredibly difficult, and I guess it’s all a trial-and-error process.
I hope you liked this review (as always, please like and share it), and after reading it I invite you to try UnityVR yourself to form your own opinion about it!
(Header image by VRGamer.it)