Monday 9 February 2015

Unity Cardboard test #2

Following on from my initial test, in which I constructed a VR environment for Google Cardboard using the Unity game engine, I've been playing with capturing user input on the Cardboard system.

For this purpose, I've knocked up a test room which I can populate with different objects.  Initially I've thrown a chair in there, to let me test whether an object has focus (i.e. is dead centre of the viewport).

Test room design, using standard Unity shaders.  Approx. 1000 triangles (total) with four 512x512 textures.
Unity has a special library of low-cost mobile shaders & assets, optimised for smartphones & tablets, which have weak graphics & processing facilities compared with PCs.  Early iPhones could only manage somewhere in the region of 20 draw calls per frame; to put that in context, a standard PC-style background skybox will eat up something like 6-10 draw calls on its own.

The first thing I wanted to test was how Unity's lightweight mobile shaders compared against the standard shaders, because adding a second camera for stereoscopic vision is going to add some extra load (even if each camera's viewport is half the size).

The image above shows the version using standard shaders and a point light source, which pulled around 16-32 draw calls.  On my battered old Galaxy Note it ran OK, but was a little sluggish to respond.

The image below uses the mobile shaders, which brought things down to 8-16 draw calls.  The models & materials could be optimised further, but I just wanted a ball-park figure.  As you can see, there's a considerable difference in appearance, the chief one being that any lighting effects will need to be baked into the environment textures.
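For anyone wanting to repeat the comparison, the swap itself is trivial.  The sketch below is my own code (not from the project files), pointing every material in the scene at Unity's built-in Mobile/Diffuse shader in one go:

    using UnityEngine;

    // Rough sketch: point every material in the scene at Unity's built-in
    // Mobile/Diffuse shader, so the standard and mobile versions can be
    // compared quickly.  Attach to any object in the test room.
    public class UseMobileShaders : MonoBehaviour
    {
        void Start()
        {
            Shader mobile = Shader.Find("Mobile/Diffuse");
            foreach (Renderer r in FindObjectsOfType<Renderer>())
                foreach (Material m in r.materials)
                    m.shader = mobile;
        }
    }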

Same room model but using mobile shaders.
The other thing I wanted to test was the Cardboard SDK API.  Extracting info from Google's demonstration file, I wrote a script which kicked-in when the camera was looking at an object, changing the objects's shader to make it "light up".

Chair has been "lit" to indicate that it is currently selected.
Finally, I added some further script code to respond to the magnetic trigger on the side of the headset.  This would rotate the object by 45 degrees for each 'click', but only while the object is selected.
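Again, a rough sketch of the rotation logic rather than the exact code.  How you detect the magnet pull depends on the SDK version, so the TriggerPulled() helper below is a hypothetical placeholder for whatever your SDK exposes:

    using UnityEngine;

    // Rotate the selected object 45 degrees per trigger 'click'.
    // TriggerPulled() is a placeholder; wire it up to the Cardboard SDK's
    // trigger detection for your SDK version.
    public class RotateOnTrigger : MonoBehaviour
    {
        public bool selected;  // set by the gaze script when this object has focus

        void Update()
        {
            if (selected && TriggerPulled())
                transform.Rotate(0f, 45f, 0f);  // 45 degrees about the vertical axis
        }

        bool TriggerPulled()
        {
            // A screen tap/click works as a stand-in when testing in the editor.
            return Input.GetMouseButtonDown(0);
        }
    }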

A rotated chair.  It's not exactly Halo 5 but it's a start.

All in all, this has been a very successful experiment.  I've gained a feel for how to work with the API, and I now have more ways to interact with the user.  Next up: movement and switching between objects.