Virtual Reality Interfacing: Part 1

Future Colossal Blog

Introduction to Inputs for Virtual Reality

by Jake Lee-High

Virtual Reality Input

While I see lots of discussion around Virtual Reality hardware, there is a real void in the conversation around the challenges and advantages of creating VR content. In our coming blog posts I’d like to share what we learned working on Shadows of Isolation, what I predict for the future of Virtual Reality content, and what challenges and limitations still exist.

First, I’ll start with an in-depth look at the standard inputs for Virtual Reality. In order to do so I’ll have to dive into the hardware conversation a fair amount, but I’m doing so out of a desire to better explain how it affects VR game/experience design. As there is a lot to discuss, I’ll break this into multiple blog posts, starting chronologically, working toward the control scheme we designed for Shadows of Isolation, and eventually discussing where I feel the future of VR control should go and what challenges remain in both hardware and game/experience design.

Inputs for Virtual Reality

Through the development of Shadows of Isolation we explored many different control schemes and forms of input. We ended up developing our own control scheme based on a hacked placement of the Oculus DK2 tracking camera. This allowed for 360° tracked interaction with the virtual environment without the need for a controller, and without the virtual-collision problems present in walking-based control schemes like omni-directional treadmills and the Valve/HTC Vive. Before we get there, though, let’s first dive into the more traditional inputs.

As video game technology is what is driving Virtual Reality today, many of the interfaces and inputs are based on traditional modes of gameplay: keyboard, mouse, and gamepad. These make sense for gaming on a screen, where users are already translating their actions to a separate and removed environment (the monitor). With VR they become a barrier to the persistence that VR content and hardware creators strive to achieve.

Keyboard and Mouse

The keyboard and mouse provide a rich interface with an abundance of input options, but when you can’t see them without removing your HMD they become limited. The best use of the keyboard I’ve seen as an input is in the demo Don’t Let Go, where players must keep two fingers on their keyboard as long as possible while the game designer tries to scare them with different gimmicks. This is effective because it is aware of the limitations of the input device while using it to predict the player’s physical position and interact with it in virtual space. I tend to describe this more as a game mechanic than a control scheme, though.
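To make the mechanic concrete, here is a minimal sketch of the “keep two fingers down” check. It is written in Python with pygame purely for illustration; the demo itself runs on its own engine, and the choice of keys and the fail handling are my assumptions.

```python
# Illustrative sketch only, not code from Don't Let Go: poll the keyboard each
# frame and end the round the moment either of the two required keys is released.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

HELD_KEYS = (pygame.K_f, pygame.K_j)  # assumed keys the player must hold down
running = True
player_let_go = False

while running and not player_let_go:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    pressed = pygame.key.get_pressed()
    # The instant either finger lifts, the player has "let go" and loses.
    if not all(pressed[key] for key in HELD_KEYS):
        player_let_go = True

    # ... render whatever scares are meant to make the player flinch ...
    clock.tick(60)

pygame.quit()
```

Because the two held keys have a known physical location, the same key state also tells the game roughly where the player’s fingers are, which is the part the demo exploits in virtual space.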

vr-dont-let-go

[youtube link="https://www.youtube.com/watch?v=xJSHxJYDXrk&feature=youtu.be"]  

Gamepad

The gamepad has many advantages over the keyboard and mouse. It is designed to be more mobile and usable blindly while still providing a wide range of input possibilities. For most gamers this is a fairly seamless interaction because they have trained their brains to instinctively translate their hand movements into virtual motion. For everyone else it is extremely abstract. Worse still, our thumbs move at a much faster speed than our whole bodies can. This makes the gamepad an easily nauseating experience.
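To show why that translation is so abstract, here is my own sketch (not code from any actual title) of the naive mapping most screen-based games use: the thumbstick’s forward axis drives speed directly, so the player’s virtual body changes speed as fast as a thumb can flick.

```python
# Naive gamepad locomotion, sketched for illustration only.
MAX_SPEED = 13.4  # assumed top speed in m/s (roughly 30 mph)

def naive_speed(stick_forward: float) -> float:
    """Map the stick's -1..1 forward axis straight to player speed.

    One flick of the thumb jumps the player from 0 to MAX_SPEED in a single
    frame while the body feels no acceleration at all: exactly the mismatch
    that makes this nauseating.
    """
    return stick_forward * MAX_SPEED
```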

VR Game Controller

[youtube link="https://www.youtube.com/watch?v=yczrE08LJ-0"]

This was easy to see in the first Virtual Reality demos released with the Oculus Rift. These were mostly ports of existing screen-based video games and didn’t account for users’ sensitivity to virtual motion. Players would quickly run from 0 to 30 mph with a flick forward of the thumb, they would turn too rapidly, and, worst of all, they would strafe. Much of this is solved simply enough: players need slow acceleration but can handle fast speeds (our brains/bodies need to adjust to the change in speed), rotational acceleration needs to be tempered, and strafing should be avoided at all costs.
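Here is a minimal sketch of those fixes as a per-frame update driven by the gamepad’s forward and turn axes. The constants and the class itself are mine for illustration, not values or code from Shadows of Isolation.

```python
# Illustrative comfort-oriented locomotion: ramped linear acceleration toward a
# (fast) top speed, tempered rotational acceleration, and no strafe axis at all.
import math

MAX_SPEED = 13.4      # assumed top forward speed in m/s (~30 mph); fine once reached
ACCEL = 2.0           # assumed linear acceleration in m/s^2; the slow ramp is the point
MAX_TURN_RATE = 45.0  # assumed peak yaw rate in deg/s
TURN_ACCEL = 90.0     # assumed yaw acceleration in deg/s^2, kept deliberately low

class ComfortLocomotion:
    """Per-frame locomotion driven by a gamepad's forward and turn axes."""

    def __init__(self) -> None:
        self.speed = 0.0      # current forward speed (m/s)
        self.turn_rate = 0.0  # current yaw rate (deg/s)
        self.yaw = 0.0        # accumulated heading (deg)

    @staticmethod
    def _approach(current: float, target: float, step: float) -> float:
        """Move current toward target by at most step."""
        if abs(target - current) <= step:
            return target
        return current + math.copysign(step, target - current)

    def update(self, stick_forward: float, stick_turn: float, dt: float) -> None:
        # Forward/backward: ease toward the target speed instead of jumping to it.
        # There is deliberately no lateral (strafe) input at all.
        self.speed = self._approach(self.speed, stick_forward * MAX_SPEED, ACCEL * dt)

        # Rotation: cap angular acceleration so the view never whips around.
        self.turn_rate = self._approach(
            self.turn_rate, stick_turn * MAX_TURN_RATE, TURN_ACCEL * dt
        )
        self.yaw += self.turn_rate * dt
```

The point of the design is that top speed itself isn’t the problem; it’s the sudden change in linear or angular speed that the body objects to, so both axes ease toward their targets instead of jumping.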

The larger problem is that even when these issues are addressed correctly, the act of translating finger movements into virtual body movements takes a conscious effort for the average user. In doing so it pulls the user out of the experience and breaks persistence.

WII-housings

Custom-designed controllers are also abundant, and I feel they will become a big part of typical VR gameplay. The easiest example is virtual gun controllers. These can feel similar to the virtual gun being used by the player, which allows for an experience with a greater level of persistence. This is only effective for standard gameplay methods, though, as creating unique player inputs for every game wouldn’t be commercially viable. An interesting model to look at is the Wii controller and its many controller housings: cheap plastic shells were made that augment the look and feel of the controller to match gameplay. Housings shaped like guns, tennis rackets, and golf clubs more closely approximated the feel of the gameplay than the controller alone could.

I don’t think we will see the death of the gamepad as the dominant control scheme for Virtual Reality for quite some time. More likely, the gamepad will shift forms in order to create the hybrid controller/hand-tracking systems discussed in Part 2.

Coming Next...

Next is where things start getting really exciting! I'll dive into hand tracking, hybrid controllers, and what this means for game/experience design. Following that we'll look at full body tracking systems, virtual/physical experiences, biometrics, and more.

Notes: Header image of the Perception Neuron motion capture system for VR and VFX.