The Problem with Reality
Oculus Story Studio Blog
Posted by John Ballantyne
July 6, 2016

One of the biggest challenges we’ve faced at Story Studio is how to design seamless experiences in positionally tracked VR. Positional tracking is one of the core, and most important, features of Rift. It’s the system by which your physical movements in the real world are mapped to your avatar’s movements in virtual reality. This paradigm has great immersive power. It connects your body directly with the virtual world for more intuitive and meaningful experiences.

However, to the experienced designer, this paradigm also represents a further loss of control over the camera, even beyond what we face in video games. You can move where you would in reality and, if that movement is mapped into your virtual world, it can easily move you within the virtual geometry. Similarly, where you stand in your trackable space can affect your freedom of movement in that experience.

Before we get into specifics, it might be helpful to explore some of the basic features we’re working with in positionally tracked VR.

Tracking volumes and user positioning

Positionally tracked VR works because there is some form of tracking on your headset as it moves through the real world, feeding that movement into the virtual experience. On Rift, there’s an IR camera watching a constellation of IR LEDs on your headset. This system uses that data to determine where you are relative to the camera. The placement of the camera and the layout of your real-world space define the area in which you can be tracked.

Playable space

Distinct from the tracking volume, the playable space is how much actual open space you have available in your VR setup. It can be bigger than the tracking volume (ideal) or smaller (typical). The playable space is bounded by real-world obstacles such as walls and couches, and those limits can in turn impose boundaries on the virtual world.
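
The relationship between these two spaces can be sketched with simple geometry. As a minimal illustration (not engine code), assume both spaces are axis-aligned rectangles on the floor plane, given as (min_x, min_y, max_x, max_y) in meters; the space the user can actually use is their intersection:

```python
def intersect(a, b):
    """Intersection of two axis-aligned rectangles.
    Rectangles are (min_x, min_y, max_x, max_y) in meters."""
    min_x, min_y = max(a[0], b[0]), max(a[1], b[1])
    max_x, max_y = min(a[2], b[2]), min(a[3], b[3])
    if min_x >= max_x or min_y >= max_y:
        return None  # the spaces don't overlap at all
    return (min_x, min_y, max_x, max_y)

# A 2 x 2 m tracking volume, but a playable space that is
# narrower in one axis (the "typical" case described above):
tracking = (-1.0, -1.0, 1.0, 1.0)
playable = (-0.75, -1.5, 0.75, 1.5)
usable = intersect(tracking, playable)  # only the overlap is usable
```

Here `usable` comes out to (-0.75, -1.0, 0.75, 1.0): narrower than the tracking volume in x, and shorter than the playable space in y.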

Virtual camera movement

We’re currently sticking to a one-to-one mapping between physical movement and virtual movement. It would be convenient to use other forms of virtual locomotion, such as binding movement to a game controller or using user-controlled teleports.

Indeed, we’ve seen a lot of fairly promising ways to implement locomotion in VR. But so far, either our most sensitive team members still get uncomfortable or the system lessens our feelings of presence in the experience. This is a huge area of active exploration in VR and we hope to directly explore and address it in future experiences. For now, we’re avoiding the problems and focusing on maximizing usability and presence.

The above features combine to create some interesting challenges even at the beginning of an experience.

Starting an experience

When your experience starts, where will you arrive in the virtual world? It turns out that this is a much more interesting question than we expected. In fact, we unexpectedly spent several weeks on this simple scenario alone.

In film or games this is really a non-issue. We determine where and how we would first like you to enter our world and then we just put you there. We’re looking for a good establishing shot that gives you enough context to understand what this experience or world is all about. We want the same thing in a positionally tracked VR experience, it’s just a little harder to come by.

To understand why this is an issue and what it means for positionally tracked VR in general, let’s step through the various things we’ve tried before arriving at our current solution.

Step one: spawn the player at a point

In your world, you designate a point as the location where you arrive and the experience begins. Conceptually, we generally know where we want you to be and which direction we want you to face. So let’s just place you exactly there. This is straightforward in concept, but it turns out that success with this solution is directly proportional to how close you are to the center of your tracking volume.

In the image below, the dotted line represents the extents of the tracking volume and the solid line represents the extents of our virtual room. The blue circle is where you are relative to both volumes.

Figure 1: In positionally tracked VR, a user (blue) arrives where they need to be in your virtual room only if they are standing in the center of their tracking volume.

In the ideal scenario (above), you’d begin the experience while standing in the center of your tracking volume. You can then move equally in all directions before “hitting” the boundary of your tracking volume and losing the ability to move in the virtual space. You’ll have maximum freedom to explore the space where you’ve spawned.

In the worst case (below) you’ll begin the experience while standing in a corner of your tracking volume. Bound by this tracking volume in the real world, you’ll only be able to move away from those boundaries. In this worst case, you’ll be restricted from accessing 75% of the virtual space. Further, most of your tracking volume will map to places in the virtual world where you’re probably not “supposed” to be.

Figure 2: Worst-case scenario of just placing a user into a virtual room without regard for their positioning in the real world. If they are in the corner of the tracking volume, most of the virtual space will be unreachable.

Step two: align the real and virtual spaces

To fix the worst of the above problems we want to align the virtual space as best as we can within your tracking volume. This would prevent awkward mismatches between where you can or want to go and where the tracking volume will actually allow you to go.

This is actually fairly easy to accomplish, and both Unreal Engine and Unity will do it by default. The Oculus SDK reports your offset from the center of your tracking volume. If you ensure that the camera is always placed at that same offset from your “spawn point,” the virtual and real spaces will be aligned. The amount by which you’re offset from the center of the tracking volume then matches the amount by which you’re offset from the ideal start position.
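
The rule reduces to one line of vector math. A minimal sketch (an illustration, not engine code; the 2D floor-plane coordinates and function name are assumptions for this example):

```python
def virtual_head_position(spawn_point, tracked_offset):
    """Map the tracking-volume center onto the spawn point.

    spawn_point:    ideal start position in the virtual world, (x, y) meters
    tracked_offset: headset offset from the tracking-volume center,
                    as reported by the tracking system, (x, y) meters

    The user's virtual position is the spawn point plus their
    real-world offset, so real and virtual spaces stay aligned.
    """
    return (spawn_point[0] + tracked_offset[0],
            spawn_point[1] + tracked_offset[1])

# A user standing 0.5 m to the right of their tracking center
# arrives 0.5 m to the right of the spawn point:
print(virtual_head_position((3.0, 4.0), (0.5, 0.0)))  # (3.5, 4.0)
```

Because the tracking center always lands on the spawn point, every user gets the same symmetric range of motion around it, regardless of where they happen to be standing.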

In this way, regardless of where users are standing in their respective tracking volumes, they’ll have equal access to explore the virtual space (below). Each user’s center of exploration is locked to the experience’s spawn position, and they can explore equally well around that point in all directions.

Figure 3: If you align the tracking volume with your virtual space, all arriving users to your virtual room will have full access to the space. However, though this fixes accessibility, control of virtual framing is completely lost.

This fixes our problems with inaccessible areas of our set, but we lose two of the most important features of a good establishing shot: positioning and gaze direction. In order to solve tracking issues we’ve lost control of framing. Our situation also gets a little worse than that.

Our experiences are dense with game logic triggers and virtual geometry. Because of our new solution, users may arrive in VR within a trigger or piece of geometry (below).

Figure 4: Alignment of virtual space with the tracking volume is further complicated in the more typical situation where the virtual room is not empty. Sad users may arrive inside of geometry or game-logic triggers (colored areas).

If you have a big enough tracking volume, or if the arrival room is small compared to the tracking volume, you can even spawn outside the walls of the experience.
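
Both failure modes are cheap to detect with point-in-rectangle tests. A sketch, assuming the same (min_x, min_y, max_x, max_y) floor-plane rectangles as before (the room and trigger coordinates are made up for illustration):

```python
def point_in_rect(p, r):
    """True if point (x, y) lies inside rectangle (min_x, min_y, max_x, max_y)."""
    return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]

room = (0.0, 0.0, 3.0, 3.0)        # virtual room bounds, meters
triggers = [(2.0, 2.0, 3.0, 3.0)]  # game-logic trigger volumes

def arrival_problems(head):
    """Flag the arrival failure modes described above."""
    problems = []
    if not point_in_rect(head, room):
        problems.append("outside walls")
    for t in triggers:
        if point_in_rect(head, t):
            problems.append("inside trigger")
    return problems

print(arrival_problems((2.5, 2.5)))  # ['inside trigger']
print(arrival_problems((4.0, 1.0)))  # ['outside walls']
```

A check like this can run at arrival time, but it only detects the problem; the antechamber described next is one way to actually prevent it.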

Step three: guiding the user

To address framing and still preserve our necessary alignment of the virtual space with the tracking volume we added an antechamber to our experience. The purpose of the antechamber is to move you into a position from which you’ll have good framing for the actual experience.

The antechamber we’ve been using is simple in concept. After entering VR, you arrive in a mostly empty room. The room is dark but for a small circle on the ground illuminated by a spotlight. Near the spotlight is a podium. If you are a bit away from the podium, it reads “COME HERE”. Once you get close, or if you start close, it reads “LOOK HERE”. Once you get close enough and look at the podium, it fades out and we teleport you into the first room of the main experience.

The trick here is that the podium placement coincides with our ideal spawn positioning. Once you’ve moved to the podium and looked down at it, you’ll be almost exactly at the spawn point and facing a known direction. At this point, transitioning into our real first level is safe because we know exactly where you are within your tracking volume, and we can be confident that you’ll begin your experience in a good location with good framing (below).
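
The podium’s state machine is essentially a distance check followed by a gaze check. A hypothetical sketch (the function name, radii, and thresholds are all assumptions; gaze alignment is tested with a dot product against the head-to-podium direction):

```python
import math

def podium_state(head_pos, gaze_dir, podium_pos,
                 near_radius=0.4, gaze_cos=0.8):
    """Return the podium prompt for a given head pose.

    head_pos, podium_pos: (x, y, z) in meters
    gaze_dir:             unit vector of the view direction
    near_radius:          how close counts as "at" the podium
    gaze_cos:             cosine threshold for "looking at" it
    """
    to_podium = [p - h for p, h in zip(podium_pos, head_pos)]
    dist = math.sqrt(sum(d * d for d in to_podium))
    if dist > near_radius:
        return "COME HERE"
    # Close enough: is the gaze roughly aligned with the
    # direction from the head to the podium?
    if dist > 1e-6:
        look = sum(g * d / dist for g, d in zip(gaze_dir, to_podium))
    else:
        look = 1.0
    return "TRANSITION" if look > gaze_cos else "LOOK HERE"

podium = (0.0, 1.3, 0.2)
print(podium_state((2.0, 1.6, 0.0), (0.0, 0.0, 1.0), podium))   # COME HERE
print(podium_state((0.0, 1.6, 0.0), (0.0, 0.0, 1.0), podium))   # LOOK HERE
print(podium_state((0.0, 1.6, 0.0), (0.0, -1.0, 0.0), podium))  # TRANSITION
```

When the function returns the transition state, the user is known to be within `near_radius` of the ideal spawn point and looking in a known direction, which is exactly the guarantee the level transition needs.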

Figure 5: If we align the user with a piece of geometry (podium) in a calibration room (left), then we can make sure they will arrive with proper framing in the experience (right).

Choreographing the pilot (advanced podiums)

Transitioning between environments within an experience comes with all the same problems as the initial user spawn: inaccessible playspace, arrival within geometry, or bad framing. The above concepts of reality management can be used more broadly to help ensure graceful transitions between rooms.

For example, if you want to ensure that a subsequent room is within the tracking volume, you can place a podium-like trigger to control player positioning. Perhaps you peer through a portal and doing so triggers the transition to the next level.

Figure 6: “Podiums” do not need to be aligned with the center of the tracking volume. If we want the user to arrive in a specific location (right), then we can place a podium in that location in the previous room (left).

In other situations it may be possible to develop “invisible” podiums. Perhaps geometry in one level can constrain you in such a way that you’re in a good position for the following level (below).

Figure 7: In addition to explicit calibration geometry, we can control user positioning by considering how sets are designed in “adjacent” rooms.

Interestingly, in these situations, we’re finding that sets within the experience affect each other in important ways. The layout of one room will constrain the possible layouts of the following room and vice versa. Like managing viewer sightlines in film, it seems like we may need to manage player positioning in VR.


In virtual reality we talk a lot about transporting people to other realms. It’s important to keep in mind that while their vision and presence may be there, their body remains soundly back in reality. That truth is what links positionally tracked VR experiences very strongly to reality.

We’re already finding many similar types of challenges with how reality affects our experience design. It’s helpful to think of level layouts as set designs on a theater stage. In theater, we need to be aware of where our “actor” will or should be standing to allow graceful set transitions. As in theater, temporally adjacent sets relate to each other spatially. The theater stage and the tracking volume are both real-world spaces that obey the same real world physics. Our job as VR set designers is to consider how those physical restrictions affect our play spaces.