The Tools We Use (as of August 2015)
Oculus Story Studio Blog | Posted by Max Planck | August 14, 2015
This post is aimed at technical artists with an understanding of visual effects, computer animation, or game making and the software used in those fields.

It’s exciting to see virtual reality content creation emerging in 2015. There are many off-the-shelf visual effects and game making tools for creators that are professional grade, user-friendly, relatively cheap, and applicable to VR.

This landscape didn’t exist for people creating computer-animated stories back in the 1980s; they had to build custom hardware and software. The cost of entry into computer animation was hiring a large computer graphics team before even thinking about animators, artists and a director. Today, we need fewer engineering resources and can focus on building a team around content making.

An overview of the Oculus Story Studio software stack

As for the tools we use here at Story Studio, let’s start with hardware. All team members author on a Falcon Northwest Talon V desktop with 32 GB RAM, an Intel i7 processor, and a single NVIDIA GTX 980 graphics card, running Windows 7. Audio is mixed using Beyerdynamic DT 770 Pro or Audio-Technica ATH-M50x headphones, which approximate the audio quality of the Oculus Crescent Bay headset.

For cutting rough storyboards and scratch audio together, we’ve used a combination of Adobe Photoshop CC, Adobe Flash Professional CC and Adobe Premiere Pro CC to produce MP4 files. We’ve started working with Unity 4 as a way to prototype 3D storyboards, an interesting technical problem and one for which we’d like to see more solutions from the VR community.

To build our architecture, props and character rigs, we use Maya 2016. We started using Maya LT early in the “Lost” project but found that it was too limited, especially in terms of Python scripting capability. We have started using Mudbox as our set sculpting software and ZBrush as our character sculpting software.

Henry’s Rig in Maya 2016

For animation, we use Maya 2016 and leverage referencing of our character rigs and sets. We create an individual Maya scene for each “story beat”, a concept similar to how “shots” are used in visual effects. A “story beat” is a logical acting moment with the right level of granularity for a single animator to own. Since many story beats need to blend seamlessly from one to the next, part of our polish process is to match poses on the ins and outs of those beats.
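
As a rough illustration, here is a minimal Maya Python sketch of how a story beat scene could reference a shared character rig and set. The paths, namespaces and beat name are hypothetical and simplified compared to our actual pipeline.

import maya.cmds as cmds

def build_story_beat(beat_name, rig_path, set_path):
    """Create a fresh scene for one story beat and reference the shared assets."""
    cmds.file(new=True, force=True)                          # start from an empty scene
    cmds.file(rig_path, reference=True, namespace="henry")   # the character rig stays a reference
    cmds.file(set_path, reference=True, namespace="set")     # the set is referenced as well, never imported
    cmds.file(rename=beat_name + ".ma")                      # name the scene after the beat
    cmds.file(save=True, type="mayaAscii")
    return beat_name + ".ma"

# Hypothetical example usage for one beat:
build_story_beat("beat_010_henry_wakes",
                 "assets/characters/henry_rig.ma",
                 "assets/sets/henry_house.ma")

Because the rig and set are references rather than imports, a rig fix made by one artist flows into every story beat scene the next time it is opened.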

For generating complex animating geometry, like smoke, fire and cloth simulation, we have started using Houdini FX with its Houdini Engine integration for Maya.

From Maya, we export all of our assets and animation into FBX files using a custom script. The script helps enforce convention, simplifies the process for animators, and ensures that only the geometry or joints we care about are exported. We export each individual actor’s story beat animation as its own FBX asset to be imported into our real-time engine. This part of our pipeline is still evolving.
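
For illustration, here is a stripped-down sketch of the kind of per-actor export such a script might perform, using Maya’s FBX plugin commands. The namespace, node filtering and output path are hypothetical, not our exact convention.

import maya.cmds as cmds
import maya.mel as mel

def export_actor_beat(namespace, out_path):
    """Export only one actor's joints and meshes for the current story beat."""
    cmds.loadPlugin("fbxmaya", quiet=True)                     # make sure the FBX plugin is loaded
    nodes = cmds.ls(namespace + ":*", type=("joint", "mesh"), long=True) or []
    if not nodes:
        raise RuntimeError("Nothing to export for namespace: " + namespace)
    cmds.select(nodes, replace=True)                           # export only the nodes we care about
    mel.eval("FBXExportBakeComplexAnimation -v true;")         # bake animation onto the exported joints
    mel.eval('FBXExport -f "{0}" -s;'.format(out_path))        # -s exports only the current selection

# Hypothetical example: export one actor's animation for one beat.
export_actor_beat("henry", "D:/export/beat_010_henry.fbx")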

Animating Henry in a “story beat” in Maya 2016

For texture painting, we’ve been happy importing our FBX files into Substance Painter and Mudbox and painting material sets of tiled 2K by 2K TGA textures, which we then bring into our PBR-shaded materials. We’ve also used Quixel NDO as a normal-map painting tool.

For our real-time engine, Story Studio uses Unreal Engine 4. UE4 is where we assemble our experience, adding shading, lighting, animated effects, audio, set dressing, and any interaction-driven programming. To manage multiple team members working together simultaneously, we set up our project so the persistent level has an always-load sub-level for every story beat. Every story beat sub-level has its own Matinee Actor and the set of Actors unique to that beat, including a unique copy of any characters.

We’ve written a small plugin called Stage Manager which controls when each Matinee Actor plays and which Actors should be visible depending on which story beat is cued. We continue to evolve Stage Manager and hope to release it as a UE4 plugin once it feels stable.

We author sound effects and music in Wwise and use the Oculus Audio SDK to spatialize ambient and actor-driven sound. Audiokinetic’s integrated solution for UE4 makes this easy.

From UE4, we package a Win64 executable in the same way a game would be packaged. It’s this executable that we launch when showing “Lost” and “Henry”.

Authoring the Henry experience in Unreal Engine 4

Our backbone storage and sharing solution is Dropbox, which we use for team sharing and cloud-based backup. We use Perforce for our source control system. To avoid file access conflicts while authoring, we have a Windows task that launches Dropbox at around 9 pm and kills it around 8 am (before our work day). This way, any work on a team member’s machine that is not checked into Perforce is guaranteed to be backed up every evening. We’ve had several cases where a team member accidentally deleted work before checking in; we were able to recover the last snapshot from Dropbox and save work that would otherwise have been lost. Furthermore, other team members can easily access files or changes yet to be checked in when that team member needs help or is away.
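
As an illustration, here is a minimal sketch of what that scheduled task could run, assuming a small Python script is called by the Windows Task Scheduler with “start” in the evening and “stop” in the morning. The install path and argument handling are hypothetical, not our exact setup.

import os
import subprocess
import sys

# Typical per-user install path for the Dropbox client; may differ per machine.
DROPBOX_EXE = os.path.expandvars(r"%APPDATA%\Dropbox\bin\Dropbox.exe")

def start_dropbox():
    subprocess.Popen([DROPBOX_EXE])                               # let Dropbox sync overnight

def stop_dropbox():
    subprocess.call(["taskkill", "/IM", "Dropbox.exe", "/F"])     # stop it before the work day starts

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "stop":
        stop_dropbox()
    else:
        start_dropbox()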

We’re actively considering options for production tracking software but have gotten by with spreadsheets and the web tool Trello.

On the whole, we do not have a lot of custom code in our pipeline. This may change in the future, but we are trying to use out-of-the-box tools as much as possible and show the world what anyone can do with commercially available software. If we do write custom code, we plan to openly discuss what we build, and if resources allow it, share it with the community.

Although we are spoiled for choice when it comes to software, the authoring workflows could greatly benefit from editing capabilities while in VR. We have especially found that animation, set dressing, and lighting, all highly camera-dependent disciplines in feature animation, are different enough in VR that it would be much more efficient to iterate in the target medium. We’ve seen over and over again that we can make a scene look great in the windowed editor, but when that work is integrated into VR we find we would have done things differently had we been authoring while feeling present. We’re looking forward to having more VR authoring tools to speed up iteration and are hoping to push inspired tool builders in this direction.

There are many unsolved and interesting technical problems in VR content creation, but we’ve found that the fundamental tools we need to get started already exist. It’s the creative problems that have a thinner foundation, and it’s the creative solutions that are needed to evolve the art of immersive storytelling.