Colorizing Spaces: Working with Color in Story Studio’s Henry

Oculus Story Studio Blog | Posted by Inigo Quilez | March 2, 2016

This post is targeted towards technical artists with an understanding of visual effects, computer graphics, and programming.

Making Henry was a massive undertaking, filled with dozens of creative and technical challenges. In this blog, we’ll take a deep dive into our process to describe how we solved a specific technical problem.

Our challenge was to make sure Henry looked like an animated feature in the medium of VR. Our artists developed the look and feel of the film through paintings and created some gorgeous art. We wanted to preserve that richness in the film.

Henry’s living room, the space where our story happens, could only be lit with indirect lighting. Initially, this was helpful because it meant we wouldn’t need to render high-quality shadow maps. However, illuminating an entire room with indirect lighting was going to be difficult if we wanted to avoid the flat look common in most real-time engine games.

Color key by Dice Tsutsumi and Robert Kondo

Color Keys

Our color keys indicated a strong, beautiful, and complex first bounce of light, complemented by a rich, warm secondary bounce. Many of these light interactions – like the warm colors in the wood by the kitchen door – were motivated by physics. But the colors in our artists’ paintings were exaggerated and beautified interpretations of real life observations – not actually real. Consequently, any physics-based lighting system wouldn’t have been able to fully achieve our artistic vision.

Luckily, the situation wasn’t that bad. The fantastic global illumination tools in UE4 got us halfway there without technical difficulties. However, since the underlying algorithms were doing the “correct” thing (simulating real light transport through the scene), we got realistic images that seemed dull when compared to the original vision. They didn’t express the cozy, magical mood of the movie.

To correct this, we added a large number of local point lights to colorize the scene. This was labor-intensive and still failed to capture our artistic aspirations, so we pivoted to a more painterly approach.

We selectively picked volumetric regions in space where a given color treatment would be applied. This was similar to what we did with the point lights, but instead of adding tinted illumination, we manipulated color on the lit pixel values within the volume. Essentially, we developed custom technology to solve the problem and fit within a slim render budget. Something we love doing!

Our Approach

As we considered using volumetric post-processing manipulation, new opportunities for interesting effects opened up. With traditional films and photography, the subject can be separated from the background by using depth of field, but in VR, focus is not a strong depth conveyor. There is less need to use a shallow depth of field to create separation. In the case of our movie, we wanted the viewers to focus on Henry. Adding the volumetric participating media to the colorizers allowed us to soften the detail in some areas by giving them a misty treatment.

As we explored volumetric colorizers more, we found interesting new opportunities to use them. For example, they created the feel of morning light around the windows by faking a scattering effect. We also used them to create a glow around the balloons and magic particles.

Volumetric colorizers to make the magic blue glow

Most of our render budget was already allocated to the animation of Henry – his animation rig, fur shading, and eyes – as well as shadows on the set. So whatever technique we settled on, its runtime execution would have to be super efficient.

We chose a full-screen shader – a single post-process material – to take care of the colorization solution. This method required minimal intrusion on the engine and had major performance benefits. For example, being able to read the color and depth buffers once, compute the coloring effect, and write the result back to the frame buffer avoided any extra memory burden. While having the colorizers chained together in a fullscreen quad was not the most modular strategy, it was efficient and simple to develop. It also helped that we were both implementer and artist, so we didn’t need to make it user friendly.
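For illustration, here is a minimal GLSL sketch of what such a single-pass structure can look like. The Colorizer struct, the colorizerWeight() placeholder, and the fixed count of four volumes are assumptions for the sketch, not the actual Henry material:

// Hypothetical single-pass colorization: the lit color and depth are read
// once, every colorizer volume is evaluated in sequence, and the result is
// written back to the frame buffer.
struct Colorizer
{
  vec3  center;  // sphere center, world space
  float radius;  // sphere radius
  vec3  tint;    // color treatment applied inside the volume
};

float colorizerWeight( Colorizer c, vec3 ro, vec3 rd, float sceneDepth )
{
  // crude placeholder: full influence when the shaded point (reconstructed
  // from the depth buffer) lies inside the sphere; the surface and
  // atmospheric flavors described below replace this with smooth falloffs
  vec3 p = ro + rd*sceneDepth;
  return (length(p - c.center) < c.radius) ? 1.0 : 0.0;
}

vec3 colorizePixel( vec3 sceneColor, float sceneDepth, // color and depth buffer reads
                    vec3 ro, vec3 rd,                  // camera position, pixel ray
                    Colorizer colorizers[4] )
{
  vec3 col = sceneColor;
  for( int i=0; i<4; i++ )
  {
    float w = colorizerWeight( colorizers[i], ro, rd, sceneDepth );
    col = mix( col, col*colorizers[i].tint, w );
  }
  return col;
}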

Naturally, the volumetric colorizers could be dynamic. For example, we attached volumetric colorizers to the glowing trail of animated magic dust to form beautiful specks trailing off the candles and balloons.

In terms of functionality, the colorizers came in two main flavors: surface and atmospheric. Surface colorizers affected the surfaces inside the volume based on a falloff from the center, much like a light would. Atmospheric colorizers acted as if there were a participating medium in the volume, and the depth of that medium, as seen by the viewer, determined the amount of colorization on the pixels within or behind the colorizer. That allowed us to achieve the glow and scattering effects described above.
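A minimal sketch of the surface flavor, assuming a smoothstep-shaped falloff and a shaded-point position reconstructed from the depth buffer (both are assumptions; the production falloff may have differed):

// Hypothetical surface colorizer: the treatment is weighted by a smooth
// falloff from the sphere center, evaluated at the shaded surface point,
// much like a local fill light.
vec3 applySurfaceColorizer( vec3 col,          // lit color of the pixel
                            vec3 worldPos,     // surface position of the pixel
                            vec3 sc, float sr, // sphere center and radius
                            vec3 tint )
{
  float d = length( worldPos - sc )/sr;    // 0 at the center, 1 at the surface
  float w = 1.0 - clamp( d, 0.0, 1.0 );    // falloff from center, like a light
  w = w*w*(3.0 - 2.0*w);                   // smoothstep shaping
  return mix( col, col*tint, w );
}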

An example of an atmospheric colorizer implemented on Shadertoy.

Surface colorizers were used to create the fake bounce-light volumes around the direct sunlight patches by the kitchen door and the table (which affected Henry as he walked past). Surface colorizers were also used to simulate the bounce light in the tree branches, as well as global darkening/brightening of the set and characters. The atmospheric colorizers were used to reduce contrast and separate the back parlor area from the living room. They were also used to create light scattering effects for the sun shining through Henry’s upper windows, and for the magic dust.

The ultimate solution consisted of a full screen pass with a set of hardcoded static and dynamic mathematical volumes, with some of them volumetrically integrated to drive simple color manipulators (implementation details are below).

A screen shot from within Henry’s story showing the effect of spatial colorizers.

The Mathematics and Implementation Details

The atmospheric colorizers were controlled by the accumulation of participating media within a bounding sphere. Because this was implemented as a pixel shader, we wanted to make it fast, so we ultimately went with an analytical media-integration approach.

The first step was to detect whether the current pixel overlapped with the spherical volume, and “early exit” if it didn’t. We did this with a simple ray-sphere intersection test. If an intersection happened, we wanted to know the entry and exit point of the ray with the sphere, so we could integrate the fog amount along the segment of the ray inside the sphere.

Once the amount of fog was computed, we could use it to drive the amount of colorization.

Atmospheric Integration

The raytrace is very simple, and we describe it here:

For points in space x, a sphere with center sc and radius sr is defined by |x − sc| = sr, and a ray originating at the camera position ro and going through our pixel in direction rd can be defined as x(t) = ro + t·rd, with t > 0 for rays shooting forward and |rd| = 1 for an isotropic metric. The overlap and intersection of the pixel ray with the sphere can be solved by replacing x in the sphere equation with the ray equation: |ro + t·rd − sc| = sr. Since |rd| = 1, squaring both sides of the equation and expanding the result gives us t² + 2bt + c = 0, which is a quadratic in t with solutions t1, t2 = −b ∓ √(b² − c), with b = (ro − sc)·rd and c = (ro − sc)·(ro − sc) − sr², as long as b² − c ≥ 0.

Once we have the entry and exit points for the ray parametrized by t, we are almost ready to integrate the fog. Before that, we just have to account for the possibility that the sphere is completely behind the camera (t2 < 0.0) or completely hidden behind the depth buffer (t1 > dbuffer). Then we have to clip the segment so that we only integrate from the camera position forward and no further than indicated by the depth buffer, which we can do by performing:

t1 = max(t1, 0)
t2 = min(t2, dbuffer)

We can now integrate the fog along the segment. We chose a fog density function that peaks at the center of the sphere, where the density is maximum (1.0), and decays quadratically until it reaches zero at the surface of the sphere:

ρ(x) = 1 − |x − sc|²/sr²

which, along the ray, becomes ρ(t) = −(t² + 2bt + c)/sr².

This function is easily integrable analytically:

F = ∫ ρ(t) dt from t1 to t2 = −(1/sr²) · [ (c·t2 + b·t2² + t2³/3) − (c·t1 + b·t1² + t1³/3) ]

It might be convenient now to normalize the accumulated fog F such that it takes the value 1 in the extreme case of the ray going right through the sphere’s center, all the way from its surface to the back side. In that geometric configuration we have c = 0 and b = −sr, so t1 = 0, t2 = 2·sr and:

Fmax = −(1/sr²) · ( −sr·(2·sr)² + (2·sr)³/3 ) = (4/3)·sr

Therefore, the final expression, ready for implementation, is:

F = −(3/(4·sr³)) · [ (c·t2 + b·t2² + t2³/3) − (c·t1 + b·t1² + t1³/3) ]

As a curiosity, note that when the sphere does not overlap with the camera or the scene, t1 and t2 take their unclipped values −b ∓ √(b² − c) and the whole expression collapses to:

F = (b² − c)^(3/2) / sr³

This math is almost ready for implementation. Before that, it is worth noting that some floating point precision can be gained by recasting the whole problem into the unit sphere (centered at the origin and with radius 1), in which case the final implementation is:

// Compute the amount of marched analytical volumetric
// fog with quadratic density falloff in a sphere

float computeFog( vec3  ro, vec3  rd,  // ray origin, ray direction
                  vec3  sc, float sr,  // sphere center, sphere radius
                  float dbuffer )
{
  // normalize the problem to the canonical (unit) sphere
  float ndbuffer = dbuffer / sr;
  vec3  rc = (ro - sc)/sr;

  // find intersection with sphere
  float b = dot(rd,rc);
  float c = dot(rc,rc) - 1.0;
  float h = b*b - c;

  // not intersecting
  if( h<0.0 ) return 0.0;

  h = sqrt( h );
  float t1 = -b - h;
  float t2 = -b + h;

  // not visible (behind camera or behind ndbuffer)
  if( t2<0.0 || t1>ndbuffer ) return 0.0;

  // clip integration segment from camera to ndbuffer
  t1 = max( t1, 0.0 );
  t2 = min( t2, ndbuffer );

  // analytical integration of the quadratic density falloff
  float i1 = -(c*t1 + b*t1*t1 + t1*t1*t1/3.0);
  float i2 = -(c*t2 + b*t2*t2 + t2*t2*t2/3.0);
  return (i2-i1)*(3.0/4.0);
}

The above was implemented in a “Custom” node in UE4’s Material Editor. Here is a reference implementation running live: https://www.shadertoy.com/view/XljGDy
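As a minimal usage sketch, the value returned by computeFog() can drive how strongly a colorizer affects a pixel. The simple tint blend below is an assumption for illustration; in Henry the fog amount drove the grading operation described in the next section:

// Hypothetical wiring of computeFog() into a colorizer: the integrated fog
// amount weights how strongly a tint is applied to the pixel's lit color.
vec3 applyAtmosphericColorizer( vec3  col,          // lit pixel color
                                vec3  ro, vec3 rd,  // camera position, pixel ray
                                float dbuffer,      // scene depth along the ray
                                vec3  sc, float sr, // colorizer sphere
                                vec3  tint )
{
  float fog = computeFog( ro, rd, sc, sr, dbuffer ); // 0..1
  return mix( col, col*tint, fog );
}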

Color Manipulation

Once the attenuation amount was calculated, we manipulated (or “graded”) the final lit color using the following algorithm. Given four parameters, a 3-float bias (b), a 3-float tint (t), a 3-float gamma (g), and a 1-float saturation value (s), the algorithm shaped the curve of the color space with slope, shape, and offset controls. For an input color Ci and an output color Co, the transformation was:

C' = t·Ci^g + b
Co = s·C' + (1 − s)·gr(C')

Note that gamma is a 3-float vector too, so the red, green, and blue channels can be pushed or pulled independently, providing decoupled shaping controls over each of the color channels. The default values b=(0,0,0), t=(1,1,1), g=(1,1,1), s=1 give the identity transformation of the color space. The gr() operator returns the grayscale version of a color, which in our case consisted of a simple dot product (you could also use luminosity for grayscale).

The color transformation was implemented with a few nodes in the UE4 Material Editor and encapsulated in a Material Function for ease of reuse.
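A minimal GLSL sketch of that grade, following the formula above, might look like this (the function name and the grayscale dot-product weights are illustrative, not the exact Material Function):

// Hypothetical GLSL equivalent of the grading transform described above.
// b = bias, t = tint, g = per-channel gamma, s = saturation.
vec3 gradeColor( vec3 ci, vec3 b, vec3 t, vec3 g, float s )
{
  vec3  col  = t*pow(ci,g) + b;                      // shape, slope and offset
  float gray = dot( col, vec3(0.299,0.587,0.114) );  // gr(): grayscale by dot product
  return mix( vec3(gray), col, s );                  // saturation control
}

With the default values b=(0,0,0), t=(1,1,1), g=(1,1,1), s=1, this returns the input color unchanged, matching the identity behavior described above.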

With all these ingredients in place, we were able to get the artistic control we needed to make our short movie, Henry, a beautiful virtual reality cartoon.

– Inigo Quilez, Oculus Story Studio