Sunday, December 30, 2012

Snow Globe Render

This year, I designed and created the cover image for the Penn Computer Graphics holiday card. I rendered the image in my renderer, Photorealizer:


The render features, among other things, global illumination accomplished with Monte Carlo path tracing, path-traced subsurface scattering, a 3D procedural wood texture, reflection from and transmission through rough surfaces, correct behavior at the glass–water (and other) interfaces, an HDR environment map, a spherical light, depth of field, anti-aliasing using a Gaussian reconstruction filter, and an S-shaped transfer curve.

Instead of using neutral white lights like I've often done in the past, I lit this scene with complementary blue and yellow lights (a glacier HDR environment map and a small, bright spherical light, respectively). This gives the image a more interesting and varied appearance, while keeping the light fairly neutral on the whole. When I started working on the lighting, I used just the environment map, and the image appeared far too blue. Instead of zeroing out the saturation of the environment map or adjusting the white balance of the final image, I decided to add the yellow spherical light to balance it out (inspired by the stage lighting course I took this past semester).

I spent some time tweaking the look of the snowflakes—the shape, the material, and the distribution. I ended up settling on the disc shape, which is actually a squished sphere (not a polygonal object). All of the snowflakes are instances of that same squished sphere. For the material, I made the snowflake diffusely reflect half of the incident light, and diffusely transmit the other half (in other words, a constant BSDF of 1/(2π)). This gives a soft, bright, translucent appearance, while still being very efficient to render (compared to subsurface scattering).
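
As a concrete illustration (a minimal sketch, not Photorealizer's actual BSDF interface), the snowflake material can be written as a BSDF that returns the same constant value for every pair of directions:

// A minimal sketch of the snowflake material (illustrative only, not
// Photorealizer's actual BSDF interface): a constant BSDF of 1/(2*pi) over
// the whole sphere of directions, so half of the incident energy is
// diffusely reflected and half is diffusely transmitted.
#include <cmath>

struct Vector3 { double x, y, z; };

double snowflakeBSDF(const Vector3& /*incident*/, const Vector3& /*outgoing*/,
                     const Vector3& /*normal*/) {
    // Same value whether the outgoing direction is on the reflection side
    // or the transmission side of the surface.
    return 1.0 / (2.0 * M_PI);
}

// Sanity check: integrating f * cos(theta) over one hemisphere gives
// (1 / (2*pi)) * pi = 1/2, so reflection and transmission each carry half
// of the energy and the material conserves energy overall.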

I made some different versions of the image to show how certain features affect the look of the final image:

Half-size version, for comparing to the images below.

White light version.

Display-linear version.

Opaque perfectly diffuse version
(statue, snowflakes, and tablecloth).

For best results, the model needed to be very clean, exact, and physically accurate. I chose to model it using Houdini. I think Houdini is a very well-made piece of software. I really like its node paradigm, procedural nature, and clean user interface. I like being able to set parameters using precise numbers or drive them with scripts, and being able to go back and modify parameters in any part of the network at any time.

In addition to using Houdini, I created the LOVE statue shape in Illustrator, and I procedurally added the snowflakes in Photorealizer.

Here are screenshots of the Houdini network and the resulting model:

Houdini network.

Houdini model.

Friday, December 28, 2012

Photon Mapping with HDR Environment Maps

I gave each light source the ability to emit photons, including HDR environment maps, suns, spherical lights, and rectangular lights.

I came up with a way to emit photons from an (infinitely far away) HDR environment map, weight them correctly, and distribute them properly over the entire scene. Below are a few shots of a scene I used to test my system. In this scene, a glass sphere is hovering slightly above a pedestal.

The first two shots below use the Grace Cathedral environment map. This environment map has tiny, bright lights that contribute greatly to the image yet are hardly ever hit when using naive backwards path tracing. I had already implemented HDR environment map importance sampling for direct illumination, but only after implementing photon mapping could I render caustics efficiently as well. My photon mapping system uses the existing importance sampling to select photon emission locations.
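
As a rough sketch of the general approach (simplified and illustrative, not my exact implementation), once a direction has been importance sampled from the environment map, a photon can be launched from a disk outside the scene's bounding sphere, perpendicular to that direction, with its power weighted by the map's radiance, the disk area, and the sampling PDF:

// A rough sketch (simplified and illustrative, not my exact implementation)
// of turning one importance-sampled environment map direction into a photon.
// It assumes the caller has already sampled a world-space direction toward
// the environment, along with its solid-angle PDF, and looked up the map's
// radiance in that direction.
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator-() const { return {-x, -y, -z}; }
};

struct Color { double r, g, b; };
struct Photon { Vec3 origin, direction; Color power; };

// Build an arbitrary orthonormal basis around a unit vector n.
static void buildBasis(const Vec3& n, Vec3* t, Vec3* b) {
    Vec3 temp = (std::fabs(n.x) > 0.5) ? Vec3{-n.y, n.x, 0.0} : Vec3{0.0, -n.z, n.y};
    double len = std::sqrt(temp.x * temp.x + temp.y * temp.y + temp.z * temp.z);
    *t = temp * (1.0 / len);
    *b = Vec3{n.y * t->z - n.z * t->y, n.z * t->x - n.x * t->z, n.x * t->y - n.y * t->x};
}

// toEnv: unit direction toward the environment; u1, u2: uniform random numbers.
Photon makeEnvMapPhoton(const Vec3& toEnv, double pdfSolidAngle, const Color& radiance,
                        const Vec3& sceneCenter, double sceneRadius,
                        int totalPhotons, double u1, double u2) {
    // Pick a uniform point on a disk of radius sceneRadius, perpendicular to
    // the sampled direction, then push it outside the scene's bounding
    // sphere so the parallel "beam" of photons covers the whole scene.
    Vec3 t, b;
    buildBasis(toEnv, &t, &b);
    double r = sceneRadius * std::sqrt(u1);
    double phi = 2.0 * M_PI * u2;
    Vec3 diskPoint = sceneCenter + t * (r * std::cos(phi)) + b * (r * std::sin(phi));
    Vec3 origin = diskPoint + toEnv * (2.0 * sceneRadius);

    // The photon travels opposite the sampled direction and carries a share
    // of the total flux: radiance * disk area / (direction PDF * photon count).
    double weight = (M_PI * sceneRadius * sceneRadius) / (pdfSolidAngle * totalPhotons);
    return { origin, -toEnv,
             { radiance.r * weight, radiance.g * weight, radiance.b * weight } };
}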

Photon mapped indirect illumination, plus direct illumination (using environment map importance sampling).

No indirect illumination (except for ideal specular reflection and refraction of camera rays). Same shot as above except with photon mapping turned off.

The next two shots use the Pisa environment map.

Photon mapped indirect illumination, plus direct illumination.

Pure path tracing. Identical appearance to the image above (except for noise and imperceptible photon mapping bias).

HDR Environment Map Improvements

I made some improvements to my HDR bitmap environment map system to make it more robust, fix a couple of little bugs, and make it faster. For testing, I made an 8x4 pixel equirectangular environment map in OpenEXR format, then I used my equirectangular camera to render some pictures of an object using the environment map as a light source. This way, I was able to see the entire environment, what my linear interpolation smoothing was doing, and how the object was being lit.

An enlarged PNG version of the 8x4 pixel OpenEXR environment map.

Here are the final results of my tests, after I made all of the improvements and fixes:

No smoothing. BRDF-based distribution path tracing.

No smoothing. Direct illumination using environment map importance sampling.

Linear interpolation. BRDF-based distribution path tracing.

Linear interpolation. Direct illumination using environment map importance sampling.

You might notice some subtle colored stripes along the edges of these images. That's due to anti-aliasing and jittering (of the output image, not the environment map), which, when combined with the equirectangular mapping, cause some samples to wrap around to the opposite edges.
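
For reference, here's roughly what an equirectangular environment map lookup with linear interpolation smoothing looks like (a simplified sketch with horizontal wrap-around, not Photorealizer's exact code):

// Simplified sketch of an equirectangular environment map lookup with
// linear interpolation smoothing (illustrative only, not Photorealizer's
// exact code). Horizontal texel coordinates wrap around the seam; vertical
// coordinates are clamped at the poles.
#include <algorithm>
#include <cmath>
#include <vector>

struct Color { double r, g, b; };

struct EquirectangularMap {
    int width, height;          // e.g. 8 x 4 for the test map above
    std::vector<Color> pixels;  // row-major, size width * height

    Color texel(int x, int y) const {
        x = ((x % width) + width) % width;          // wrap horizontally
        y = std::min(std::max(y, 0), height - 1);   // clamp vertically
        return pixels[y * width + x];
    }

    // Look up the radiance for a world-space direction (y is "up").
    Color lookup(double dx, double dy, double dz) const {
        double phi   = std::atan2(dz, dx);                            // longitude
        double theta = std::acos(std::max(-1.0, std::min(1.0, dy)));  // colatitude
        double u = (phi + M_PI) / (2.0 * M_PI) * width - 0.5;
        double v = theta / M_PI * height - 0.5;

        int x0 = (int)std::floor(u);
        int y0 = (int)std::floor(v);
        double fx = u - x0;
        double fy = v - y0;

        // Bilinearly interpolate the four surrounding texels.
        Color c00 = texel(x0, y0),     c10 = texel(x0 + 1, y0);
        Color c01 = texel(x0, y0 + 1), c11 = texel(x0 + 1, y0 + 1);
        auto lerp = [](const Color& a, const Color& b, double f) {
            return Color{ a.r + (b.r - a.r) * f,
                          a.g + (b.g - a.g) * f,
                          a.b + (b.b - a.b) * f };
        };
        return lerp(lerp(c00, c10, fx), lerp(c01, c11, fx), fy);
    }
};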

Friday, December 21, 2012

Pool Caustics

Here are some renders of the pool scene with a much smaller and brighter light, resulting in cool caustics. I rendered these images with photon mapping—regular path tracing would have taken forever to converge because the light source would have been hit very infrequently (it can't be sampled directly through the water surface). I'm planning to implement photon emission for all lights soon, including the sun and HDR environment maps.

Pool caustics.

Previously, I was not using shading normals for photon tracing, which caused distracting patterns in the caustics (see image below). To fix this (see image above), I made photon tracing use shading normals. For now, I just throw away photons that end up on the wrong side of the surface.

No shading normals.
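
In code, the wrong-side test amounts to checking whether the geometric normal and the shading normal agree about which side of the surface the scattered photon direction is on (a simplified sketch, not my exact implementation):

// Simplified sketch of the check described above (not my exact code): after
// scattering a photon using the shading normal, keep it only if the shading
// normal and the geometric normal agree about which side of the surface the
// new direction is on; otherwise throw the photon away.
struct Vec3 { double x, y, z; };

inline double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

bool keepScatteredPhoton(const Vec3& scatteredDirection,
                         const Vec3& geometricNormal,
                         const Vec3& shadingNormal) {
    double g = dot(scatteredDirection, geometricNormal);
    double s = dot(scatteredDirection, shadingNormal);
    return (g > 0.0) == (s > 0.0);
}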

Before rendering the images above, I was using regular grid tessellation of the water surface, which made the unwanted patterns even worse (see image below). To improve this, I made the grid higher resolution and then reduced the mesh, with triangles distributed to best approximate the surface and give the mesh a more organic appearance (see image above).

When the water surface was a displaced regular grid.

When the water was calm (see image below), the grid patterns disappeared. This helped confirm that the patterns were indeed caused by the tessellation of the water surface, as opposed to a random number issue or something else.

Calm water. Same exposure and everything else.

Thursday, December 20, 2012

Latitude Longitude Shot

I added the new cameras from my sky renderer to Photorealizer, and rendered a new picture of the pool scene using the latitude longitude camera (as always, click the image to view it at full size):

The swimming pool scene shot with my new latitude longitude camera.

(The little nick in the corner near the ladder is a tiny geometry problem, not a renderer problem.)
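
For reference, a latitude longitude camera simply maps each pixel to a longitude and a latitude and shoots a ray in that direction (a rough sketch, not my exact camera code):

// Rough sketch of a latitude longitude (equirectangular) camera: each pixel
// maps to a longitude/latitude pair, which becomes a ray direction from the
// camera position. Illustrative only, not my exact camera code.
#include <cmath>

struct Vec3 { double x, y, z; };

// u and v are normalized pixel coordinates in [0, 1), with v = 0 at the top
// of the image; y is treated as the "up" axis.
Vec3 latitudeLongitudeCameraDirection(double u, double v) {
    double phi   = (u - 0.5) * 2.0 * M_PI;  // longitude in [-pi, pi)
    double theta = v * M_PI;                // colatitude in [0, pi)
    return { std::sin(theta) * std::cos(phi),
             std::cos(theta),
             std::sin(theta) * std::sin(phi) };
}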

After saving that image in high dynamic range OpenEXR format, I used it as a latitude longitude environment map to render this image:

Water droplets lit by the high dynamic range version of the above image.

For the PNG version of the latitude longitude shot at the top, I had Photorealizer apply an S-shaped transfer curve to increase the contrast and saturation. Here's what it would have looked like without that:

Display linear.

Pretty washed out.
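
For context, an S-shaped transfer curve can be as simple as a smoothstep-style remapping applied to each channel of the display-referred image (a generic example, not necessarily the exact curve Photorealizer applies):

// Generic example of an S-shaped transfer curve: a smoothstep-style sigmoid
// applied per channel to display-referred values in [0, 1]. Values below 0.5
// get darker and values above 0.5 get brighter, increasing contrast (and, as
// a side effect, apparent saturation). Not necessarily the exact curve
// Photorealizer applies.
#include <algorithm>

double sCurve(double x) {
    x = std::min(std::max(x, 0.0), 1.0);  // clamp to [0, 1]
    return x * x * (3.0 - 2.0 * x);
}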

And here's an earlier version of the render, with shading normals disabled, which resulted in some ugly tessellation artifacts on the water surface:

No shading normals.

Modifying the normal can cause problems such as incident, reflected, or refracted directions ending up on the wrong side of the surface. These types of things can cause light leakage through surfaces or black spots. To avoid problems like these, I came up with a way to strategically modify the problematic directions.

Thanks for looking at my blog!

Wednesday, December 19, 2012

Transmitted Radiance

Intuitively, when light is refracted, the same amount of light is squeezed into a smaller solid angle, or spread into a bigger one, which results in an increase or decrease respectively of radiance along each ray (since radiance is flux per unit projected area per unit solid angle). As worded on Wikipedia, "the radiance divided by the index of refraction squared is invariant in geometric optics." For light traveling from medium i (incident) into medium t (transmitted), this invariant can be written as Lₜ / ηₜ² = Lᵢ / ηᵢ² (where L is radiance and η is index of refraction), which implies that Lₜ = Lᵢ * (ηₜ / ηᵢ)². Thus, in a ray tracer, when a ray is refracted across an interface, we need to scale the radiance that it is carrying by (ηₜ / ηᵢ)².
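
In a path tracer this boils down to a single multiplication wherever a refracted ray crosses an interface (a minimal sketch of the idea, not my exact implementation; here the incident and transmitted indices are defined with respect to the direction the light is traveling):

// Minimal sketch of the transmitted radiance correction (illustrative only):
// for light traveling from a medium with index etaI into a medium with index
// etaT, the radiance carried across the interface gets scaled by
// (etaT / etaI)^2.
struct Color { double r, g, b; };

Color scaleTransmittedRadiance(const Color& radiance, double etaI, double etaT) {
    double factor = (etaT / etaI) * (etaT / etaI);
    return { radiance.r * factor, radiance.g * factor, radiance.b * factor };
}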

An image from Eric Veach's thesis.

I had never thought about this until I read about it recently in Eric Veach's thesis. In section 5.2, Veach explains why BSDFs are non-symmetric when refraction is involved, and how to correct for this (see the thesis for a full explanation, and for a lot more great stuff). After reading Veach's explanation, I implemented a correction factor in my renderer. You can see some before and after images below.

For photon mapping, the correction factor is not needed, because photons carry flux (or power) instead of radiance, and because computing irradiance based on a photon map involves computing the photon density, which is naturally higher where the same number of photons have been squeezed into a smaller area.

In cases where light enters and then later exits a medium (or vice versa), the multipliers cancel each other out, which hides the problem. But in other cases, the problem can be very apparent. In the underwater images below, you can see that the path tracing and photon mapping versions do not match until the transmitted radiance fix is made. The path tracing image without the fix appears much too dark. If I had instead put the light under the water and taken a picture on the air side (the opposite situation), the scene would have appeared too bright rather than too dark.

Path tracing before transmitted radiance fix.

Photon mapping before transmitted radiance fix.

Path tracing after transmitted radiance fix.

Photon mapping after transmitted radiance fix.

There are some artifacts in these renders due to the lack of shading normals (Phong shading; smoothing by interpolating vertex normals across faces). My photon mapping implementation doesn't currently support shading normals, so I disabled them for these renders in order to make the best comparisons possible. Shading normals can add bias to renders, but they can also make images look much nicer when used appropriately and carefully.

There are also some artifacts due to problems with the scene, which I've since fixed. (In my next post I'll show a shot of the scene from above the water, cleaned up, and with shading normals.) I found the ladder on SketchUp, and then modified it in Maya (but I probably should have just made one from scratch for maximum control and to make it as clean as possible), and modeled the rest of the scene myself, mostly in Houdini. Houdini is nice for certain kinds of procedural modeling, and it makes it really easy to go back and modify anything.

Usually (depending on the settings), upon the first diffuse (or glossy) bounce, I shoot multiple rays. If the ray from the camera encounters a refractive surface before the first diffuse bounce, I split the ray, and then split the number of rays allocated to the first diffuse bounce proportionally between the reflected and refracted paths (deeper interactions with refractive surfaces use Russian Roulette to select between reflection and refraction to avoid excessive branching). As I was making these renders I caught and fixed an importance sampling bug where, after total internal reflection, the number of rays allocated to both paths was just one, instead of a representative fraction of the total. Since there is a lot of total internal reflection visible on the bottom of the water surface in this shot, this bug was causing the reflection of the pool interior to converge much more slowly than the direct view of the pool interior.
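
As a rough sketch of the allocation logic (simplified; it assumes the budget is split in proportion to the Fresnel reflectance, which is an illustrative choice rather than a description of my exact code):

// Rough sketch of splitting the ray budget at a refractive interface before
// the first diffuse bounce (illustrative only, not my exact code). Here the
// budget is divided in proportion to an assumed Fresnel reflectance, and
// under total internal reflection the entire budget stays with the reflected
// path, since no refracted path exists.
#include <algorithm>
#include <cmath>

struct RaySplit { int reflectedRays; int refractedRays; };

RaySplit splitRayBudget(int totalRays, double fresnelReflectance,
                        bool totalInternalReflection) {
    if (totalInternalReflection) {
        return { totalRays, 0 };
    }
    int reflected = (int)std::lround(totalRays * fresnelReflectance);
    reflected = std::min(std::max(reflected, 0), totalRays);
    return { reflected, totalRays - reflected };
}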

Tuesday, December 18, 2012

Architecture

To give you an idea of how the code behind Photorealizer is organized, here's a list of all of the C++ classes that I've written (updated January 25, 2013), with subclasses nested below their superclass. (A few of these things are actually class templates or namespaces with functions in them, but they might as well be classes.)


AdaptiveSampler
AdaptiveSamplerSample
Aperture
    PinholeAperture
    WideAperture
AxisAlignedBoundingBox
BasicMath
Bitmap
    LDRBitmap
    HDRBitmap
BitmapTextureMapData
BoundingSphere
BSDF
    CookTorranceBRDF
    LambertBRDF
    LambertBTDF
    ModifiedPhongBRDF
    OrenNayarBRDF
    SpecularMicrofacetBSDF
BSDFSample
BSDFWeight
BVH
BVHNode
Camera
    AngularFisheyeCamera
    LatitudeLongitudeCamera
    RegularCamera
CameraSample
Color
ColorProfile
    PowerLawColorProfile
    SRGBColorProfile
ColorRamp
ColorTransform
    BlackAndWhiteColorTransform
    BlackLevelColorTransform
    ColorBalanceColorTransform
    CyanColorTransform
    RedColorTransform
    RollOffContrastColorTransform
    SinusoidalContrastColorTransform
CustomOutput
Distribution1D
Distribution1DSample
Distribution2D
Distribution2DSample
FileNameUtils
FinalGatheringPoint
FinalGatheringSample
FinalGatheringSampler
Fresnel
HaltonSequence
ImprovedPerlinNoise
ImprovedPerlinNoiseFunctions
IndexOfRefraction
Interpolator
Intersection
IntersectionExtras
IrradianceSample
KDTree
KDTreeNode
KDTreePoint
Light
    DirectionalLight
    EnvironmentMap
        CIEOvercastSky
        ColorEnvironmentMap
        GradientEnvironmentMap
        HDREnvironmentMap
            FisheyeHDREnvironmentMap
            EquirectangularHDREnvironmentMap
    RectangularAreaLight
    SphericalLight
    Sun
LightIntersection
LightPhoton
LightSample
LinearAlgebra
Main
MainWindow
Medium
MersenneTwister
Motion
MultipleScattering
MultipleScatteringOctree
MultipleScatteringOctreeNode
OpenEXRImage
OpenEXRStuff
PathTracer
PhaseFunction
    HenyeyGreensteinPhaseFunction
    IsotropicPhaseFunction
Photon
    IrradiancePhoton
PNGWriter
PointMaterial
    Material
Rasterizer
Ray
    AxisAlignedBoundingBoxRay
ReconstructionFilter
    BoxFilter
    GaussianFilter
    TriangleFilter
ReflectionRefraction
RenderDispatcher
SceneObject
    Primitive
        AnyPolygon
        Cube
        Cylinder
        HeightField
        InfiniteGroundPlane
        Metaballs
        Sphere
    TransformableSceneObject
        SceneObjectContainer
            Model
            Pedestal
            Scene
                HardCodedScene
SellmeierCoefficients
Settings
    HardCodedSettings
Shutter
    InstantaneousShutter
SpatialGridHashTable
    SimpleIntersection
SpatialGridHashTablePoint
SpecularMicrofacetDistribution
    BeckmanDistribution
    GGXDistribution
Stopwatch
StringUtils
Test
TextureMap
    TextureMap2D
        BitmapTextureMap
        CheckeredTexture
    TextureMap3D
        MarbleTexture
        WoodTexture
Triangulator
Vertex
    VertexWithNormal
    VertexWithNormalAndUV
    VertexWithUV
Volumetric
VolumetricIntersection
Voxel
VoxelBuffer

I wrote all of that code (and designed the architecture) from scratch, except for the following minor pieces: a function for efficient ray–box intersection (I had previously written my own, but it wasn't as efficient), improved Perlin noise (I converted the Java code to C++; I've implemented Perlin noise before, but not the improved version), a few of the functions in my custom linear algebra library (I previously used a basic linear algebra library found here), and a basic UI based on this Qt image viewer example.

In addition to the classes listed above, I also utilize three libraries for loading and saving bitmap images (OpenEXR, stb_image, and libpng), as well as Qt for the GUI (used to use Cocoa on Mac), OpenMP for multithreading (used to use Grand Central Dispatch on Mac), and of course C++ (including some new TR1 and C++11 features).

I try to write code that is clean, descriptive, and readable. Here are a couple relevant quotes that I like:

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." ~Martin Fowler

"One of the miseries of life is that everybody names things a little bit wrong, and so it makes everything a little harder to understand in the world than it would be if it were named differently." ~Richard Feynman