Saturday, March 9, 2013

Better Dispersion

Photorealizer now has high quality, physically accurate, spectral dispersion.

I improved the dispersion system in Photorealizer using some ideas that I wrote about in two old posts and a comment (here, here, and here). Instead of just tracing rays for RGB primaries, I now trace rays across the entire visible spectrum. Most parts of Photorealizer still use RGB color, but when a dispersive material is encountered, I choose a single wavelength from the visible spectrum, determine the index of refraction using the Sellmeier equation, and then continue path tracing with that wavelength.

Then I convert the results to sRGB primaries using the spectral responsivity curves of the camera's RGB sensor. I derived these spectral responsivity curves from the new physiologically relevant CIE XYZ color matching functions (which I wrote about in this post on my sky renderer blog), and normalized them such that pure white light (containing equal amounts of all the wavelengths of the visible spectrum) is transformed into pure white in the sRGB color space (1,1,1) (i.e., I made each curve integrate to one).

The spectral responsivity curves can take on negative values, and I don't clip negative color components until a bitmap image is output, which allows colors outside the sRGB gamut to be represented using the sRGB primaries. While these negative colors can't be displayed on a typical display, they are still useful and necessary for accurate color math. For example, adding a positive color to a negative one can result in a positive color in the displayable range, but if the negative color had been clipped to zero the result of the addition would be different and wrong.
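
To make that pipeline concrete, here's a simplified, self-contained sketch of the wavelength handling. It is not Photorealizer's actual code: it assumes uniform wavelength sampling over 380–780 nm, uses the commonly quoted two-term Sellmeier fit for diamond, substitutes an analytic multi-lobe Gaussian fit to the CIE 1931 color matching functions for the tabulated physiologically relevant CMFs, and uses the standard XYZ-to-linear-sRGB matrix instead of my normalized responsivity curves.

```cpp
#include <cmath>
#include <random>

// Index of refraction of diamond from the common two-term Sellmeier fit
// (lambda in micrometers). At 589 nm this gives n = 2.417.
double diamondIOR(double lambdaMicrometers)
{
    const double l2 = lambdaMicrometers * lambdaMicrometers;
    const double n2 = 1.0 + 0.3306 * l2 / (l2 - 0.175 * 0.175)
                          + 4.3356 * l2 / (l2 - 0.106 * 0.106);
    return std::sqrt(n2);
}

// Piecewise Gaussian lobe used by the analytic CMF fit below.
double gaussLobe(double x, double mu, double s1, double s2)
{
    const double s = (x < mu) ? s1 : s2;
    const double t = (x - mu) / s;
    return std::exp(-0.5 * t * t);
}

// Multi-lobe Gaussian approximation to the CIE 1931 2-degree color matching
// functions (lambda in nanometers).
void cieXYZ(double lambda, double& X, double& Y, double& Z)
{
    X = 1.056 * gaussLobe(lambda, 599.8, 37.9, 31.0)
      + 0.362 * gaussLobe(lambda, 442.0, 16.0, 26.7)
      - 0.065 * gaussLobe(lambda, 501.1, 20.4, 26.2);
    Y = 0.821 * gaussLobe(lambda, 568.8, 46.9, 40.5)
      + 0.286 * gaussLobe(lambda, 530.9, 16.3, 31.1);
    Z = 1.217 * gaussLobe(lambda, 437.0, 11.8, 36.0)
      + 0.681 * gaussLobe(lambda, 459.0, 26.0, 13.8);
}

// Accumulate a single-wavelength radiance sample into linear sRGB. The
// components can go negative for out-of-gamut colors; they are kept as-is
// and only clipped when the final bitmap is written.
void addSpectralSample(double lambda, double radiance, double rgb[3])
{
    double X, Y, Z;
    cieXYZ(lambda, X, Y, Z);
    rgb[0] += radiance * ( 3.2406 * X - 1.5372 * Y - 0.4986 * Z);
    rgb[1] += radiance * (-0.9689 * X + 1.8758 * Y + 0.0415 * Z);
    rgb[2] += radiance * ( 0.0557 * X - 0.2040 * Y + 1.0570 * Z);
}

// Pick the single wavelength (in nm) used for the rest of a path once it
// hits a dispersive material.
double sampleWavelength(std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(380.0, 780.0);
    return u(rng);
}
```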

I rendered new images of diamonds using my improved dispersion system. These images are 1600x900. You can click on an image to enlarge it, or you can open an image in a new tab or download it to ensure that you are viewing it at full size.

Diamonds with dispersion. The scene is completely grayscale; all of the colors in the image come from dispersion. Click to view at full size and see all of the colorful details.

Diamonds without dispersion.

Diamonds with dispersion but without transmission.

Highly saturated version of the image above, to clearly show that dispersion affects reflection, not just refraction (higher frequency (i.e., bluer) → higher index of refraction → higher reflection coefficient).

An old dispersion render (from this post) in which I traced only 3 specific wavelengths: 1 for red, 1 for green, and 1 for blue. (There are some other differences between this and the new render as well.)

Subsurface Scattering Improvements

I recently made some significant improvements and additions to Photorealizer's subsurface scattering (SSS) capabilities. In particular, I implemented the following:

• separate scattering coefficients for R, G, and B channels (for both path-traced and diffusion-based SSS)
• the Better Dipole model (for more accurate diffusion-based SSS)
• basic single scattering (for use with diffusion-based SSS)
• several bug fixes (for diffusion-based SSS)

Below are some new Photorealizer renders showing off the new features. Click the images to view them at full size and do back-to-back comparisons. The strong transfer curve I've applied might slightly exaggerate the differences between the path tracing and diffusion versions.

Path tracing.

Diffusion-based multiple scattering plus single scattering.

Diffusion-based multiple scattering only.

Diffuse BRDF approximation to multiple scattering.

Diffuse BRDF approximation plus single scattering.

Single scattering only.

No transmission at all.

Miscellaneous Details

Below are some miscellaneous details about my SSS systems.

My Monte Carlo path tracing SSS system is unbiased and physically based. I use the Henyey–Greenstein phase function for anisotropic scattering. I currently support only homogeneous media, although I could use unbiased distance sampling (which I implemented in my sky renderer) for heterogeneous media. In the updated system, when the R, G, and B scattering coefficients differ from one another, I trace separate rays for the separate channels.
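
To give a concrete picture of the path-traced system, here's a stripped-down sketch of the two sampling routines it relies on (illustrative names, not my actual code): exponential free-path sampling in a homogeneous medium, and Henyey–Greenstein direction sampling.

```cpp
#include <cmath>

// Sample the distance to the next scattering event in a homogeneous medium
// with extinction coefficient sigmaT (pdf: sigmaT * exp(-sigmaT * t)).
// xi is a uniform random number in [0, 1).
double sampleFreePath(double sigmaT, double xi)
{
    return -std::log(1.0 - xi) / sigmaT;
}

// Sample cos(theta) from the Henyey-Greenstein phase function with
// asymmetry parameter g in (-1, 1). xi is uniform in [0, 1).
double sampleHenyeyGreensteinCosTheta(double g, double xi)
{
    if (std::abs(g) < 1e-3)
        return 1.0 - 2.0 * xi; // isotropic limit
    const double s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi);
    return (1.0 + g * g - s * s) / (2.0 * g);
}

// Phase function value, useful for validation or importance-sampling weights.
double henyeyGreenstein(double cosTheta, double g)
{
    const double kPi = 3.14159265358979323846;
    const double d = 1.0 + g * g - 2.0 * g * cosTheta;
    return (1.0 - g * g) / (4.0 * kPi * d * std::sqrt(d));
}
```

When the R, G, and B scattering coefficients differ, each channel's ray simply uses its own sigmaT (and scattering albedo) in these routines, which is why the channels have to be traced separately.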

As I've described in previous posts, my diffusion-based SSS system uses a precomputed, hierarchical point cloud of irradiance samples. The system works with multiple objects and even instancing—I create a separate point cloud for each object that is marked for diffusion-based multiple scattering.
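
For reference, here's what the diffusion lookup boils down to, written out with the classical dipole profile from Jensen et al. (2001). The Better Dipole keeps the same overall structure but refines the boundary and exitance terms, so this simplified sketch is not literally what Photorealizer evaluates, and the names are illustrative.

```cpp
#include <cmath>
#include <vector>

struct IrradiancePoint
{
    double distance;   // distance from the shading point to this sample
    double irradiance; // irradiance stored at the sample
    double area;       // surface area represented by the sample
};

// Diffuse reflectance profile Rd(r) of the classical dipole.
double dipoleRd(double r, double sigmaA, double sigmaSPrime, double eta)
{
    const double sigmaTPrime = sigmaA + sigmaSPrime;      // reduced extinction
    const double alphaPrime  = sigmaSPrime / sigmaTPrime; // reduced albedo
    const double sigmaTr     = std::sqrt(3.0 * sigmaA * sigmaTPrime);

    // Diffuse Fresnel reflectance approximation and boundary factor A.
    const double fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
    const double A   = (1.0 + fdr) / (1.0 - fdr);

    const double zr = 1.0 / sigmaTPrime;          // real source depth
    const double zv = zr * (1.0 + 4.0 / 3.0 * A); // virtual source height
    const double dr = std::sqrt(r * r + zr * zr);
    const double dv = std::sqrt(r * r + zv * zv);

    const double kPi = 3.14159265358979323846;
    return alphaPrime / (4.0 * kPi) *
           (zr * (sigmaTr * dr + 1.0) * std::exp(-sigmaTr * dr) / (dr * dr * dr) +
            zv * (sigmaTr * dv + 1.0) * std::exp(-sigmaTr * dv) / (dv * dv * dv));
}

// Sum the contributions of the gathered irradiance samples at a shading point.
double diffuseExitance(const std::vector<IrradiancePoint>& points,
                       double sigmaA, double sigmaSPrime, double eta)
{
    double sum = 0.0;
    for (const IrradiancePoint& p : points)
        sum += dipoleRd(p.distance, sigmaA, sigmaSPrime, eta) * p.irradiance * p.area;
    return sum; // Fresnel transmittance at entry and exit is omitted here
}
```

The loop above visits every sample for clarity; the point of the hierarchy is that distant parts of the point cloud can be aggregated instead of being visited one by one.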

To compute the albedo for the diffuse BRDF approximation to the Better Dipole model, I used numerical quadrature rather than attempting an analytical integration. This only needs to be done once per material, so it has no noticeable impact on render times.
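
Concretely, the albedo here amounts to integrating the radial diffusion profile over the plane, i.e. the integral of Rd(r) · 2πr from r = 0 to infinity. Below is a simplified illustration using composite Simpson's rule over a truncated domain; it shows the idea rather than the exact code in the renderer.

```cpp
#include <cmath>
#include <functional>

// Numerically integrate Rd(r) * 2*pi*r dr from 0 to maxRadius with composite
// Simpson's rule. numIntervals must be even; maxRadius should span enough
// diffusion lengths that the tail is negligible.
double albedoFromProfile(const std::function<double(double)>& Rd,
                         double maxRadius, int numIntervals)
{
    const double kPi = 3.14159265358979323846;
    const double h = maxRadius / numIntervals;
    auto f = [&](double r) { return Rd(r) * 2.0 * kPi * r; };

    double sum = f(0.0) + f(maxRadius);
    for (int i = 1; i < numIntervals; ++i)
        sum += f(i * h) * ((i % 2 == 1) ? 4.0 : 2.0);
    return sum * h / 3.0;
}
```

With the dipoleRd sketch from above, the call would look like albedoFromProfile([&](double r) { return dipoleRd(r, sigmaA, sigmaSPrime, eta); }, maxRadius, 10000), where maxRadius is something like 20 / sigmaTr.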

Several of the resources that I've been using for diffusion-based SSS have lots of ambiguities and some errors. By cross-referencing various sources and closely examining the derivations of the models, I was able to identify several things that needed to be fixed in my implementation. I did some relatively extensive testing, and I believe that all of the major pieces of my implementation are now correct. As a result, my renders are now much more physically accurate.

My diffusion and path tracing renders still don't match perfectly due to the limitations and inaccuracies of the dipole diffusion approximation. The dipole diffusion approximation assumes that the object is a semi-infinite, homogeneous slab, and does not handle thin or curvy geometry well. In the model, light is forced to a depth of one mean free path before scattering, which is especially inaccurate near the source, blurring away fine details and causing low albedo colors to be absorbed too much. Some light goes all the way through the object, and this is not handled properly. Further inaccuracies are inevitable due to other assumptions that the model makes and approximations made in the derivation. Path tracing is still the best option when you want the most accurate results—it captures all of the subtleties automatically, and the resulting images have noticeably more depth.

Test Renders

Below are some test renders. The scene is a top-down view of a large slab of material. The material scatters and absorbs all wavelengths equally. There is a magenta environment map surrounding the scene (which is why the material looks magenta). In the upper right, there is a green spherical emitter. The black line is a thin wall that prevents light from the sphere from passing to the other side except by way of SSS. In other words, all of the green to the right of the wall is a result of subsurface scattering.

First 4 images:
Relative IOR: 1.62
Phase function: isotropic
Single-scattering albedo: 0.999

Path tracing.

Diffusion-based multiple scattering plus single scattering.

Diffusion-based multiple scattering only.

Diffuse BRDF approximation to multiple scattering.

Next 4 images:
Relative IOR: 1
Phase function: isotropic
Single-scattering albedo: 0.999

Path tracing.

Diffusion-based multiple scattering plus single scattering.

Diffusion-based multiple scattering only.

Diffuse BRDF approximation to multiple scattering.

Final 4 images:
Relative IOR: 1
Phase function: isotropic
Single-scattering albedo: 0.667

Path tracing.

Diffusion-based multiple scattering plus single scattering.

Diffusion-based multiple scattering only.

Diffuse BRDF approximation to multiple scattering.