Monday, April 30, 2012

Dispersion

Image from Wikimedia Commons.
One thing that's currently missing from Photorealizer (and many commercial and production renderers, too) is dispersion. Dispersion is what causes white light to split into its component spectrum through a prism, what causes rainbows, and what causes chromatic aberration when looking through a lens (which often appears as fringes of color at the edges of objects). (It's also one of the many things that could make my purple glass bunny render more realistic, which is what got me thinking about it.) Dispersion happens because the speed at which light travels in a medium depends on the frequency of the light. In other words, the refractive index (the ratio of the speed of light in a vacuum to the speed of light in the material) depends on the frequency of the incident light (I'm saying frequency instead of wavelength here because the wavelength changes depending on the medium, while the frequency stays the same). (By the way, amazingly, the refractive index is the square root of the dielectric constant.) In particular, lower frequencies (longer wavelengths) result in lower refractive indices. For example, the refractive index of glass is lower for red light than for blue light, so a higher proportion of the red light will be transmitted, and it will be bent, or refracted, less as it passes through the surface.

Adding dispersion to Photorealizer isn't my top priority, but I have given it some thought. Here are some of my ideas for implementing dispersion in Photorealizer (or any other distribution ray tracer):

Propagate light through the scene using a spectrum of a number of wavelengths, rather than just RGB triplets. For speed, trace all of the wavelengths together as you might for RGB color. Then when you hit a refractive surface, importance sample a wavelength from the distribution of wavelengths using inverse transform sampling, i.e. sampling the discrete CDF of the distribution. I have 1D and 2D discrete probability distributions in Photorealizer, which have come in handy for importance sampling HDR environment maps, lights, and BSDFs. Given the sampled wavelength and the material properties, use the Sellmeier equation to compute the index of refraction of the material at that wavelength, compute the refracted direction using that index, and shoot a ray in that direction. Use standard Monte Carlo integration and Russian roulette to give this new sample the proper weight.
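
To make the Sellmeier and wavelength-sampling steps concrete, here's a minimal C++ sketch. The Sellmeier coefficients are roughly the published values for Schott BK7 glass, and the helper names are just for illustration, not actual Photorealizer code:

```cpp
#include <cmath>
#include <vector>

// Sellmeier equation: n^2(lambda) = 1 + sum_i B_i * l^2 / (l^2 - C_i),
// with the wavelength in micrometers. The coefficients below are roughly
// the published values for Schott BK7 glass.
double sellmeierIOR(double wavelengthMicrometers) {
    const double B[3] = { 1.03961212, 0.231792344, 1.01046945 };
    const double C[3] = { 0.00600069867, 0.0200179144, 103.560653 };
    const double l2 = wavelengthMicrometers * wavelengthMicrometers;
    double n2 = 1.0;
    for (int i = 0; i < 3; ++i)
        n2 += B[i] * l2 / (l2 - C[i]);
    return std::sqrt(n2);
}

// Pick one wavelength bin by inverse transform sampling of a discrete CDF
// built from per-bin weights (e.g. the spectral power carried by the ray).
// Returns the bin index; the caller divides by the bin's probability
// (weight / total) to keep the Monte Carlo estimate unbiased.
int sampleWavelengthBin(const std::vector<double>& weights, double u) {
    double total = 0.0;
    for (double w : weights) total += w;
    double cdf = 0.0;
    for (size_t i = 0; i < weights.size(); ++i) {
        cdf += weights[i] / total;
        if (u <= cdf) return static_cast<int>(i);
    }
    return static_cast<int>(weights.size()) - 1;
}
```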

You could also simplify things a little at the cost of speed by always associating a single wavelength with each ray, instead of a spectrum.

At the end, the spectral radiance data needs to be converted into RGB data that can be displayed on the screen. To accomplish this, integrate the spectrum with the CIE XYZ color matching functions to convert the data to CIE XYZ tristimulus values. (I could also convert to a more biologically-based color space like LMS, using real cone response curves, but then I'd probably end up converting to CIE XYZ anyway.) From there, convert to sRGB primaries, apply gamma correction, and finally apply dithering and map each of the RGB components to 8 bit integers.
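
Here's a minimal sketch of that conversion in C++. It assumes tabulated CIE 1931 color matching functions are available through hypothetical lookup functions (cieX, cieY, cieZ), and uses the standard XYZ-to-linear-sRGB matrix and sRGB gamma encoding (dithering and 8-bit quantization are omitted):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical lookups into tabulated CIE 1931 color matching functions.
double cieX(double wavelengthNm);
double cieY(double wavelengthNm);
double cieZ(double wavelengthNm);

// Integrate a sampled spectrum (radiance per wavelength bin) against the
// CIE color matching functions using a simple Riemann sum.
void spectrumToXYZ(const std::vector<double>& wavelengthsNm,
                   const std::vector<double>& radiance,
                   double& X, double& Y, double& Z) {
    X = Y = Z = 0.0;
    const double binWidth =
        (wavelengthsNm.back() - wavelengthsNm.front()) / (wavelengthsNm.size() - 1);
    for (size_t i = 0; i < wavelengthsNm.size(); ++i) {
        X += radiance[i] * cieX(wavelengthsNm[i]) * binWidth;
        Y += radiance[i] * cieY(wavelengthsNm[i]) * binWidth;
        Z += radiance[i] * cieZ(wavelengthsNm[i]) * binWidth;
    }
}

// sRGB gamma encoding applied to a linear value in [0, 1].
double srgbEncode(double c) {
    c = std::clamp(c, 0.0, 1.0);
    return c <= 0.0031308 ? 12.92 * c : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}

// Standard CIE XYZ -> linear sRGB (D65) matrix, then gamma encoding.
void xyzToSRGB(double X, double Y, double Z, double& r, double& g, double& b) {
    r = srgbEncode( 3.2406 * X - 1.5372 * Y - 0.4986 * Z);
    g = srgbEncode(-0.9689 * X + 1.8758 * Y + 0.0415 * Z);
    b = srgbEncode( 0.0557 * X - 0.2040 * Y + 1.0570 * Z);
}
```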

An alternate way of converting the spectral data to RGB would be to draw the wavelengths of your camera rays from the R, G, and B sensitivity curves of your camera or your eye in the first place. This would be an intuitive approach if you only want an RGB or LMS image in the end. But if you want a physically accurate breakdown of the final image over the entire visible spectrum, it would be easier to select samples uniformly over the visible spectrum (or even beyond the visible spectrum, say if you want to include fluorescence and other cool wavelength-dependent effects of light).

This technique touches on many fields: physics, probability, colorimetry, etc. The breadth of disciplines involved is one of the things I really like about rendering. Not only is exposure to different fields interesting, it is key for creative thinking and innovation.

Tuesday, April 24, 2012

Monte Carlo Subsurface Scattering

Monte Carlo subsurface scattering.
Last week I added Monte Carlo subsurface scattering to Photorealizer. I'll post more details soon, but for now here's a quick and rough overview and some cool pictures.

I implemented the Henyey-Greenstein phase function for anisotropic scattering. I added volumetric texture support via ray marching. For now, I assume the scattering coefficient doesn't vary significantly by wavelength or in space, so I can trace all wavelengths together, and, in the case of volumetric textures, ray march to find the absorption after finding the next intersection or scattering point.
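
For reference, a scattering direction can be sampled from the Henyey-Greenstein phase function analytically. Below is a generic sketch of the standard inversion (g is the mean cosine of the scattering angle); it's just an illustration of the math, not my exact code:

```cpp
#include <cmath>

// Sample the cosine of the scattering angle from the Henyey-Greenstein
// phase function with mean cosine g, using the standard analytic inversion.
// u1 is a uniform random number in [0, 1).
double sampleHGCosTheta(double g, double u1) {
    if (std::abs(g) < 1e-3)
        return 1.0 - 2.0 * u1;  // nearly isotropic: cosine is uniform in [-1, 1]
    const double s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u1);
    return (1.0 + g * g - s * s) / (2.0 * g);
}

// The azimuthal angle around the incoming direction is uniform in [0, 2*pi).
double sampleHGPhi(double u2) {
    return 2.0 * 3.14159265358979323846 * u2;
}
```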

Monte Carlo subsurface scattering is physically based and very realistic, which is why I like it. It's much slower than approximation algorithms, but unlike most of them it has no memory overhead and doesn't require a pre-pass. That said, it's actually not as slow as I expected, and could be fairly practical for rendering materials that aren't highly scattering.

I'll be able to extend this system to implement single scattering to go along with my diffusion-based multiple scattering. And I'll be able to use it as a reference, for visual validation of approximation algorithms.

All the images here were rendered in 720p resolution. Click them to view them at full size.

High scattering coefficient. Moderately forward scattering. Procedural volumetric texture. Because of the high scattering coefficient, most of the light doesn't get very far past the surface, and the look approaches that of a diffuse BRDF.

Scattering coefficient set to 1/4 of that in the first blue bunny image above, and other parameters held constant. As the average scattering distance increases, light on average propagates deeper into the material and more of it is absorbed, resulting in lower overall radiance and higher saturation. You can also see that more light is transmitted all the way through thin sections like the ears.

Scattering coefficient set to 1/64 of that in the first blue bunny image, and other parameters held constant. The trends observed in the previous image continue.

Compared to the first blue bunny image, this one uses a lower scattering coefficient (1/32 of the first blue bunny), as well as a lower absorption coefficient (1/8 of the first blue bunny), and a higher mean cosine of the scattering angle (more strongly forward scattering).
While not intentional, the first two of these blue bunnies remind me of toothpaste, albeit with a higher index of refraction. The second one looked very much like that glittery kind of toothpaste when it was still very noisy, before it had converged very far.

Friday, April 20, 2012

First Images with Multiple Scattering

Early last week, I set aside a day and finished the core of my multiple scattering implementation, so I can now efficiently render translucent materials with subsurface scattering! With this I'll be able to render things like milk, tomatoes, grapes (I must be hungry), plastic, marble, and lots of other new things.

This is still a work in progress, with many related features and improvements left to implement (including single scattering, multicolored objects (I already support separate scattering parameters for the R, G, and B components), and a variety of little things), but I've structured everything with these things in mind, and now coding most of them should be relatively straightforward.

A lot went into this. Check out my previous couple posts for some background information, implementation details, and a list of resources I've used. To summarize, my multiple scattering implementation is primarily based on Jensen and Buhler's 2002 paper A Rapid Hierarchical Rendering Technique for Translucent Materials, with the help of many other resources as well (many of which I listed in a previous post). At some point, I'd like to post more implementation details and write about the specific challenges involved, such as designing a good architecture, storing separate point clouds for separate objects, making it compatible with other features like instancing and transformations, extending the model to multiple wavelengths, and identifying a couple errors and ambiguities in the paper.

Here's an image that would be nearly impossible to render without a combination of approximate multiple scattering and environment map importance sampling:


That bunny has a perfectly smooth surface. I've also added support for rough surfaces:


Just as with my physically-based diffuse / specular model, what's reflected specularly from the surface is not available to take part in subsurface scattering, so the system conserves energy.
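
One simple way to get that behavior is to split at the surface by the Fresnel reflectance, so that the specular and subsurface terms are complementary. Here's a rough sketch using Schlick's approximation and made-up names; it's an illustration of the idea rather than Photorealizer's actual code:

```cpp
#include <cmath>

// Schlick's approximation to the Fresnel reflectance of a dielectric.
double schlickFresnel(double cosTheta, double iorOutside, double iorInside) {
    double r0 = (iorOutside - iorInside) / (iorOutside + iorInside);
    r0 *= r0;
    const double m = 1.0 - cosTheta;
    return r0 + (1.0 - r0) * m * m * m * m * m;
}

// Energy-conserving split: light reflected specularly at the surface never
// enters the subsurface term, and vice versa. u is uniform in [0, 1).
enum class SurfaceEvent { SpecularReflect, EnterSubsurface };

SurfaceEvent chooseSurfaceEvent(double cosTheta, double u) {
    const double F = schlickFresnel(cosTheta, 1.0, 1.5);  // e.g. air to IOR 1.5
    return (u < F) ? SurfaceEvent::SpecularReflect
                   : SurfaceEvent::EnterSubsurface;
}
```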

And for reference, here's a quick render of an opaque bunny (not exactly the same color) in the same place:


As you can tell, the scattering bunnies above are highly translucent compared to this one.

Thursday, April 19, 2012

Working towards Multiple Scattering

As I mentioned in my last post, I've been working on implementing multiple scattering using the approach described in A Rapid Hierarchical Rendering Technique for Translucent Materials.

As of a few weeks ago, I had done some necessary refactoring to support approximate multiple scattering, and I had written code to generate a uniform distribution of points on the surface of any object, compute the irradiance at each point, store the points in a custom octree, and traverse the octree.
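
The basic idea behind the point generation is dart throwing: sample candidate points uniformly by surface area, and reject any candidate that lands too close to an already accepted point. Here's a simplified sketch with hypothetical helper names (the real routine also has to handle transformations and per-object point clouds):

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Point3 { double x, y, z; };

double distance(const Point3& a, const Point3& b) {
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Hypothetical helper: returns a point chosen uniformly by area on the mesh
// surface (area-weighted triangle selection plus uniform barycentric sampling).
Point3 sampleSurfaceUniformly(std::mt19937& rng);

// Dart throwing: keep candidates that are at least minSpacing away from every
// point accepted so far, and stop after too many consecutive rejections
// (allowing more rejections gives a more even, closer-to-optimal distribution).
std::vector<Point3> generateSurfacePoints(double minSpacing,
                                          int maxConsecutiveRejections,
                                          std::mt19937& rng) {
    std::vector<Point3> accepted;
    int rejections = 0;
    while (rejections < maxConsecutiveRejections) {
        const Point3 candidate = sampleSurfaceUniformly(rng);
        bool tooClose = false;
        for (const Point3& p : accepted) {
            if (distance(candidate, p) < minSpacing) { tooClose = true; break; }
        }
        if (tooClose) {
            ++rejections;
        } else {
            accepted.push_back(candidate);
            rejections = 0;
        }
    }
    return accepted;
}
```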

Below are some pictures related to generating a distribution of equally-spaced points on the surface of an object. I made this work even when transformations are applied. The first image below is the original sphere, the second has been stretched horizontally, and the third has been scaled uniformly in all dimensions, with points generated with the same density in all cases. In the fourth image the density has been increased.





Below are images of a bunny as more and more points are added to approach an optimal distribution (more rejections allowed in my sampling routine):




I saved the renders below while working on correctly sampling the irradiance at sample points on the surface—the irradiance resulting from both direct and indirect illumination:




Final result.

Reference

Wednesday, April 18, 2012

Subsurface Scattering

Subsurface scattering is a key component of the appearance of many materials. Subsurface scattering happens when light enters a material at one point, scatters around while potentially being absorbed along the way, then exits the material at a different point. All dielectrics exhibit subsurface scattering to some extent; in fact, subsurface scattering is the main mechanism behind diffuse reflection. For highly opaque materials, or when the object is viewed from a substantial distance, a BRDF can be used to approximate this subsurface scattering. The problem is that BRDF models assume that light enters and exits at the same point on the surface of an object, which results in a hard and opaque appearance. Thus many soft, organic, translucent materials—for which light scatters a lot and travels a significant distance beneath the surface—simply cannot be rendered realistically using a BRDF.

I've been very interested in implementing subsurface scattering in my path tracer for quite some time. The main challenge with simulating subsurface scattering is doing so in a reasonable amount of time. Monte Carlo path tracing of opaque surfaces is slow enough already. With translucent materials, light can scatter hundreds or thousands of times beneath the surface before coming out again, and in some cases (e.g., milk) it's hardly absorbed at all along the way. Luckily you can get good results using approximations (transport theory is itself an approximation (of electromagnetic scattering theory), but I'm talking about more significant optimizations).

For the past few months, on and off, I've been thinking about and working toward subsurface scattering. In particular so far I've been working on a point-based, hierarchical, dipole diffusion approximation approach to multiple scattering (i.e., light that has scattered multiple times beneath the surface, as opposed to a single time). My implementation is mainly based on the paper A Rapid Hierarchical Rendering Technique for Translucent Materials. I'll post more about my progress soon.

Below are some of the resources I've been using to understand and implement subsurface scattering. I've edited this post to add resources that I hadn't yet used at the time of initial writing:

Monday, April 16, 2012

Refraction through Rough Surfaces

A ground glass version of the Purple Glass Bunny render I posted last week.

Over a year ago I implemented the Cook-Torrance microfacet model for rough specular reflection. Just a few weeks ago, I implemented a state-of-the-art microfacet model for rough specular reflection and transmission, based on the excellent 2007 paper Microfacet Models for Refraction through Rough Surfaces. It's a complete reflection/transmission BSDF, where the reflection part is an improved version of the original Cook-Torrance model, and the transmission part is based on much of the same theory and math. The new model includes high quality Smith shadowing and masking functions, a slightly modified form of the Beckmann distribution, a brand new microfacet distribution called GGX designed to better fit real-life measurements of certain materials, and good importance sampling, all of which I implemented.
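
As an example of what the importance sampling involves, drawing a microfacet normal from the GGX distribution reduces to a simple analytic inversion. Here's a minimal sketch following the formulas in the Walter et al. paper, with alpha as the roughness parameter (the sampled half-vector is then used to reflect or refract the incoming direction):

```cpp
#include <cmath>

// Sample the spherical angles of a microfacet normal m from the GGX
// distribution with roughness alpha, using the inversion given in
// "Microfacet Models for Refraction through Rough Surfaces" (Walter et al. 2007).
// u1 and u2 are uniform random numbers in [0, 1).
void sampleGGXNormal(double alpha, double u1, double u2,
                     double& thetaM, double& phiM) {
    thetaM = std::atan(alpha * std::sqrt(u1) / std::sqrt(1.0 - u1));
    phiM   = 2.0 * 3.14159265358979323846 * u2;
}

// The GGX normal distribution function itself, useful for sample weights:
// D(m) = alpha^2 / (pi * ((n.m)^2 * (alpha^2 - 1) + 1)^2)
double ggxD(double alpha, double cosThetaM) {
    const double a2 = alpha * alpha;
    const double d = cosThetaM * cosThetaM * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265358979323846 * d * d);
}
```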

Below are some renders from the development process:

Blue ground glass using the GGX microfacet distribution.

Blue ground glass using the Beckmann microfacet distribution.

Smoother version using the Beckmann distribution.

New microfacet model.
Original Cook-Torrance microfacet model (it's slightly brighter).
Ground glass using the GGX distribution.

Another relatively smooth bunny.
Very smooth black ball using the new microfacet model. Because the surface is so smooth and the specular lobe so tight, this render wouldn't have been feasible in a reasonable amount of time without high quality importance sampling for selecting BSDF samples.
Ideal specular version for comparison, to show that it closely matches the microfacet version above. The main differences (besides different run-times and levels of convergence) are that the microfacet version is not perfectly smooth, so it has slightly blurred reflections, and it's slightly darker due to shadowing and masking.
Another quick render, this time using the improved bunny model (without holes in the bottom) that I used for the big purple glass bunny render I posted recently. This is a brighter but lower quality version of the image at the top of this post.

A rough gold bunny.

Sunday, April 15, 2012

GPU Path Tracer

Karl Li and I are building a GPU path tracer in CUDA for CIS 565 at The University of Pennsylvania. Our goal is to make a physically-based unbiased path tracer that can render pretty pictures fast. And it's not just fast—it's interactive! Below are a couple images showing what it's capable of so far. You can follow our progress at http://gpupathtracer.blogspot.com/.



Friday, April 13, 2012

Purple Glass Bunny in 720p

As I did with my Teacup and Spoon image, I recently re-rendered my Purple Glass Bunny image in high quality 720p resolution. Unlike the Teacup and Spoon, the overall look of this one has significantly changed since last time, because the original bunny render had a few significant problems. First, the original purple glass bunny had a bug where the light stopped being attenuated after the first internal bounce, i.e., the purple glass changed to clear glass. Second, the original bunny model had a few giant holes in the bottom. I didn't even realize they were there until recently. I fixed this by finding a higher quality version of the bunny model online. Both of those problems detracted from the realism of the original image. This time around, I also made some tweaks to the composition and colors, and applied a subtle curve to raise the black level and increase contrast a tiny bit (all in Photorealizer; nothing in post). Here's the result:

New version of my Purple Glass Bunny render. Click to view full size!
And here's the old 480p version for comparison:

Last year's Purple Glass Bunny.

Thursday, April 12, 2012

Colorimetry

Color is an extremely fascinating and useful topic. It's also much deeper and more complex than it might appear on the surface. The subject of color spans many fields including physics, biology, psychology, philosophy, and art. Several months ago I started studying color in more depth from a few angles, including colorimetry: the measurement of color.

There are lots of color spaces and color appearance models that try to quantify the human perception of color. One of the newer and better ones is called CIE L*a*b*. CIE L*a*b* was designed to be perceptually uniform, that is, the distance between two colors in the space is proportional to the perceptual difference between them. Like all color spaces, it's based on three dimensions. In this case, L* is the perceptual lightness, while a* and b* represent the two chromatic channels of the opponent process model of color vision.
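
For reference, here's a minimal sketch of the conversion from linear sRGB to CIELAB, which goes through CIE XYZ (these are the standard published matrices and constants for a D65 white point, not Photorealizer's actual code):

```cpp
#include <cmath>

// Linear sRGB (D65) -> CIE XYZ, using the standard matrix.
void linearSRGBToXYZ(double r, double g, double b,
                     double& X, double& Y, double& Z) {
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}

// The nonlinearity used by CIELAB.
double labF(double t) {
    const double delta = 6.0 / 29.0;
    return t > delta * delta * delta
               ? std::cbrt(t)
               : t / (3.0 * delta * delta) + 4.0 / 29.0;
}

// CIE XYZ -> CIE L*a*b*, relative to the D65 white point.
void xyzToLab(double X, double Y, double Z,
              double& L, double& a, double& b) {
    const double Xn = 0.95047, Yn = 1.0, Zn = 1.08883;  // D65 white
    const double fx = labF(X / Xn), fy = labF(Y / Yn), fz = labF(Z / Zn);
    L = 116.0 * fy - 16.0;  // lightness depends only on relative luminance Y
    a = 500.0 * (fx - fy);
    b = 200.0 * (fy - fz);
}
```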

Recently, I used Photorealizer's sampling and image processing abilities to render a top-down view of the CIE L*a*b* (CIELAB for short) color solid showing only the sRGB gamut—the colors that are displayable on a typical computer screen. Here's the result:


Notice that the primary (red, green, and blue) and secondary colors (cyan, magenta, yellow) of your computer screen are located at the corners, and white is right in the center.

The colors in that image are not all at the same position on the lightness axis. Here's a version showing the lightness everywhere on that image:


Perceptual lightness is not just an average of the RGB values. Nor is it the same thing as relative luminance (Y), though it is derived from it. I won't go into too much detail here, but I'm planning on posting more color stuff in the future (and maybe I'll even extend this post).

If I just take a cross-section of the CIELAB color space in the a*–b* plane, all of the values have the same perceptual lightness. It looks like this at L* = 75:


And this at L* = 50:


If you were to look at the gamut of human color vision in CIELAB space, it would be much larger at any given lightness (and wouldn't have sharp corners). sRGB can only represent a fraction of the colors that typical humans can see.

Thursday, April 5, 2012

Teacup and Spoon in 720p

Recently I revisited one of my first carefully crafted renders, to re-render it in higher resolution and with less noise. I also wanted to save a high dynamic range OpenEXR version of it, because I hadn't yet implemented OpenEXR export when I made the original render. Some differences in my renderer subtly change the look of the scene, and I also tweaked the focus and depth of field, but for the most part it's just a higher quality version of the same render. No particularly new technology; just eye candy (which I hope to provide more of in the near future). Hope you like it.

My new 720p Teacup and Spoon render. Click for full size!

That image is the unmodified PNG that popped out of my renderer, but having the OpenEXR version (or any HDR or RAW format for that matter) would allow me to edit the image in the future without losing appreciable quality. I could even change the exposure. To illustrate this point, here are the EXR version and the LDR version, each with their exposures lowered two stops in Photoshop:

HDR OpenEXR version with exposure lowered two stops.
LDR PNG version with exposure lowered two stops.

Notice that the highlights (notably the ones at the bottom left corner of the spoon) that were clipped in the original LDR version (at the top of the post) have now lost all detail in the adjusted version. But in the HDR version, new detail has been revealed.
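
The underlying reason is simple: lowering exposure by two stops is just a multiplication by 2^-2 = 0.25 in linear radiance. Applied to HDR data, values that were above 1.0 come back into displayable range; applied to already-clipped LDR data, the clipped highlights stay flat. A tiny sketch of the idea (not the actual Photoshop math):

```cpp
#include <algorithm>
#include <cmath>

// Lowering exposure by `stops` stops scales linear radiance by 2^-stops.
// On HDR data, values that were above 1.0 (clipped in the displayed image)
// come back down into range, so their detail is recovered.
double adjustExposureHDR(double linearRadiance, double stops) {
    return linearRadiance * std::pow(2.0, -stops);
}

// On an already-clipped LDR value, anything that was clamped to 1.0 just
// becomes a flat 0.25 (for two stops): the highlight detail is gone for good.
double adjustExposureLDR(double clippedValue, double stops) {
    return std::clamp(clippedValue, 0.0, 1.0) * std::pow(2.0, -stops);
}
```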