Saturday, May 26, 2012

First Images with Dispersion

I recently implemented dispersion in Photorealizer using some of the ideas I described in a previous post. I'm using the Sellmeier equation along with material-specific Sellmeier coefficients to determine the refractive index of the material at any given wavelength. Right now I'm only tracing three specific wavelengths, one for each of red, green, and blue. Because the sRGB primaries are not spectral colors, I simply chose nearby spectral colors, keeping things simple for now.
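For reference, here's a minimal sketch of a Sellmeier evaluation, with commonly quoted coefficients for BK7 glass (the coefficients and example wavelengths below are illustrative, not necessarily the exact values I use):

```cpp
#include <cmath>

// Sellmeier equation: n^2(lambda) = 1 + sum_i B_i * l^2 / (l^2 - C_i),
// with the wavelength lambda in micrometers. The coefficients here are
// commonly quoted values for BK7 glass; other materials use their own sets.
double sellmeierIOR(double lambdaMicrometers) {
    const double B[3] = { 1.03961212, 0.231792344, 1.01046945 };
    const double C[3] = { 0.00600069867, 0.0200179144, 103.560653 };
    double l2 = lambdaMicrometers * lambdaMicrometers;
    double n2 = 1.0;
    for (int i = 0; i < 3; ++i) {
        n2 += B[i] * l2 / (l2 - C[i]);
    }
    return std::sqrt(n2);
}

// For example, three spectral wavelengths near the sRGB primaries might be
// roughly 610 nm, 550 nm, and 465 nm:
// double nR = sellmeierIOR(0.610), nG = sellmeierIOR(0.550), nB = sellmeierIOR(0.465);
```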

Here is a render of diamonds with dispersion. The color in this image comes entirely from dispersion.

Diamonds with dispersion. Click to view full size.

Here's the same render, before bloom was applied (my bloom is a post-process; the renderer applies it to the completed image):

Same render as above, but without bloom.

Here is a similar scene rendered with and without dispersion, for comparison. The color in these images comes entirely from dispersion as well. The effect of dispersion is a little more subtle in these images than in the ones above.

Dispersion. Bloom.
No dispersion. Bloom.
Dispersion. No bloom.
No dispersion. No bloom.

At some point I'd like to develop a more robust spectral rendering framework, so I can more accurately render dispersion, as well as interference, diffraction, and other wavelength-dependent effects for which three wavelengths (R, G, and B) are insufficient.

Here are a few earlier dispersion renders:





Image Processing

Here are a few examples of some of the post-processing features I've built into Photorealizer. The differences are somewhat subtle in some of these images. I could have pumped up the filter settings to show the effects better, but I was mainly creating these for my own testing. To see the differences as clearly as possible, you'll probably want to click the images to open them in Lightbox and then switch back and forth between them. The material I used here is a slightly rough green glass using the microfacet model for transmission through rough surfaces. I limited the trace depth, so parts of the Lucy model aren't as full and saturated as they would be with higher quality settings, but the images are still useful for illustration and comparison.

Even newer transfer curve, plus lower exposure (added 1/17/2013).

Even newer transfer curve (added 1/17/2013). This curve does a better job of increasing the contrast of the perceptual midtones, and better maintains the overall luminosity of the image.

My new sinusoidal contrast curve, maxed out.

New contrast curve, plus bloom. Bloom can make the blown-out brights look brighter and can make renders look a little more realistic; however, the bloom is probably too extreme in this case, and there's room for improvement in other ways as well.

My new sinusoidal contrast curve, with a wet/dry mix setting that mixes the curve with a straight line (see the code sketch at the end of this section). Unprocessed linear images can look too flat when displayed on a computer screen, much like an unprocessed raw image from a digital camera.

My old contrast curve with circular roll-off. Darkens darks, lightens lights, and slightly increases contrast.

Just gamma corrected. Display-linear.

Not gamma corrected.

My old contrast curve.

No contrast curve (just gamma corrected).

Graph of my old contrast curve (rendered in Photorealizer, overriding ray tracing).
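To make the wet/dry mix idea above concrete, here's a rough sketch of how a sinusoidal contrast curve, a wet/dry mix, and gamma correction can fit together. The specific curve here (0.5 − 0.5·cos(πx)) is just a stand-in with the right general shape, not necessarily the exact curve used for these images:

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// A sinusoidal S-curve on [0, 1]: darkens darks, lightens lights.
double contrastCurve(double x) {
    return 0.5 - 0.5 * std::cos(kPi * x);
}

// Wet/dry mix between the curve and the identity line, then gamma
// correction for display (a simple power curve rather than the exact
// piecewise sRGB transfer function).
double postProcess(double linearValue, double wetDryMix, double gamma = 2.2) {
    double x = std::fmin(std::fmax(linearValue, 0.0), 1.0);  // clamp to [0, 1]
    double contrasted = wetDryMix * contrastCurve(x) + (1.0 - wetDryMix) * x;
    return std::pow(contrasted, 1.0 / gamma);
}
```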

Wednesday, May 9, 2012

Rough Transmission Update

I rendered a few high quality images using the microfacet model for transmission that I implemented recently. First, here's a smooth glass version for comparison:

Smooth glass.

Next, here's an image showing ground glass using the Beckmann distribution:

Ground glass using the Beckmann microfacet distribution.

And finally, here's an image showing ground glass using the GGX distribution:

Ground glass using the GGX microfacet distribution.

I do like the look of the GGX distribution, but it has a few issues. Although it looks very good overall, it's somewhat dark, because the GGX distribution has more shadowing and masking at this high roughness.

Another issue is that there are some fireflies (really bright spots), which occur only with GGX, not with Beckmann. I have a feeling the fireflies could be due to very large sample weights that occur in rare cases, but I haven't looked into it deeply yet.

The paper ("Microfacet Models for Refraction through Rough Surfaces") mentions that sample weights can get huge at grazing angles (when i is close to perpendicular to n). They suggest widening the distribution width at grazing angles and provide math that works well for the Beckman distribution, which I'm currently also using for GGX, but I might need to tweak that math for GGX.

It also looks like the sample weight would get very large if the sampled microsurface normal is close to perpendicular to the macrosurface normal, and that seems to be much more likely with the GGX distribution because of its wider tails. So I should look into that as well.
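To make the sampling issue concrete, here's a sketch of GGX microfacet-normal sampling and the corresponding sample weight, following my reading of the paper (the local-frame convention and names are mine):

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sample a microfacet normal m from the GGX distribution, in the local
// frame where the macrosurface normal n is the z-axis. (For sampling at
// grazing angles, the paper suggests widening the roughness, e.g.
// alpha' = (1.2 - 0.2 * sqrt(|i.n|)) * alpha.)
Vec3 sampleGGXNormal(double alpha, double xi1, double xi2) {
    double tanTheta = alpha * std::sqrt(xi1) / std::sqrt(1.0 - xi1);
    double cosTheta = 1.0 / std::sqrt(1.0 + tanTheta * tanTheta);
    double sinTheta = tanTheta * cosTheta;
    double phi = 2.0 * kPi * xi2;
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}

// Smith shadowing-masking term G1 for GGX. v is a direction, m the
// sampled microfacet normal.
double smithG1GGX(const Vec3& v, const Vec3& m, double alpha) {
    if (dot(v, m) * v.z <= 0.0) return 0.0;  // sidedness check
    double cosTheta2 = v.z * v.z;
    double tanTheta2 = (1.0 - cosTheta2) / cosTheta2;
    return 2.0 / (1.0 + std::sqrt(1.0 + alpha * alpha * tanTheta2));
}

// Weight for a sample generated this way. This is where fireflies can come
// from: the |i.m| / (|i.n| |m.n|) factor blows up when m is nearly
// perpendicular to n, which GGX's wide tails make far more likely than
// Beckmann's.
double sampleWeight(const Vec3& i, const Vec3& o, const Vec3& m, double alpha) {
    double g = smithG1GGX(i, m, alpha) * smithG1GGX(o, m, alpha);
    return std::fabs(dot(i, m)) * g / (std::fabs(i.z) * std::fabs(m.z));
}
```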

Tuesday, May 8, 2012

Diffuse Fresnel Reflectance

*** Update: After Anders left his comment suggesting that I might be using an inverted IOR, and that total internal reflection could account for the unexpectedly high results, I looked into this again more carefully. It seems that my relative IOR is in the same form as in the papers, but it turns out that I was indeed misinterpreting the meaning of Fdr: Fdr is actually the average reflectance of light incident on the inside of the surface of the subsurface scattering medium, not the outside as I had assumed. Previously, I had jumped to some conclusions without doing quite enough investigation and testing. This time I looked more carefully at the graphs of the relevant functions. Then I looked at the paper that the Fdr approximation originally comes from, Egan et al. 1973, which calls the value in question (the value denoted Fdr in the new sources) the "internal diffuse surface reflectance", which to me is a much more descriptive name than anything I saw in the new sources, all thanks to the word "internal". Egan et al. 1973 also gives a different formula for the "external diffuse surface reflectance", which is what I had thought Fdr was supposed to be! Had I looked at the Egan et al. 1973 paper before, the descriptive and accurate names might have instantly given away the issue I was having; sometimes a few well-chosen words can make a concept much more clear. Finally, after making sense of things, I ran more tests, comparing formula results to simulated values. This new interpretation of Fdr matches my simulation results much more closely over a much wider range of values. I haven't yet figured out how this new interpretation makes sense in context, but at least the Fdr and Fdt formulas make sense now. I'll post more details about this stuff once I look into it some more. ***



I've been working a lot on my multiple scattering approximation, and just yesterday, as I mentioned in the previous post, I solved a significant Fresnel-related issue, one which stems from what appears to be an error or oversight in multiple sources (unless I'm totally missing something).

First some context. The resources I've been referring to in implementing point-based diffusion-based multiple scattering, including the Jensen papers I've used and the PBR book, multiply each irradiance sample (or equivalently, the summed radiant exitance at the look-up point) by the diffuse Fresnel transmission to take Fresnel reflection into account. That much makes sense.

To do that, they provide approximations for the diffuse Fresnel reflectance (Fdr), i.e. the Fresnel reflectance integrated over the hemisphere, and then they take 1 − Fdr to find the diffuse Fresnel transmittance Fdt. However, I've noticed that doing this makes objects appear unnaturally dark. A while back I checked Fdr for a number of different IORs and noticed that it seemed far too large (around 47% and 66% for water and glass respectively), but I didn't think much of it at the time. Then the other day I was watching a Richard Feynman video, and he mentioned that the total reflectances of water and glass are around 5% and 10% respectively, which got me thinking again. So today I computed an actual integral of Fresnel reflectance for various IORs (using Monte Carlo integration, and selecting samples from the cosine distribution), and got values similar to what Richard Feynman had mentioned.
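Here's a minimal sketch of that kind of check: Monte Carlo integration of the dielectric Fresnel reflectance with cosine-distributed samples. This is my reconstruction of the experiment, not the exact code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

// Fresnel reflectance for unpolarized light hitting a dielectric boundary
// from outside, where eta = n_transmitted / n_incident (e.g. ~1.33 for
// water, ~1.5 for glass).
double fresnelReflectance(double cosThetaI, double eta) {
    double sinThetaI = std::sqrt(std::max(0.0, 1.0 - cosThetaI * cosThetaI));
    double sinThetaT = sinThetaI / eta;            // Snell's law
    if (sinThetaT >= 1.0) return 1.0;              // total internal reflection
    double cosThetaT = std::sqrt(1.0 - sinThetaT * sinThetaT);
    double rs = (cosThetaI - eta * cosThetaT) / (cosThetaI + eta * cosThetaT);
    double rp = (eta * cosThetaI - cosThetaT) / (eta * cosThetaI + cosThetaT);
    return 0.5 * (rs * rs + rp * rp);
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    const int numSamples = 1000000;
    for (double eta : { 1.33, 1.5 }) {             // water, glass
        double sum = 0.0;
        for (int i = 0; i < numSamples; ++i) {
            // Cosine-distributed incident angle: cosTheta = sqrt(xi). With a
            // cosine-weighted pdf, the plain sample average directly
            // estimates the cosine-weighted average reflectance.
            double cosTheta = std::sqrt(uniform(rng));
            sum += fresnelReflectance(cosTheta, eta);
        }
        std::printf("eta = %.2f: average reflectance = %.3f\n", eta, sum / numSamples);
    }
    return 0;
}
```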

Then it finally hit me that Fdr is a hemispherical integral, so it's not the average Fresnel reflectance as I had assumed, but rather the average Fresnel reflectance times 2π, which I quickly confirmed. So you can't just subtract Fdr from 1 to get Fdt, as the papers and book do—you need to divide it by 2π first!

And if instead we were to subtract Fdr from 2π to find Fdt, then Fdt would be the average transmittance times 2π, so we couldn't just multiply it by the irradiance to find the transmitted fraction of the irradiance, as they do in the papers and book, because the irradiance is already an integral over the hemisphere. In most cases we would actually increase the irradiance by doing that, which doesn't make any sense—we'd be creating energy out of nothing.

The term "reflectance" typically refers to the fraction of light reflected, expressed as a proportion between 0 and 1 (in photon terms it's literally a probability). From Wikipedia, reflectance (or reflectivity—the difference is subtle and is irrelevant in this case) is "commonly averaged over the reflected hemisphere to give the hemispherical spectral reflectivity". And the 2001 Jensen paper literally calls Fdr the "average diffuse Fresnel reflectance" which is incorrect and misleading.

I have noticed some other typos and ambiguities in the papers I've been referencing a lot, but I'm really not sure how or why this particular error was glossed over or neglected in multiple sources. It's a pretty significant error or oversight as far as I can tell. The papers provide explicit formulas for Fdr and the conversion to Fdt, but clearly never divide by 2π. And they say Fdr is the Fresnel reflectance integrated over the hemisphere, and show the integral for integrating over all 2π steradians of the hemisphere, but then they call Fdr the "reflectance", and treat it like it goes from 0 to 1. Pretty strange.

I hope this isn't coming across the wrong way—I definitely still think that these are excellent papers and an excellent book, even if these things turn out to be errors.

The papers also use Fdr in the boundary condition A in the diffusion approximation. Now I'm doubting whether that instance should be left in terms of 2π; I just realized that A could easily end up being negative if Fdr is left in terms of 2π.

In case this has held your interest this far and you're actually implementing this stuff yourself, it might also be worth mentioning that you should clamp Fdr between 0 and 1 (unless you actually perform the integral, which you would only have to do once per material), because the approximation formulas blow up at high IORs.
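Concretely, with the widely used rational approximation for Fdr from the Jensen et al. 2001 paper, the clamp might look something like this (a sketch):

```cpp
#include <algorithm>

// Rational approximation of the diffuse Fresnel reflectance Fdr as a
// function of the relative index of refraction eta (Jensen et al. 2001).
// The approximation blows up at high IORs, hence the clamp.
double diffuseFresnelReflectance(double eta) {
    double fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
    return std::min(1.0, std::max(0.0, fdr));  // clamp to [0, 1]
}
```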

Subsurface Scattering Update

I just finished rendering a very large and high quality image with Monte Carlo subsurface scattering. This model, the Lucy model, shows off the effects of subsurface scattering better than the bunny, because it has a lot more variation in shape and thickness. For this render I went for the look of an exotic blue stone: I made it moderately backwards scattering, and gave it a pretty high index of refraction:

Monte Carlo subsurface scattering. Click to enlarge (actual size is 800x1200).

Now, I'm using this as a reference for my approximate multiple scattering implementation. I've been testing my implementation, cross-referencing the math with other sources besides the Jensen 2002 paper (including the PBR book, other Jensen papers, and the Quantized Diffusion paper), and adding new features. I've corrected a few issues and handled a few corner cases, and now, after a lot of iterations (which I may post some images of later), it's working very well overall. Here's the current state of my hierarchical point cloud dipole diffusion-based multiple scattering:

Dipole diffusion approximation to multiple scattering.

As you can see, it's pretty similar to the Monte Carlo reference. Some differences are expected because it's an approximation, and because it assumes a semi-infinite homogeneous slab of material, which is clearly not the case in this render (the bunny is a little better in this regard). I do question whether the hue difference is larger than it should be; however, I believe this might be primarily due to the shortcomings of photon diffusion theory, which I'll elaborate on later in the post.

One notable source of inaccuracy in the Jensen 2002 paper (and many other sources) is that they use classical diffusion and derive certain equations ad hoc, without maximum physical accuracy. I'm currently working on mitigating some of these inaccuracies by implementing a few of the state-of-the-art improvements presented in the Quantized Diffusion paper, some of which were inspired by decades-old work in the field of neutron transport. (As the authors mention in their video, and as I've observed and read about myself, cross-pollination between fields is very important for innovation and creativity, and if we pay attention to other fields we won't have to spend as much time reinventing the wheel.) I've already implemented Grosjean's approximation for diffusion, as well as the improved boundary condition (A, the change in fluence rate due to internal reflectance) described in the paper.
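For context, here's a sketch of the classical dipole diffuse reflectance Rd(r) from the Jensen et al. 2001 paper that these improvements build on (one wavelength; the variable names are mine):

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Classical dipole diffuse reflectance Rd(r) for a single wavelength.
// sigmaA and sigmaSPrime are the absorption and reduced scattering
// coefficients, r is the distance along the surface from the point of
// illumination, and eta is the relative index of refraction.
double dipoleRd(double r, double sigmaA, double sigmaSPrime, double eta) {
    double sigmaTPrime = sigmaA + sigmaSPrime;       // reduced extinction
    double alphaPrime  = sigmaSPrime / sigmaTPrime;  // reduced albedo
    double sigmaTr = std::sqrt(3.0 * sigmaA * sigmaTPrime);  // effective transport coefficient

    // Classical boundary condition A, using the rational Fdr approximation
    // from the same paper (note that A goes negative if Fdr exceeds 1).
    double fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
    double A = (1.0 + fdr) / (1.0 - fdr);

    double zr = 1.0 / sigmaTPrime;           // depth of the real source
    double zv = zr * (1.0 + 4.0 / 3.0 * A);  // height of the virtual source
    double dr = std::sqrt(r * r + zr * zr);  // distance to the real source
    double dv = std::sqrt(r * r + zv * zv);  // distance to the virtual source

    return alphaPrime / (4.0 * kPi) *
           (zr * (sigmaTr * dr + 1.0) * std::exp(-sigmaTr * dr) / (dr * dr * dr) +
            zv * (sigmaTr * dv + 1.0) * std::exp(-sigmaTr * dv) / (dv * dv * dv));
}
```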

I also implemented the diffuse reflectance part of the BRDF approximation from the Jensen 2001 paper. When computing irradiance during the pre-pass, I use the BRDF approximation when a ray hits a subsurface scattering object (because the point cloud has not yet been generated; it's actively in the process of being generated). This makes the irradiance much more accurate. Here's an image rendered using the BRDF approximation instead of the diffusion approximation:

The diffuse reflectance part of the BRDF approximation (no subsurface scattering in this image). Notice that it's totally opaque. This is how the object looks to rays during the irradiance computation stage of the pre-pass.
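As I understand the 2001 paper, the diffuse reflectance term in question looks roughly like this (a sketch; the paper turns this into a BRDF by dividing by π and multiplying in the Fresnel terms):

```cpp
#include <cmath>

// Total diffuse reflectance approximation from Jensen et al. 2001, as a
// function of the reduced albedo alphaPrime and the boundary term A.
double totalDiffuseReflectance(double alphaPrime, double A) {
    double s = std::sqrt(3.0 * (1.0 - alphaPrime));
    return 0.5 * alphaPrime * (1.0 + std::exp(-4.0 / 3.0 * A * s)) * std::exp(-s);
}
```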

An important piece that I haven't added yet is the single scattering term. I did run a test using my Monte Carlo subsurface scattering with limited depth, and it seems that single scattering won't have much effect overall in this image. However, it will definitely improve the appearance of the very thin areas, such as the hand, the edges of the dress, and the bottom of the torch, which are dominated by single scattering, and which seem to be the most inaccurate areas in the diffusion render. Unlike the original formulation, Grosjean's approximation correctly decouples multiple and single scattering. I'll also add the single scattering term to the BRDF approximation.

While most of the differences in the images are expected, the hue discrepancy bothers me a little. However, it's likely that the hue discrepancy is to be expected, simply because of the limitations of the diffusion approximation, in particular the fact that diffusion theory assumes that the scattering coefficient is much higher than the absorption coefficient (i.e., a relatively high albedo) (see the first paragraph of the introduction here, and see here). This would imply that the blue channel is the most accurate, which seems to be the case when I compare the individual channels in Photoshop. The reason the hue shifts so much is likely that the R, G, and B wavelengths used in the image have very different absorption coefficients, so they are affected by the inaccuracies in the diffusion approximation to different degrees. From my testing so far, I don't think there are any bugs in my diffusion code, and I'm confident that my Monte Carlo subsurface scattering is working correctly (the Monte Carlo version is also more intuitive and easier to check just by looking at the code), but I'll want to look into both of these things more deeply just in case. (As an aside, it's very useful to implement things multiple ways and then be able to compare them and make sure the results are the same. I've done this with other things in Photorealizer, too. One example that comes to mind is direct vs. passive light sampling.)

Just yesterday, I solved a significant Fresnel-related issue related to the diffuse Fresnel reflectance term Fdr and corresponding transmittance term Fdt, which had resulted from problems in my sources. I'll elaborate on this more in my next post.

Also Fresnel-related, I spent some time trying to figure out how many Fresnel terms to include in the final irradiance. Some papers and sources multiply by two Fresnel terms, one for the incoming and one for the outgoing light; the Quantized Diffusion paper multiplies by two but then also adds a new normalization term to correct past papers; and one of the papers seems to include only one Fresnel term. Originally, multiplying by two Fresnel terms didn't make complete sense to me. Here's a thought experiment to see why: Imagine you have a glass of milk (or just water) that doesn't absorb any light at all. Then all of the light that goes into the milk—around 93% on average according to the Fresnel equations—would eventually come back out. On average, around 75% of the light that went in would come back out on the first internal bounce, then 75% of the remaining 25% on the second internal bounce, and so on (the escaping fractions form a geometric series: 0.75 + 0.75·0.25 + 0.75·0.25² + … = 0.75 / (1 − 0.25) = 1, so all of the light eventually escapes). Since there would be no absorption, what else could the light do other than eventually come back out? (Unless it got trapped in there somehow!) This would imply that the Fresnel transmittance only applies once, not twice! However, after reviewing some of the formulas, it looks like the boundary condition A in the diffusion approximation takes internal reflectance into account, so it seems that multiplying by two Fresnel terms might make sense after all; however, I haven't examined it closely enough to be certain about that yet.

While the hierarchy traversal and diffusion approximation evaluation are already quite fast algorithmically, there's still room for performance improvement, because I haven't optimized the code much yet (doing so earlier would have been premature optimization). But the slowest part is the irradiance computation stage of the pre-pass, which I've been performing using path tracing (with direct illumination and all of the other features of my path tracing core). It would be very fast with direct illumination only and no global illumination (which is all that some renderers do), but also less accurate. Also, the irradiance computation in the pre-pass is not multi-threaded, which makes it take even longer in wall-clock terms.

By the way, I made it so that I can generate separate point clouds per object (or not, if I choose). A point cloud can be associated with any container object in my scene graph, and is generated using all of the descendants of that node. I also made it work with instancing and transformations. This is a more flexible approach than many renderers take.
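In case it's useful, here's a hypothetical sketch of that arrangement (these are not my actual class names): any container node in the scene graph may own a point cloud built from its descendants.

```cpp
#include <memory>
#include <vector>

struct PointCloud { /* irradiance samples, hierarchy, ... */ };

struct SceneNode {
    std::vector<std::unique_ptr<SceneNode>> children;
    std::unique_ptr<PointCloud> pointCloud;  // null unless this node is a
                                             // subsurface scattering container
};
```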

Sometime soon, I plan on adding wavelength-dependent scattering coefficients and implementing some other useful features as well.

I also wanted to mention that the dipole diffusion approximation assumes isotropic scattering, so it uses the reduced scattering coefficients (σs′ = σs(1 − g), where g is the average cosine of the scattering angle). Because of this, a diffusion image would technically be more comparable to a reduced and isotropic Monte Carlo reference. So I rendered a version of the Monte Carlo render at the top of the post using reduced scattering coefficients and isotropic scattering. In this case the difference is fairly subtle, but that wouldn't always be the case. Here's the image:

This image is the same as the Monte Carlo subsurface scattering image at the top of the post, except this time using reduced scattering coefficients and isotropic scattering.

And finally, here's an earlier render with forward scattering and a lower index of refraction, giving it a more organic look:

Earlier Monte Carlo render with scattering properties that make it look a little more organic.

Saturday, May 5, 2012

Diffuse and Specular Reflection

A deeper understanding of the physics of light can help us write better renderers, and beyond rendering, can give us a deeper understanding and appreciation of the real world. In this post, I'll talk about the nature of diffuse and specular reflection, and address some potential misconceptions.

An illustration of diffuse and specular reflection.
(The diffuse part appears round instead of semicircular,
because the image shows radiant intensity, not radiance.)
Image by GianniG46, found on Wikimedia Commons.

Diffuse reflection (at least the type we talk about in computer graphics) is not, as some sources would lead you to believe, reflection from a rough surface. It's not a special case of specular reflection, but rather a separate phenomenon that can coexist with specular reflection. There isn't a continuum between diffuse and specular reflection; as an example, no matter how smooth you make a block of marble (or granite, plastic, ceramic, paint, etc.), it will never turn into a perfect mirror [1]. Specular reflection is surface reflection, and diffuse reflection can be thought of as body reflection [2]. The primary mechanism of diffuse reflection is actually subsurface scattering and absorption [1]. Rough surfaces can cause a sort of diffuse reflection, but we usually classify that and any other surface reflection as specular reflection [3][4].

A close-up illustration of diffuse reflection.
Image by GianniG46, found on Wikimedia Commons.

All dielectric materials (insulators / non-metals) are translucent and exhibit subsurface scattering and absorption to some extent [5]. Light enters a material, is scattered around and partially absorbed below the surface, and comes back out somewhere else at a different angle [6]. As the light propagates and scatters through the medium, certain wavelengths are absorbed more strongly than others, giving the object its color (e.g., the yellow of a banana or the green of grass). Absorption is a subtractive process, just like the way ink makes paper appear a certain color by absorbing and subtracting certain wavelengths of light. Different wavelengths are scattered differently as well, which also contributes to the appearance of the material. We can approximate the result of this subsurface scattering with a BRDF like the Lambertian reflection model, if we assume that light enters and exits the material at the same point, and exits in a random direction. However, when this is not the case, we need to use a BSSRDF or otherwise simulate subsurface scattering to make the object look realistic. Either way, this diffuse reflection / subsurface scattering and absorption is the primary means by which we see objects in the world [1].

Dielectrics also exhibit specular reflection from their surfaces based on their refractive indices. When light strikes an interface between dielectrics with different indices of refraction, some of the light is reflected and some of the light is transmitted. The proportion of light reflected or transmitted depends on the incident angle and the relative index of refraction, and can be predicted by the Fresnel equations. (The Fresnel equations also predict phase shift and polarization, but these things are almost always ignored in computer graphics, because in common cases they don't have much effect on the appearance of the render.) The amount of diffuse reflection is actually dependent on the amount of specular reflection; i.e., only the light that is not reflected from the surface of the object is transmitted and is available for subsurface scattering (and thus diffuse reflection). Renderers that simply add diffuse and specular contributions will not conserve energy, especially at grazing angles where surface reflectivity approaches 100% (try viewing a flat object from a grazing angle and you'll notice that it practically turns into a mirror). And renderers that try to overcome the problem using global diffuse and specular multipliers will never be able to achieve diffuse or specular of full brightness and saturation.
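As an illustration, here's a sketch of an energy-conserving way to weight the two contributions, using Schlick's Fresnel approximation [9] (a simplified scalar version; a real renderer would handle this per wavelength and per BRDF lobe):

```cpp
#include <cmath>

// Schlick's approximation to the dielectric Fresnel reflectance, where n1
// and n2 are the refractive indices on either side of the interface and
// cosTheta is the cosine of the incident angle.
double schlickFresnel(double cosTheta, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}

// Energy-conserving split at a shading point: the specular lobe gets F and
// the diffuse (subsurface) lobe gets 1 - F, instead of adding independently
// scaled diffuse and specular terms.
void fresnelWeights(double cosTheta, double n1, double n2,
                    double& specularWeight, double& diffuseWeight) {
    double f = schlickFresnel(cosTheta, n1, n2);
    specularWeight = f;
    diffuseWeight  = 1.0 - f;
}
```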

Metals (conductors) behave differently than dielectrics in a number of ways. They do not transmit light, and do not exhibit subsurface scattering [7]. The characteristic color of copper or gold is not due to diffuse reflection, but rather certain wavelengths being reflected more strongly from the surface than others. For non-metals, the color of specular reflection is generally the same as the color of the illumination (at least within the visible spectrum), but for metals this is not necessarily true.

Spectral reflectance curves of three different metals.
Image from Wikimedia Commons.

Some sources will use the term specular reflection to refer specifically to ideal (perfectly flat mirror) specular reflection from a smooth surface. In computer graphics, however, specular reflection can generally be from a smooth or a rough surface. Sometimes we also use the term glossy reflection to refer to specular reflection from a rough surface. We simulate ideal specular reflection using the basic angle-of-incidence-equals-angle-of-reflection law, and we simulate specular reflection from rough surfaces using a microfacet model, where the surface is composed of infinitesimally small, ideally specular facets.
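In code, the ideal reflection law is a one-liner (a standard sketch, where d is the incident direction pointing toward the surface and n is the unit surface normal):

```cpp
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Ideal specular reflection: angle of incidence equals angle of reflection.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    double k = 2.0 * dot(d, n);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}
```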

Not only is diffuse reflection not caused by surface roughness (although surface roughness certainly results in more diffuse specular reflection, in the general sense of the word), but the Lambertian reflection model that is prevalent in computer graphics specifically assumes a perfectly smooth surface. Many real-life things—such as the moon, concrete, or plaster—don't actually exhibit Lambertian reflectance at all, because they have rough surfaces. Unlike Lambertian (smooth diffuse) surfaces, which appear the same from any viewing angle, the appearance of rough diffuse surfaces varies based on viewing angle (it's directional). We can model this appearance analogously to how we model rough specular reflection, in this case using a microfacet model where each microfacet is a Lambertian reflector. The Oren-Nayar BRDF model does exactly this [2][8].
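Here's a sketch of the qualitative Oren-Nayar model from the paper [2] (σ is the standard deviation of the microfacet slope angle, in radians; with σ = 0 it reduces to the Lambertian model):

```cpp
#include <algorithm>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Qualitative Oren-Nayar BRDF. thetaI and thetaR are the incident and
// reflected polar angles, deltaPhi is the azimuthal angle between the
// incident and reflected directions, and sigma is the roughness.
double orenNayar(double albedo, double sigma,
                 double thetaI, double thetaR, double deltaPhi) {
    double s2 = sigma * sigma;
    double A = 1.0 - 0.5 * s2 / (s2 + 0.33);
    double B = 0.45 * s2 / (s2 + 0.09);
    double alpha = std::max(thetaI, thetaR);
    double beta  = std::min(thetaI, thetaR);
    return (albedo / kPi) *
           (A + B * std::max(0.0, std::cos(deltaPhi)) *
                    std::sin(alpha) * std::tan(beta));
}
```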

Things aren't quite as simple in the real world as I've described here (or maybe they're simpler, depending on how you look at it), but the points above should nonetheless be useful for creating more realistic renders [9]. Most of ray tracing deals with geometric optics, but in reality light is electromagnetic radiation with many strange and mysterious properties, such as its dual wave–particle nature and the fact that it's composed of photons that behave probabilistically [10]. A more complete understanding of how it works requires an understanding of quantum mechanics, relativity, etc., which are very low-level, fundamental things that are very unintuitive for humans, because they can't necessarily be explained in terms of things we're familiar with first-hand, things from our everyday lives. And nobody fully understands why these things work the way they do. Luckily, approximations are good enough for computer graphics and many other applications. To truly simulate all effects of light accurately, you would basically have to program the universe.

Things are not always as they seem.
(Shown here, a Feynman diagram, used in quantum mechanics.)
Image from Wikimedia Commons.



References and Further Reading:
[1] Wikipedia: Diffuse reflection
[2] Generalization of Lambert's Reflectance Model (the Oren-Nayar paper)
[3] Wikipedia: Scattering 
[4] A Comprehensive Physical Model for Light Reflection
[5] A Quantized-Diffusion Model for Rendering Translucent Materials
[6] Photon Path Distribution in Inhomogeneous Scattering Media (starting at page 30)
[7] A Microfacet-based BRDF Generator (metals; end of section 5.1 mainly)
[8] Oren-Nayar reflectance model
[9] An Inexpensive BRDF Model for Physically-based Rendering (Schlick's 1994 paper)
[10] Richard Feynman lectures on quantum electrodynamics (QED) (excellent videos; there are also lots of great Richard Feynman videos on Youtube)