Monday, April 25, 2016

Level of noise in unstratified renderers

When you get noise in a renderer, a key question, often hard to answer, is: is it a bug or just normal outliers?   With an unstratified renderer, which I often favor, the math is more straightforward.   Don Mitchell has a nice paper on the convergence rates of stratified sampling, which are better than the inverse-square-root rate of unstratified sampling.

In a brute force ray tracer it is often true that a ray either gets the color of the light L, or a zero because it is terminated by Russian roulette.   Because we average the N samples, the actual computation looks something like:

Color = (0 + 0 + 0 + L + 0 + 0 + 0 + 0 + L + .... + 0 + L + 0 + 0) / N

Note that this assumes Russian roulette rather than downweighting.   With downweighting there are more non-zero samples, and they are things like R*R'*L.   Also note this assumes Color is a float, so pretend it's a grey scene or think of just one component of RGB.

The expected color is just pL where p is the probability of hitting the light.    There will be noise because sometimes luck makes you miss the light a lot or hit it a lot.
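
To make that concrete, here is a minimal Python sketch of the estimator.   The values of p, L, and N are made up just for illustration:

import random

L = 10.0   # light color (grey scene, so a single float)
p = 0.1    # probability that a path reaches the light
N = 1000   # number of rays averaged for the pixel

# each sample is L if the path hits the light, 0 if Russian roulette kills it
samples = [L if random.random() < p else 0.0 for _ in range(N)]
Color = sum(samples) / N
print(Color)   # should land near the expected value p*L = 1.0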

The standard statistical measure of error is variance.    This is the average squared error.   Variance is used partially because it is meaningful in some important ways, but largely because it has a great math property:

The variance of a sum of two independent random quantities is the sum of the variances of the individual quantities
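
That property (it needs independence) is easy to check numerically.   Here is a throwaway Python sketch with two arbitrary distributions:

import random

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m)**2 for x in xs) / len(xs)

n = 200000
a = [random.uniform(0.0, 1.0) for _ in range(n)]   # two independent random quantities
b = [random.expovariate(2.0) for _ in range(n)]
s = [x + y for x, y in zip(a, b)]

print(var(s))           # variance of the sum
print(var(a) + var(b))  # sum of the variances -- nearly the same number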

We will get to what makes a good intuitive error measure later.   For now let's look at the variance of our "zero or L" renderer, starting with a single ray's sample (call it Sample, which is either 0 or L).   For that we can use the definition of variance:

the expected (average) value of the squared deviation from the mean 

Or in math notation (where the average or expected value of a variable X is E(X):

variance(Sample) =  E[ (Sample - E(Sample))^2 ]

That is mildly awkward to compute directly, so we can use the commonly used and super convenient variance identity:

variance(X) = E(X^2) - (E(X))^2

We know E(Sample) =  pL.    We also know that E(Sample^2) = pL^2, because Sample^2 is L^2 with probability p and zero otherwise, so:

variance(Sample) =  pL^2 - (pL)^2 = p(1-p)L^2
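
A quick numerical check of that formula, again with made-up values for p and L:

import random

p, L = 0.1, 10.0
n = 1000000   # number of single-ray samples in the test
xs = [L if random.random() < p else 0.0 for _ in range(n)]

mean  = sum(xs) / n
mean2 = sum(x*x for x in xs) / n
print(mean2 - mean*mean)   # empirical E(Sample^2) - (E(Sample))^2
print(p*(1 - p)*L*L)       # predicted p(1-p)L^2 = 9.0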

So what is the variance of N samples (N is the number of rays we average)?

First, the sum of N of these independent, identically distributed samples has a variance that is just the sum of the individual variances:

variance(Sum) = Np(1-p)L^2

But we don't sum the colors of the individual rays-- we average them by dividing by N.   Because variance is about the square of the error, we can use the identity:

variance(X / constant) = variance(X) / constant^2

So for our actual estimate of pixel color we get:

variance(Color) =   (p(1-p)L^2) / N
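
We can check that 1/N behavior directly by estimating the variance of the N-sample average over many trials (a sketch; the trial count is arbitrary):

import random

def pixel_estimate(p, L, N):
    # average of N single-ray samples, each either L or 0
    hits = sum(1 for _ in range(N) if random.random() < p)
    return hits * L / N

p, L = 0.1, 10.0
trials = 10000
for N in (10, 100, 1000):
    est = [pixel_estimate(p, L, N) for _ in range(trials)]
    m = sum(est) / trials
    v = sum((e - m)**2 for e in est) / trials
    print(N, v, p*(1 - p)*L*L / N)   # empirical vs predicted variance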

That variance formula gives us the expected squared error of our pixel estimate.   But humans are more sensitive to contrast, and we can get closer to that with the relative square-root-of-variance.   Trying to get closer to intuitive absolute error is common in many fields, and the square-root-of-variance is called the standard deviation.   It is not exactly the expected absolute error, but it is close enough and much easier to calculate.   Let's divide the standard deviation by E(Color) to get our approximation to relative error:

relative_error(Color) is approximately   Q = sqrt( (p(1-p)L^2) / N ) / (pL)

We can do a little algebra to get:

Q = sqrt( (p(1-p)L^2) / (p^2 L^2 N) ) = sqrt( (1-p) / (pN) )

If we assume a bright light, then p is small (a modest pixel brightness pL comes from rare hits on a large L), and

Q is approximately sqrt(1/(pN))

So the perceived error for a given N (N is the same across a given image) ought to be approximately proportional to the inverse square root of p, and thus of pixel brightness, so we ought to see more noise in the darks.
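
Plugging in some numbers shows the effect (hypothetical p values; note that L drops out of Q entirely):

from math import sqrt

N = 100
for p in (0.5, 0.1, 0.01):           # bright, medium, and dark pixels
    q_exact  = sqrt((1 - p) / (p * N))
    q_approx = sqrt(1.0 / (p * N))
    print(p, q_exact, q_approx)      # relative error grows as p shrinks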

If we look at an almost converged brute force Cornell box, we'd expect the dark areas to look a bit noisier than the bright ones.   Maybe we do.   What do you think?

3 comments:

Genius Lvl said...

I have a ray tracer with noise on the edges of primitives. It's dense vegetation, and I'm using a form of bi-directional path tracing. It's a weird form, actually closer to radiosity, where I pick sample points on each primitive in the vegetation, bi-directionally path trace with a directional light, and store the result on the primitive structure. Later, I render the scene with a ray caster (actually the same ray tracer but with depth 1). I see noise around the edges of leaves, blacks and dark greys. Any clues ring a bell? No black spots on the interior of the leaves.

Peter Shirley said...

Hmmm... I have actually never written a bi-directional path tracer (not many people have so props) so my intuition is poor.

In MC in general the first question is whether it is a bug or whether the importance sampling is just not that good. Run it hard core to convergence and see if it goes away.

Genius Lvl said...

Wow, how did you know? <--Silly question I guess. I actually just ran it with more samples per pixel in the ray caster part. Works great. Anyone programming in Embree should know that pixel samples are allowed to fall slightly outside of a pixel. So if you have one sample per pixel expect some rough edges.