Welcome to Thamine & Steven's CS283 Project 2




100x100 (10,000 samples per pixel) render of a refractive dragon. Look closely at the purple box and notice the difference between viewing it directly and viewing it through the dragon.

Introduction:

Welcome to our CS283 Project 2: "Global Illumination & Path Tracing".

How we implemented Path Tracing:

Acceleration Methods for Ray Tracing:
-Axis-Aligned Bounding Box (AABB) bounding volume hierarchy
-OpenMP parallelization (see the sketch after this list)
(These two are concepts from 184)
-Russian Roulette
-Interreflection speedup
-Monte Carlo sampling
(These three are 283 concepts)
*Note* we also implemented direct light sampling as one of the features that can be included in our raytracer.
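
Since every pixel is independent, the OpenMP parallelization is essentially one pragma on the outer pixel loop. A minimal sketch (the names image, width, height, and tracePixel are illustrative placeholders, not our actual code):

    // Hypothetical outer loop: every pixel is independent, so OpenMP can split
    // the rows across threads with a single pragma.
    #pragma omp parallel for schedule(dynamic)
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            image[y][x] = tracePixel(x, y);   // trace all samples for this pixel
        }
    }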

Russian Roulette:
-If a ray bounces more than maxdepth times, there is a probability that it will be terminated at the next intersection.
-We implement this by using the weighted color properties (diffuse/specular) as the survival probabilities for rays bouncing beyond maxdepth (a rough sketch is shown below).
-Otherwise, recursion stops when the ray leaves the scene or hits a light source.
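
A rough sketch of that Russian roulette step inside the recursive trace call (maxComponent, rand01, and the color/weight names are illustrative placeholders, not our exact code):

    // After maxdepth, survive with a probability taken from the material's
    // diffuse + specular color weight, and reweight survivors to stay unbiased.
    if (depth > maxdepth) {
        double survivalProb = std::min(1.0, maxComponent(diffuseColor + specularColor));
        if (rand01() > survivalProb)
            return Color(0, 0, 0);      // terminate this path
        weight /= survivalProb;         // weight by the inverse of the probability
    }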

Monte Carlo Sampling:
-We use probabilities to choose whether to compute direct lighting or indirect lighting.
-Using the color properties, we weight each ray by the inverse of its probability.
-This allows for dynamic Monte Carlo sampling depending on the material properties.
-We also use Monte Carlo sampling to separate the computation for reflected and refracted rays beyond maxdepth.
-We also use Monte Carlo sampling to separate the computation for objects with both diffuse and specular properties (see the sketch after this list).
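
A sketch of the diffuse/specular split (illustrative names; the reflected/refracted split beyond maxdepth works the same way):

    // Pick the diffuse or specular branch with probability proportional to its
    // color weight, then divide by that probability so the estimate stays unbiased.
    double wd = maxComponent(diffuseColor);
    double ws = maxComponent(specularColor);
    double pDiffuse = wd / (wd + ws);
    if (rand01() < pDiffuse)
        return diffuseColor * traceDiffuse(hit, depth + 1) / pDiffuse;
    else
        return specularColor * traceSpecular(hit, depth + 1) / (1.0 - pDiffuse);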

Interreflection speedup:
-We keep track of the materials previously visited by a given ray by maintaining their combined color as a vector.
-combined color = color of material 1 x color of material 2 x ... x color of material N, i.e. the component-wise product of all material colors visited so far.
-If the combined color is (0, 0, 0), we immediately kill the ray. For example, if a ray bounces off a red surface (1, 0, 0) and then a green surface (0, 1, 0), the combined color is (0, 0, 0), so we terminate the recursion: whatever color we would obtain afterwards gets multiplied by a zero vector anyway. This gave us an approximate 5-10x speedup on images where surfaces didn't share color channels (a sketch is shown below).
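
In code, the early out looks roughly like this (combinedSoFar and hit.materialColor are illustrative names):

    // Component-wise product of all material colors the ray has visited so far.
    Color combined = combinedSoFar * hit.materialColor;
    if (combined.r == 0.0 && combined.g == 0.0 && combined.b == 0.0)
        return Color(0, 0, 0);   // any light found later would be multiplied by zero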

Basic algorithm features:
-Ideal diffuse reflection (Lambertian)
-Ideal specular reflection, based on the object's shininess
-Refraction & reflection
-Schlick's approximation for the reflection/refraction coefficients (see the sketch after this list)
-Beer's law: exponential attenuation of light inside a refractive object
-Refraction for any closed mesh
-Also implemented in the cube primitive
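
A minimal sketch of the Schlick and Beer's law pieces (n1/n2 are the indices of refraction on either side of the surface, cosTheta is the cosine of the incident angle, and absorb and d are a hypothetical per-channel absorption coefficient and the distance traveled inside the object):

    // Schlick's approximation: reflect with probability R, refract otherwise.
    double r0 = pow((n1 - n2) / (n1 + n2), 2.0);
    double R  = r0 + (1.0 - r0) * pow(1.0 - cosTheta, 5.0);

    // Beer's law: light traveling a distance d inside the object is attenuated
    // exponentially in each color channel.
    Color attenuation(exp(-absorb.r * d), exp(-absorb.g * d), exp(-absorb.b * d));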

Diffuse:
-Hemisphere sampling
    -construct a coordinate frame where the surface normal (n) is the z-axis and u, v are the two perpendicular axes
    -we generate a ray as: ray = sin(theta)sin(phi)*u + sin(theta)cos(phi)*v + cos(theta)*n, where cos(theta) is a random number in [0, 1] and phi is a random number in [0, 2pi]
    -weight the reflected ray by cos(theta) (see the sketch after this list)
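
A sketch of that diffuse bounce ((u, v, n) is the local frame from above and rand01() is a uniform sample in [0, 1]; trace, Ray, and hitPoint are illustrative names):

    double cosTheta = rand01();
    double sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    double phi      = 2.0 * M_PI * rand01();
    Vec3 dir = sinTheta * sin(phi) * u
             + sinTheta * cos(phi) * v
             + cosTheta * n;
    Color bounce = cosTheta * trace(Ray(hitPoint, dir), depth + 1);  // weight by cos(theta)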

Specular:
-Hemisphere sampling, except we impose a cutoff based on shininess.
    -construct a coordinate frame where the ideal reflected direction is the z-axis
    -the only change is cos(theta) = pow(1 - rand[0,1], 1/(1 + shininess)); as shininess -> infinity, cos(theta) -> 1, so the shinier the object, the closer the sampled ray stays to the ideal reflected direction (see the sketch after this list)
-Reflection is just ideal specular.
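
The glossy bounce is the same sketch as the diffuse one with two changes: the frame is built around the ideal reflected direction r, and cos(theta) is pulled toward 1 by the shininess exponent:

    double cosTheta = pow(1.0 - rand01(), 1.0 / (1.0 + shininess));
    double sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    double phi      = 2.0 * M_PI * rand01();
    Vec3 dir = sinTheta * sin(phi) * u
             + sinTheta * cos(phi) * v
             + cosTheta * r;          // r = ideal reflected direction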

RayTracer effects:

-Soft shadows
-Caustics
-Anti-aliasing
    -Sampled rays randomly across pixels
    -No jaggies
-Interreflection / color bleeding from diffuse and specular reflections
-Direct light sampling
-The BRDF is incorporated into the models for specular and diffuse reflections: Blinn-Phong (half-angle) for specular and Lambertian hemisphere sampling for diffuse (see the sketch below).
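
For reference, the Blinn-Phong half-angle term we evaluate for the specular BRDF looks roughly like this (l is the direction to the light, v the direction to the viewer, n the surface normal; names are illustrative):

    Vec3   h    = normalize(l + v);                           // half-angle vector
    double spec = pow(std::max(0.0, dot(n, h)), shininess);   // sharper as shininess grows
    Color  specularTerm = specularColor * spec;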

Examples:  

Now to explain some of the more advanced scenes. The scene6ver2 scene consists of diffuse walls that are red, green, magenta, and white. There are two reflecting spheres in the scene with specular color set to (0.7, 0.5, 0.2). The shininess is set to a low 30, so the reflections are blurry. Additionally, there is a pink diffuse cube in the back left and an orange-gold diffuse cube in the front right.
Renders: 5x5 (25 samples per pixel), 15x15 (225 samples per pixel), 30x30 (900 samples per pixel), 70x70 (4,900 samples per pixel)

This next scene differs from the previous one because we have added a refractive sphere with an index of refraction of 1.5. We see a caustic form below the sphere. Additionally, the cube in the front right was modified to reflect rays specularly with an orange-gold color. We also increased the sharpness of the reflections by setting the shininess to 500.
Renders: 15x15 (225 samples per pixel), 100x100 (10,000 samples per pixel)

50x50 (2,500 samples per pixel) render of a refractive torus from the .off file in Project 1

 

Differences between RayTracer and PathTracer: 

A normal raytracer would produce the image below (with a point light source instead of a planar one). Our pathtracer produces more realistic images, as shown above. We also see that the pathtracer includes soft shadows and anti-aliasing compared to the raytraced image.


 

Things we ran into:

We had very, very long rendering times. A simple 5x5 subdivision per pixel (25 samples per pixel) took around 100 seconds. The 15x15 subdivision per pixel took about 10-20 minutes, depending on the scene, and when we ran things overnight, our 70x70, 100x100, and 150x150 subdivisions per pixel took roughly 4 hours, 6 hours, and 12 hours, respectively. This made testing incredibly difficult, because a noisy 5x5 image can't give us as much feedback (telling us whether or not our code was correct) as a 225-samples-per-pixel render, which meant we had a lot of idle time where we just couldn't really test or check anything. The worst was when we ran a 70x70 overnight and the end result was not what we expected (after we had first tested at 15x15). That was the most disheartening part. We were also running on two four-year-old laptops, which made the run times even slower than they should have been, even after we pragma omp'd the whole thing.

Overall: 

The project wasn't bad. Pretty fun project overall. We can see how these types of images look just like real photographs if rendered at a high enough samples-per-pixel count, but our computers would have needed two days of running straight to render a truly epic picture.