In today's post I am going to walk through a ray tracing algorithm that, in 126 lines of code, is able to generate this picture:
The code can be found here and was created by user rossant.
I will assume the audience understands the basic motivation behind the ray tracing algorithm, which is well explained on this Wikipedia page.
First, I will establish some of the important variables used in this code. These variables are redefined many times throughout, but they always represent the same concepts.
ToL: a unit vector which points toward the light source in the frame.
ToO: a unit vector which points toward the observer in the frame.
RayO, or O: the origin of the ray, which starts at the observer.
RayD, or D: the direction of the ray which starts at the observer.
N: the normal vector of the object at the point we are considering.
M: the point in space that we are considering.
Our Shading Model
Before we kick off, I want to quickly go over the “science” behind the code. The two types of shading used in the model are summarized below. They relate to light that diffuses, and light that reflects in a specular way.
Lambert shading (diffuse).
This is for materials that scatter light, like a rug. It might seem like this computation needs information about the observer, because the amount of surface area visible to the observer depends on the observer, and how close the observer is to the angle of incidence obviously depends on the observer too. Doing the math, one sees that these two effects perfectly cancel each other out, so we only need to know how directly the light source hits the object.
Therefore it reduces to a simple dot product which does not depend on the observer.
col_ray += DiffuseScalar * max(dot(NormalToSurface, NormalTowardsLight), 0) * RGBOfObject
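The Lambert term above can be sketched in NumPy like this (the vectors and constants here are hypothetical values I chose for illustration, not the original's):

```python
import numpy as np

def normalize(v):
    # Scale a vector to unit length.
    return v / np.linalg.norm(v)

# Hypothetical geometry: a floor-like surface and a light 45 degrees off.
N = normalize(np.array([0., 1., 0.]))      # NormalToSurface
ToL = normalize(np.array([1., 1., 0.]))    # NormalTowardsLight
diffuse_c = 1.                             # DiffuseScalar
color = np.array([1., 0., 0.])             # RGBOfObject

# Lambert term: only the cosine between N and ToL matters, clamped at
# zero so that surfaces facing away from the light receive none of it.
col_ray = diffuse_c * max(np.dot(N, ToL), 0) * color
```

Note there is no ToO anywhere in the formula: the observer really has dropped out.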
Blinn-Phong Shading (specular).
This is for shiny materials that act like mirrors. Just from experience, most would guess that this does depend on the viewer. Given a simple light source, a mirror reflects light to the observer best when the normal vector of the mirror is normalize(ToO + ToL), which puts the observer and the light at a perfect angle of incidence. In this shading model, we measure how close the surface normal is to putting us at perfect incidence, and raise that to a power for a quicker drop-off.
col_ray += ScalarConstant * max(dot(NormalToSurface, normalize(NormalTowardsLight + NormalTowardsObserver)), 0) ** specular_k * colorOfLight
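And a sketch of the Blinn-Phong term in NumPy (again with hypothetical vectors; specular_k = 50 is a typical exponent, not necessarily the one in the gist):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical geometry: light and observer symmetric about the normal.
N = normalize(np.array([0., 1., 0.]))      # NormalToSurface
ToL = normalize(np.array([1., 1., 0.]))    # NormalTowardsLight
ToO = normalize(np.array([-1., 1., 0.]))   # NormalTowardsObserver
specular_c = 1.                            # ScalarConstant
specular_k = 50                            # higher k = tighter highlight
color_light = np.ones(3)                   # colorOfLight

# Half-vector: when N lines up with H, the observer sits at the perfect
# mirror angle and the highlight is brightest.
H = normalize(ToL + ToO)
col_ray = specular_c * max(np.dot(N, H), 0) ** specular_k * color_light
```

In this symmetric setup H equals N exactly, so the term hits its maximum.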
Setting the Scene
Though the code is in a different order, to me it makes sense to start by explaining the functions which create the objects in the scene. There is a sphere class and a plane class, both defined by the functions below.
A sphere is defined by its radius and center, and a plane is defined by a point on it and its normal vector. The diffuse, specular and reflection values are constants which will later be used to weight the shading methods described above.
Now we simply define a scene.
Some Helpful Functions
Now that we have set everything up, let's define some nice functions for working with these objects.
Our first goal is to write the function intersect, which simply tells us, for a given object in the scene, how far away our first intersection with it is. This is obviously important for vision, as the first object this “ray” intersects is the first object that we will see.
There are only two classes of objects in the image, spheres and planes, so we do this for both of them, and then string them together.
First we do this for a sphere; the only inputs we need are the radius R and the center C.
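The sphere case reduces to solving a quadratic: points on the ray are O + t·D, and we ask where their distance to C equals R. A sketch modeled on the gist:

```python
import numpy as np

def intersect_sphere(O, D, C, R):
    # Solve |O + t*D - C|^2 = R^2, i.e. a*t^2 + b*t + c = 0 in t.
    OC = O - C
    a = np.dot(D, D)
    b = 2 * np.dot(D, OC)
    c = np.dot(OC, OC) - R * R
    disc = b * b - 4 * a * c
    if disc > 0:
        # Numerically stable form of the quadratic formula.
        q = (-b - np.sqrt(disc)) / 2. if b < 0 else (-b + np.sqrt(disc)) / 2.
        t0, t1 = sorted((q / a, c / q))
        # Return the nearest intersection in front of the ray origin.
        if t1 >= 0:
            return t1 if t0 < 0 else t0
    return np.inf   # miss
```

For example, a ray from the origin shot down the z-axis at a unit sphere centered at (0, 0, 5) first hits it at distance t = 4.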
Same principle for a plane, defined by a point P on it and a vector N normal to it.
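The plane case is even simpler: project everything onto the normal. A sketch, again modeled on the gist:

```python
import numpy as np

def intersect_plane(O, D, P, N):
    # Distance t at which O + t*D lands on the plane through P with normal N.
    denom = np.dot(D, N)
    if np.abs(denom) < 1e-6:
        return np.inf              # ray runs parallel to the plane: no hit
    d = np.dot(P - O, N) / denom
    return d if d >= 0 else np.inf  # hits behind the origin don't count
```

A ray from (0, 1, 0) pointing straight down hits the floor plane y = 0 at distance 1.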
Now we simply write a wrapper defined on the objects in the scene, which are only spheres and planes.
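The wrapper just dispatches on the object's type tag. A self-contained sketch (with condensed versions of the two intersection helpers included so it runs on its own):

```python
import numpy as np

def intersect_plane(O, D, P, N):
    denom = np.dot(D, N)
    if np.abs(denom) < 1e-6:
        return np.inf
    d = np.dot(P - O, N) / denom
    return d if d >= 0 else np.inf

def intersect_sphere(O, D, C, R):
    OC = O - C
    a = np.dot(D, D)
    b = 2 * np.dot(D, OC)
    c = np.dot(OC, OC) - R * R
    disc = b * b - 4 * a * c
    if disc > 0:
        q = (-b - np.sqrt(disc)) / 2. if b < 0 else (-b + np.sqrt(disc)) / 2.
        t0, t1 = sorted((q / a, c / q))
        if t1 >= 0:
            return t1 if t0 < 0 else t0
    return np.inf

def intersect(O, D, obj):
    # Objects are dicts tagged with their type, so dispatch is one branch.
    if obj['type'] == 'plane':
        return intersect_plane(O, D, obj['position'], obj['normal'])
    elif obj['type'] == 'sphere':
        return intersect_sphere(O, D, obj['position'], obj['radius'])
```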
After this we define some more helper functions.
The trace_ray function
If you can believe it, we are ready to write an algorithm, trace_ray, which takes in a ray shot from the observer and returns the object it hits, the point M where it hits, the normal vector N at that point, and the color of the ray.
First things first, we want to find the object that it hits. We simply find the object that minimizes the distance, t, from the observer.
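That search is a plain linear scan over the scene. A sketch (the stand-in intersect here just reads a stored distance, purely so the example is self-contained; the real one is the dispatcher from earlier):

```python
import numpy as np

# Stand-in for illustration: each fake "object" stores the distance at
# which a ray would hit it, with np.inf meaning a miss.
def intersect(rayO, rayD, obj):
    return obj['t']

def nearest_object(rayO, rayD, scene):
    # Keep the smallest intersection distance and the index of its object.
    # obj_idx stays -1 (and t stays inf) when every object is missed.
    t, obj_idx = np.inf, -1
    for i, obj in enumerate(scene):
        t_obj = intersect(rayO, rayD, obj)
        if t_obj < t:
            t, obj_idx = t_obj, i
    return t, obj_idx
```

Because a miss reports np.inf, missed objects can never win the minimum.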
Now we get the info we need to compute the output. Our distance, t, to the nearest object is useful for defining M.
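Concretely, M is just the ray evaluated at distance t (the observer position and direction below are hypothetical example values):

```python
import numpy as np

rayO = np.array([0., 0.35, -1.])   # example observer origin
rayD = np.array([0., 0., 1.])      # example unit direction
t = 2.0                            # distance found by the search above

# The hit point: walk t units along the ray from its origin.
M = rayO + rayD * t
# The normal N at M depends on the object: for a sphere it points from
# the center outward, normalize(M - center); for a plane it is stored.
```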
Now that we have everything nicely defined, we can begin calculating the color. First we check whether any light reaches the object at all, by checking if something blocks it from the light source. After that we simply apply the shading models explained at the beginning, plus a constant ambient term for brightening.
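A sketch of that shading step. The shadow test is summarized in a comment; constants like ambient = .05 are illustrative defaults, not necessarily the original's:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Shadow test (sketch): start slightly off the surface, M + N * .0001, so
# the object cannot shadow itself, then shoot a ray toward the light; if
# any other object intersects it at a finite distance, the point is dark.

def shade(N, toL, toO, color, ambient=.05, diffuse_c=1., specular_c=1.,
          specular_k=50, color_light=np.ones(3)):
    # Ambient: a small constant so shadowed geometry is not pure black.
    col_ray = ambient
    # Lambert (diffuse) term from the beginning of the post.
    col_ray = col_ray + diffuse_c * max(np.dot(N, toL), 0) * color
    # Blinn-Phong (specular) term, via the half-vector.
    col_ray = col_ray + specular_c * \
        max(np.dot(N, normalize(toL + toO)), 0) ** specular_k * color_light
    return col_ray
```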
Setting Up the Ray Tracing Algorithm
We are finally ready to go through the heart of this algorithm.
w and h are the number of pixels in the width and the height of our screen. They are set to 400 and 300 respectively. Q will be explained later. img will become the image posted above.
Now we define the screen we are looking through by giving its two corners. We define the screen so that the ratio of its height to its width is almost equal to the ratio of the number of pixels in the height to the number in the width.
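Concretely, that looks roughly like this (modeled on the gist), where S holds the corners as (x0, y0, x1, y1) and the .25 terms shift the screen upward:

```python
w, h = 400, 300
r = float(w) / h          # aspect ratio of the pixel grid

# Screen corners (x0, y0, x1, y1): x spans [-1, 1] and y spans roughly
# [-1/r, 1/r], so one pixel covers the same length in both directions.
# The .25 offsets tilt the view slightly upward.
S = (-1., -1. / r + .25, 1., 1. / r + .25)
```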
I thought this was weird and got rid of the .25 terms, the result was indistinguishable to me.
The Ray Tracing Algorithm
Behold, the ray tracing algorithm, it is explained below.
Our entire algorithm is a nested for loop, iterating through the x and y values of each pixel. We add a cheeky percent-complete print for the user.
Now we do some setup. We use Q, which is the pixel's position on the screen, to create D, a vector that gives us the direction the camera is pointing in.
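In other words, the ray direction is the unit vector from the observer through the pixel's point on the screen. A sketch (the camera position and pixel coordinates are hypothetical examples):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

O = np.array([0., 0.35, -1.])   # example camera position
Q = np.array([0., 0., 0.])      # point on the screen plane (z = 0)

x, y = 0.5, 0.25                # example pixel's screen coordinates
Q[:2] = (x, y)                  # only x and y change per pixel
D = normalize(Q - O)            # unit direction from the camera through Q
```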
Now we can begin looping though the rays for any given pixel.
For the first layer of depth, we simply shoot a ray in the direction of the pixel and add the color obtained from our tracing algorithm. We keep going only if the current object is reflective. The next ray starts where the last one ended, pointing in the direction of the old ray reflected across N, as expected. This can be seen with simple linear algebra.
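The linear algebra in question: to reflect a direction D across a unit normal N, subtract twice the component of D along N. A sketch of the reflection, with the bounce loop summarized in comments:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def reflect(D, N):
    # Remove the along-normal component of D twice: the tangential part
    # is preserved and the normal part flips sign.
    return normalize(D - 2 * np.dot(D, N) * N)

# Bounce loop (sketch): repeat up to depth_max times, each pass doing
#   rayO, rayD = M + N * .0001, reflect(rayD, N)   # restart off the surface
#   col += reflection * col_ray                    # add this bounce's color
#   reflection *= obj['reflection']                # dim each further bounce

# A ray going straight down bounces straight back up off a floor:
D = np.array([0., -1., 0.])
N = np.array([0., 1., 0.])
```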