Raytracing

Simon Ashbery
8 min read · May 29, 2019

This article is part of a series; you can find the others here:

To begin understanding raytracing, consider how illumination and visibility work in real life. A light source casts out an unfathomable number of rays (made up of photons). Some of those rays will collide with a surface of one kind or another, and in doing so they will be transformed by the surface in question. In such a collision, light can be absorbed, transmitted or reflected.

  • Absorbed light gives us colours. The surface a ray collides with absorbs some of the light and prevents it from reflecting onward. For instance, if a surface absorbs all the red and green light from a ray, the remaining light will be blue (there is a small code sketch of this just after the list).
  • Transmission permits light to pass through a surface giving us transparency and translucency.
  • Reflection is fairly self-explanatory: reflected light bounces from the surface and continues on its merry way, like the blue light in our absorption example.
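To make that absorption idea concrete, here is a minimal, standalone sketch of how it is commonly modelled in code. This is my own illustration rather than anything from a real renderer: colours are plain { r, g, b } objects in the 0 to 1 range, and the surface's reflectance values are made up for the example.

// A surface that absorbs all red and green light reflects only blue.
const surfaceReflectance = { r: 0.0, g: 0.0, b: 1.0 };

// White light arriving at the surface.
const incomingLight = { r: 1.0, g: 1.0, b: 1.0 };

// Absorption: each channel of the incoming light is scaled by how much of
// that channel the surface reflects onward rather than soaks up.
const reflectedLight = {
  r: incomingLight.r * surfaceReflectance.r,
  g: incomingLight.g * surfaceReflectance.g,
  b: incomingLight.b * surfaceReflectance.b,
};

console.log(reflectedLight); // { r: 0, g: 0, b: 1 }, only the blue survives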

We can model this process in a computer and use it to render scenes like we saw with rasterization.

We call this process forward raytracing as it happens in the order you would expect in real life (there is a toy code sketch of this just after the list):

  1. A light source in 3D space casts a ton of rays out.
  2. Some of those rays collide with surfaces.
  3. The rays then bounce off that surface, transformed by the interaction (absorption, transmission and reflection).
  4. They continue along colliding and bouncing until they find their way back to the camera or drift off into digital oblivion.
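As a rough illustration of how wasteful this is, here is a toy, runnable sketch. It is entirely my own and not real rendering code: a light fires a million rays in random directions, and we count how few of them happen to point anywhere near the camera.

const RAY_COUNT = 1000000;

// Pretend the camera only covers a sliver of directions, half a degree wide.
const CAMERA_DIRECTION = 0; // radians
const CAMERA_HALF_ANGLE = (0.25 * Math.PI) / 180;

let usefulRays = 0;
for (let i = 0; i < RAY_COUNT; i += 1) {
  // Each ray leaves the light in a random direction between -PI and PI.
  const rayDirection = Math.random() * 2 * Math.PI - Math.PI;
  if (Math.abs(rayDirection - CAMERA_DIRECTION) < CAMERA_HALF_ANGLE) {
    usefulRays += 1; // this ray would actually contribute to the image
  }
}

console.log(`${usefulRays} of ${RAY_COUNT} rays were any use to the camera`);

Everything else gets computed and then thrown away, which is exactly the problem described next.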

Of all the rays cast, only a few will ever make it back to the camera. The ones that do intersect with a pixel in the image plane and, as we saw with rasterization, impart their colour information.

The vast majority of the rays will continue bouncing for a set maximum number of bounces, or they will continue to a maximum distance, run out of puff and decide to head home before they get too shagged out to put the dinner on.

Whatever the outcome, ultimately the overwhelming majority of the rays serve no purpose for rendering a scene and so occupy a lot of valuable computational resources for no reason.

Despite being more physically accurate, forward raytracing is the Apple AirPods of rendering techniques: it’s very expensive and only superficially more valuable.

So how can we leverage the power of raytracing without heating our computers up to temperatures not seen outside of Satan’s sauna?

How about we do it backwards instead by using the cleverly named backward raytracing?

Note:

Many implementations, including several of the examples below, use a hybridised approach with aspects of both forward and backward raytracing, but backward raytracing remains at their core. At least to my understanding.

In simple terms, backward raytracing flips forward raytracing on its head. Instead of casting rays from a light source and hoping some of them hit the camera, the camera casts a ray out through every pixel in its image plane and then on into the scene.

It then tracks any collisions, and at the collision point the ray bounces away towards a light source. If the ray makes it to said light, we know the surface is illuminated; if it hits another surface first, we know it is in shadow.
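Here is a minimal, runnable sketch of that shadow test. It is my own example, not code from the article or any engine: a single collision point, one light, and a list of spheres to check for anything blocking the way.

const spheres = [{ center: { x: 0, y: 2, z: 0 }, radius: 0.5 }];
const light = { x: 0, y: 5, z: 0 };
const hitPoint = { x: 0, y: 0, z: 0 };

function subtract(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function normalize(v) {
  const len = Math.sqrt(dot(v, v));
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Standard ray/sphere intersection: does a ray starting at `origin` and
// travelling along the normalized `direction` hit the sphere at all?
function intersectsSphere(origin, direction, sphere) {
  const oc = subtract(origin, sphere.center);
  const b = 2 * dot(oc, direction);
  const c = dot(oc, oc) - sphere.radius * sphere.radius;
  const discriminant = b * b - 4 * c;
  if (discriminant < 0) return false;
  const t = (-b - Math.sqrt(discriminant)) / 2;
  return t > 0.001; // small epsilon so a surface doesn't shadow itself
}

const towardsLight = normalize(subtract(light, hitPoint));
const inShadow = spheres.some(s => intersectsSphere(hitPoint, towardsLight, s));

console.log(inShadow ? "point is in shadow" : "point is lit");

A fuller version would also check that the blocking surface actually sits between the point and the light, but the idea is the same: fire a ray towards the light and see whether anything gets in the way.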

Note:

The number of times a ray is allowed to bounce has a strong impact on performance and the quality of the final image: the more bounces, the better the fidelity, but the more it costs.

The number of bounces can be configured to find the right balance between cost and quality, which Sebastian Lague does a great job of demonstrating here.

Once a ray finishes its journey the path of the ray is traced back to the camera and its eventual colour value is used to determine the colour of its intersecting pixel.

Aristotle would be so pleased

Since we only need to account for rays that actually make it back to the camera, backward raytracing dramatically decreases the performance overhead required when compared to forward raytracing.

That said, it is still not a cheap process by any means. It gets very complex very quickly, and while raytracing has been used in pre-rendering for many years, finding a workable solution to real-time raytracing has been a bit of a holy grail. We are only now seeing the first generations of commercial-grade real-time raytracing, and they are incredibly resource intensive.

Gosh but it’s pretty though.

Epic Games, NVIDIA, ILMxLAB and Disney

Straight out of the box with raytracing, we have a technique that closely mimics how lighting works in real life (even if the process is reversed, the steps are mostly the same). It stands to reason, then, that this helps make lighting behave more realistically, which it does.

But this is where it gets really funky. What do you suppose happens to the ray during all of those collisions and bounces? I mentioned before that the information the ray carries is transformed through absorption, transmission and reflection but what does this mean in practice?

Once the ray has hit a surface and bounced away towards a light, it can continue to hit things, transforming itself with each collision and illuminating surfaces as it goes.

We can observe this behaviour in real life. If you hold a brightly coloured object up to a piece of white paper, you will likely see the object lending its colour to the paper: the object absorbed some of the light and reflected what remained, which is in turn reflected (bounced) from the paper.

This same process is happening all around us and is what helps make the real world look so rich and vibrant. No shadow is ever truly black; it instead takes on gentle pieces of illumination from the world around it. Surfaces that may seem totally rough are in fact subtly reflective, and perfectly solid surfaces can be slightly translucent, allowing some light to pass through them.

If you find yourself in a gallery looking at a Rembrandt or a Monet, really scrutinise the shading in the shadows and highlights and you will see a rich tapestry of varying hues and shades which come together from afar to appear as something breathing, illuminated and alive.

The Man with the Golden Helmet by Rembrandt.
The Haystacks, End of Summer by Monet

The rays of light bounce through the scene, transforming and picking up information from each surface they bounce off, and finally they hand that information over to their intersecting pixels, which can then work out which colour they should be.

The results on a rendered screen are incredible, static lifeless scenes suddenly appear rich with a life and movement of their own.

METRO Exodus NVIDIA RTX demos by Nikita Shilkin (https://www.artstation.com/artwork/vn1Bx), Metro Exodus by 4A Games, published by Deep Silver

This technique gives us a lot of great tricks for “free.” Reflections, for instance, have always been a problem with rasterization because there is no physical basis for how they should work when you’re firing rays from a vertex to a camera; instead they had to be faked, only somewhat convincingly.

Use of a reflection map shown in Satisfactory by Coffee Stain Studios.

With raytracing, if a ray bounces off a fully reflective surface, it knows it can retain all (or most) of its colour values unchanged and carry them off to the camera, now coming from a new point in space and, boom, reflections.

Battlefield V by DICE and EA Games
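To sketch what that might look like in code (again my own simplified example, not how any particular engine does it), imagine the colour a ray delivers being a blend of the surface's own colour and the colour arriving along the reflected direction, weighted by how reflective the surface is.

function shade(surfaceColor, reflectedColor, reflectivity) {
  // The more reflective the surface, the more of the incoming colour it
  // passes on unchanged and the less of its own colour it contributes.
  return {
    r: surfaceColor.r * (1 - reflectivity) + reflectedColor.r * reflectivity,
    g: surfaceColor.g * (1 - reflectivity) + reflectedColor.g * reflectivity,
    b: surfaceColor.b * (1 - reflectivity) + reflectedColor.b * reflectivity,
  };
}

const mirror = { r: 0.9, g: 0.9, b: 0.9 };  // the surface's own grey tint
const redWall = { r: 1.0, g: 0.1, b: 0.1 }; // what the reflected ray saw

// A perfect mirror simply hands the red wall's colour on to the camera...
console.log(shade(mirror, redWall, 1.0)); // { r: 1, g: 0.1, b: 0.1 }

// ...while a barely reflective surface mostly shows its own colour.
console.log(shade(mirror, redWall, 0.1));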

If we build another simplified JavaScript function for backward raytracing we could explain it thus:

(For the sake of simplicity I’m going to assume there is only one light.)

const camera = new Camera(0, 0, 0);
const sphere = new Sphere(0, 0, 1);
const light = new Light(1, 1, 0);
const scene = new Scene(camera, [sphere], light);

const { imagePlane } = camera;
const maxRayBounces = 3;
const maxRayDistance = 1000;

imagePlane.pixels.forEach(pixel => {
  // Cast a ray from the camera out through this pixel of the image plane.
  const ray = castRay(pixel, camera);

  while (ray.distance < maxRayDistance && ray.bounces < maxRayBounces) {
    advance(ray);

    scene.geometry.forEach(shape => {
      const collision = checkCollision(shape, ray);

      if (collision) {
        // Bounce the ray away towards the light and remember the collision
        // so the colour can be worked out later.
        ray.direction = calculateDirection(collision, light);
        ray.collisions.push(collision);
        ray.bounces += 1;
        return;
      }
    });

    ray.distance += 1;
  }

  // Trace back along the recorded collisions to decide this pixel's colour.
  pixel.color = calculateColor(ray.collisions);
});

Again this is a heavily simplified explanation rather than actual working code. It illustrates that:

  1. We iterate through every pixel in the image plane, casting a ray from the camera out through it.
  2. While we are within our maximum allowed distance and bounces, we advance the ray forward and check all of the objects in the scene to see if the ray collides with any of them.
  3. If we do collide, we calculate a new direction from the collision point to the light (you can also see this as firing a new ray), we record the collision so we can step through it later, and we increase the number of bounces taken thus far.
  4. If there is no collision we record the increase in distance travelled and advance the ray again.
  5. Once we have exhausted either our max distance or bounces, we set the pixel’s colour by tracing back along the path of our ray from the light (or nothingness) and working out how the ray’s colour transforms at each collision (a rough sketch of this step follows the list).
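Step 5 is the interesting part, so here is a rough, runnable guess at what a calculateColor-style helper could look like. None of this comes from the article's code: the surfaceColor fields and the extra lightColor parameter are my own assumptions, and a real implementation would be far more involved.

function calculateColor(collisions, lightColor) {
  // A ray that never reached anything leaves its pixel black.
  if (collisions.length === 0) return { r: 0, g: 0, b: 0 };

  // Start with the light's colour and apply each surface's tint in turn,
  // walking from the collision nearest the light back towards the camera.
  return collisions.reduceRight(
    (color, collision) => ({
      r: color.r * collision.surfaceColor.r,
      g: color.g * collision.surfaceColor.g,
      b: color.b * collision.surfaceColor.b,
    }),
    lightColor
  );
}

// White light that bounced off a blue sphere and then a white floor on its
// way to the camera ends up tinting the pixel blue.
const collisions = [
  { surfaceColor: { r: 1.0, g: 1.0, b: 1.0 } }, // floor, hit first by the eye ray
  { surfaceColor: { r: 0.2, g: 0.3, b: 1.0 } }, // blue sphere, hit second
];
console.log(calculateColor(collisions, { r: 1, g: 1, b: 1 }));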

You can see how this could get very complex very quickly; with more scenery, lights and pixels to deal with, it becomes a very expensive process. Still, it does look very pretty.

Remedy Entertainment, Northlight and Nvidia

To raymarching >


Simon Ashbery

Artist, game developer, software engineer, bipedal Labrador