Raymarching

Simon Ashbery
8 min read · May 29, 2019

This article is part of a series; you can find the others here:

Raymarching is a specialised relative of raytracing, built for a very particular purpose.

It lets you do cool stuff like this.

It is astonishingly efficient compared to traditional raytracing, and whilst it is initially more expensive than rasterization to get running, once you have paid that cost it is very cheap to extend, manipulate and add to the process.

That said, raymarching doesn’t render complex triangular meshes. Instead it works with mathematical formulae called signed distance functions (SDFs), which describe shapes that can be (relatively) easily expressed mathematically.

The simplest of these is a sphere. All you need to work out the size and location of a sphere is a point in 3D space (expressed as XYZ coordinates, e.g. (1, 0, 1)) and a radius, e.g. 5.

So instead of a mesh of folded triangles, which could have thousands of associated numbers, we have four numbers, which is a pretty impressive bit of spring cleaning. The tricky part, though, is how we transfer that information to the screen.

An SDF can very elegantly describe a shape (or, it would be fairer to say, a volume) in 3D space, but it only works when you hand it a point in space to check against. It then returns a positive value if the point is outside the volume, a negative value if it is inside the volume, and zero if it is at the surface.

For a sphere, this number is the distance from the given point in space to the centre of the sphere, minus the radius of the sphere.
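To make that concrete, here is a minimal sketch in JavaScript of what the SphereSDF used later in this article might look like (the class shape is an assumption on my part, not from any particular library):

class SphereSDF {
  constructor(x, y, z, radius) {
    this.center = { x, y, z };
    this.radius = radius;
  }

  // Returns the signed distance from a point to the sphere's surface:
  // positive outside, negative inside, zero on the surface.
  distance(point) {
    const dx = point.x - this.center.x;
    const dy = point.y - this.center.y;
    const dz = point.z - this.center.z;
    return Math.sqrt(dx * dx + dy * dy + dz * dz) - this.radius;
  }
}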

We could iterate through every point in our 3D space and pass it to the SDF, but that would be hugely resource intensive, and wouldn’t actually give us what we need to render the scene in relation to the camera.

So it would make sense to only check points that are visible to the camera. To do so, we can cast rays out of the camera lens, just as we do in backward raytracing, through the image plane, and pass each point along the ray into the SDF until the function returns zero. Brilliant!

Only, that is still a hell of a lot of points to check and would be enormously expensive to process.

So how about we reduce the number of points we check by taking longer steps along the ray? That would bring the overhead down a lot, but we run the risk of stepping right past the surface of the volume defined by the SDF, at which point it won’t render properly.

And this is where raymarching saves us.

Raymarching employs a very elegant technique to ensure we are taking the minimum number of steps needed without blasting straight past the surface we are trying to land on. This technique is called sphere tracing.

Remember how we said that all you need to define a sphere is a point in space and a radius? We have our point in space, which is the camera, and for our radius we can grab the piece of geometry that is closest to this point. We don’t need to know exactly where that piece of geometry is, just how far it is from our starting point.

Fortunately our SDF provides this distance to us. If you recall, when we pass a point that sits outside the defined volume into an SDF, it returns a positive number; that number is our distance from the surface of said volume.

Even if we have a bunch of different shapes in a scene, each defined by its own SDF, we can iterate through them all, passing our starting point into each one, and the smallest number we get back is the distance to the closest surface.

Once we know for sure that this is the closest surface to our point, we can use the number returned from the SDF as our sphere’s radius and voila, we have safely cordoned off a sphere of 3D space which we know is empty and thus safe to move in.
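In code, that check is just a minimum taken over every shape’s SDF. Here is a minimal sketch of the getSafeDistance helper used in the snippet below, assuming each shape exposes a distance method:

const getSafeDistance = (point, geometry) =>
  geometry.reduce(
    (closest, shape) => Math.min(closest, shape.distance(point)),
    Infinity
  );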

Knowing that we can move freely through this space, we can advance (or march) along our ray to the surface of our “safety sphere”, get a new point in 3D space, and start the process again.

We can continue this process over and over until the number returned by an SDF is zero (or as close to zero as we are willing to tolerate), at which point we know we have hit the surface of our SDF-defined sphere and we can go back to our pixel and tell it to render the appropriate colour!

Back to the chimpanzee-headbutting-a-keyboard JavaScript!

const camera = new Camera(0, 0, 0);
const sphere = new SphereSDF(1, 0, 1, 5);
const scene = new Scene(camera, [sphere]);

const { imagePlane } = camera;
const { geometry } = scene;

const maxRayDistance = 1000;
const minDistanceTolerance = 0.001;

imagePlane.pixels.forEach(pixel => {
  const ray = castRay(pixel, camera);

  while (ray.distance < maxRayDistance && !ray.collided) {
    const { currentStep } = ray;
    // The distance to the closest surface in the scene: our safe radius.
    const safeDistance = getSafeDistance(currentStep, geometry);

    if (safeDistance > minDistanceTolerance) {
      // Still in open space: march along the ray by the safe distance.
      ray.currentStep = march(ray, safeDistance);
      ray.distance += safeDistance;
      continue;
    }

    // Close enough to the surface to call it a hit.
    ray.collided = true;
  }

  // Only colour the pixel if the ray actually hit something.
  if (ray.collided) {
    pixel.color = sphere.color;
  }
});

Again, for simplicity’s sake, we’re just going to use a single sphere. In the outer scope we start the same way we did with backward raytracing: we iterate through each pixel in our image plane and cast a ray from the camera through it.

Then, while the ray has yet to collide and is within our maximum distance (if we miss every SDF in the scene we don’t want to keep calculating forever), we get the distance to the closest surface and march along the ray to our next point.

We repeat this process until we exhaust our maximum distance or our safe distance is sufficiently low that we can say we have hit the surface.

Signed distance function scenes can be illuminated with lights and shaders, and their surfaces can have various colour properties, but for now we are just going to set our pixel’s colour to match the colour of our sphere.

“This seems like a very clever way to produce a sphere, but erm, why?” you query. You mock me with your words but it is a good question.

As I mentioned before, raymarching has very specific and powerful applications. Even though you could make this same scene with a rasterized or raytraced mesh, it would be orders of magnitude more expensive to do so. That in and of itself is cool, but it is in this resource efficiency that raymarching finds its power.

First, consider a sphere made from a mesh next to one defined by an SDF.

If you increase the number of triangles in a mesh sphere you can make it look perfectly round, but this costs more resources, and when you zoom in far enough you can see that it is a bunch of hard edges pretending to be a curve.

Like three children in a trench-coat trying to get into Terminator 2.

Not so with the SDF sphere. Because it is the result of a mathematical formula that can be recalculated on the fly, you can zoom in as much as you want, increase the resolution or look at it from any angle and it will always be a perfect curve.

You can also make SDFs interact with each other to make new, more complex shapes:

You can combine, subtract and intersect SDFs:

Íñigo Quílez ( https://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm )

Each of these defines a new SDF by mashing two others together and applying a simple operation to them. You can perform a similar process with traditional rendering, but it tends to be pre-processed because it is far more complex than working with SDFs. When you’re raymarching, though, you have so much room to play with that you can do all of these operations on the fly.

Alan Zucconi, (https://www.alanzucconi.com/2016/07/01/signed-distance-functions/)
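As a rough sketch, each of those boolean operations boils down to a one-liner on the distances returned by two SDFs (the function names here are my own; Quílez’s article linked above has the canonical formulations):

// Combine: whichever surface is closer wins.
const union = (d1, d2) => Math.min(d1, d2);

// Intersect: keep only the space inside both volumes.
const intersect = (d1, d2) => Math.max(d1, d2);

// Subtract: carve the first volume out of the second.
const subtract = (d1, d2) => Math.max(-d1, d2);

// E.g. an SDF for a box (hypothetical boxSDF) with our sphere cut out of it:
const sceneDistance = point =>
  subtract(sphere.distance(point), boxSDF.distance(point));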

You can even manipulate values in an SDF to round out edges, twist shapes, move them around, animate them and more.

Íñigo Quílez ( https://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm )
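Rounding, for example, is about as simple as these manipulations get: subtracting a constant from a distance field pushes the surface outwards by that amount, inflating sharp edges into curves. A sketch, reusing the hypothetical boxSDF from above:

const rounded = (distance, radius) => distance - radius;

// A box with its edges rounded off by 0.2 units.
const roundedBoxDistance = point => rounded(boxSDF.distance(point), 0.2);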

Because of how the raymarching algorithm works you can also get a lot of effects for free that cost a lot with other rendering methods.

For a glow, you can track any rays that don’t hit a surface but record one or more steps smaller than a given value, and then assign a colour to those rays. Boom, glow!

Flafla2, ( http://flafla2.github.io/2016/10/01/raymarching.html )
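A sketch of that glow idea, bolted onto the marching loop from earlier (glowThreshold and glowColor are illustrative values of my own):

// Before the loop: ray.closestApproach = Infinity;
// Inside the loop, remember the closest we ever got to any surface:
ray.closestApproach = Math.min(ray.closestApproach, safeDistance);

// After the loop: rays that missed but grazed a surface get a glow.
if (!ray.collided && ray.closestApproach < glowThreshold) {
  pixel.color = glowColor;
}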

Or ambient occlusion, a shading technique that darkens nooks and crannies, replicating how light is occluded from such areas in the real world.

If you take any rays that do hit an SDF and then fire another ray perpendicular to that surface, you can step along it and get the distance to the nearest surface (other than the one you just collided with).

If the distance at each of those steps is small, the area must be closely surrounded by other geometry, and so is rendered darker than other areas.

http://9bitscience.blogspot.com/2013/07/raymarching-distance-fields_14.html
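A sketch of that ambient occlusion pass, assuming we already have the hit point and the surface normal (the step count and weighting here are illustrative):

const ambientOcclusion = (hitPoint, normal, steps = 5, stepSize = 0.1) => {
  let occlusion = 0;
  for (let i = 1; i <= steps; i++) {
    const travelled = i * stepSize;
    const sample = {
      x: hitPoint.x + normal.x * travelled,
      y: hitPoint.y + normal.y * travelled,
      z: hitPoint.z + normal.z * travelled,
    };
    // In open space the scene SDF roughly equals the distance travelled;
    // any shortfall means nearby geometry is crowding in.
    occlusion += travelled - getSafeDistance(sample, geometry);
  }
  return Math.max(0, 1 - occlusion); // 1 = fully open, 0 = fully occluded
};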

Both of these effects are quite expensive to achieve with traditional renderers but in raymarching they are virtually free.

There are a plethora of other effects and techniques that can be employed, but for me the crowning wonder of raymarching is in mirroring and repetition.

Because a scene of SDFs is just a product of a bunch of (relatively) simple formulae, it is trivial to package that scene up, apply some simple mathematical transformations and repeat it again and again and again, offsetting it a little every time.

CodeParade (https://www.youtube.com/watch?v=svLzmFuSBhk)
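The classic trick is to fold space itself before sampling the SDF: wrap each coordinate with a modulo so that every grid cell maps back onto the same small neighbourhood around the origin. A sketch (the wrap helper is my own):

// Wrap a coordinate into [-cellSize/2, cellSize/2), tiling all of space.
const wrap = (value, cellSize) =>
  (((value + cellSize / 2) % cellSize) + cellSize) % cellSize - cellSize / 2;

// Repeat any SDF infinitely on a 3D grid.
const repeated = (sdf, cellSize) => point =>
  sdf({
    x: wrap(point.x, cellSize),
    y: wrap(point.y, cellSize),
    z: wrap(point.z, cellSize),
  });

// E.g. our single sphere, repeated every 4 units in all directions:
const infiniteSpheres = repeated(p => sphere.distance(p), 4);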

As with the curve of the sphere, this repetition is the result of a function that can be calculated very quickly and efficiently, so you can zoom and navigate through this alien landscape as much as you want and it will not melt your computer (guaranteed).

You can also manipulate the values of the underlying SDFs on the fly and create an amazing living, breathing fractal world, far beyond anything you might build within the limits of triangular meshes.

Mikael Hvidtfeldt Christensen (http://blog.hvidtfeldts.net/index.php/2011/08/distance-estimated-3d-fractals-iii-folding-space/)

To the conclusion >

