CSG, fisheye and spotlights.

Monday, 5th May 2008

One way of constructing solids is to use a method named constructive solid geometry (or CSG for short). I added two simple CSG operators - intersection and subtraction - that both take two surfaces and return the result.
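
In sketch form, the filtering rule works out to something like this (Hit and ISolid here are stand-ins for the raytracer's own types, not its actual classes): a hit on A belongs to A - B if it lies outside B, and a hit on B belongs if it lies inside A (with the normal flipped, since the carved hole faces the other way); intersection keeps hits on either solid that lie inside the other.

using System.Collections.Generic;
using System.Numerics;

// Illustrative sketch only - Hit and ISolid stand in for the raytracer's
// own types, and boundary edge cases are ignored. The combined hits are
// not re-sorted here.
struct Hit
{
	public Vector3 Point, Normal;
	public float Distance;
	public Hit(Vector3 p, Vector3 n, float d) { Point = p; Normal = n; Distance = d; }
}

interface ISolid
{
	IEnumerable<Hit> Hits(Vector3 origin, Vector3 direction); // all hits, near to far
	bool Contains(Vector3 point);                             // solid inside-test
}

static class Csg
{
	// Subtraction: keep hits on A outside B, plus hits on B inside A
	// (normals flipped so the carved surface faces outwards).
	public static IEnumerable<Hit> Subtract(ISolid a, ISolid b, Vector3 o, Vector3 d)
	{
		foreach (var h in a.Hits(o, d))
			if (!b.Contains(h.Point)) yield return h;
		foreach (var h in b.Hits(o, d))
			if (a.Contains(h.Point)) yield return new Hit(h.Point, -h.Normal, h.Distance);
	}

	// Intersection: keep hits on either solid that lie inside the other.
	public static IEnumerable<Hit> Intersect(ISolid a, ISolid b, Vector3 o, Vector3 d)
	{
		foreach (var h in a.Hits(o, d))
			if (b.Contains(h.Point)) yield return h;
		foreach (var h in b.Hits(o, d))
			if (a.Contains(h.Point)) yield return h;
	}
}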

[Image: 2008.04.29.07.png]

In the above image, the surface that makes up the ceiling is created by subtracting a sphere from a plane. Of course, much more interesting examples can be created. For example, here is the surface created by taking a sphere and subtracting three cylinders from it (each cylinder points directly along an axis).

[Image: 2008.04.29.08.png]

One problem with the camera implementation was that it couldn't be rotated. As a first step towards fixing this, I used a spherical-to-cartesian conversion to generate the rays - which has the side effect of producing images with "fisheye" distortion.
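
Something along these lines (a sketch, not the actual camera code - the field-of-view handling and axis conventions here are guesses):

using System;
using System.Numerics;

// Sketch of spherical ray generation; conventions are illustrative.
static class FisheyeCamera
{
	public static Vector3 RayDirection(int x, int y, int width, int height, float fov)
	{
		// Map the pixel to a yaw/pitch pair; spreading the rays by
		// angle rather than across a flat image plane is what
		// produces the fisheye look.
		float yaw   = (x / (float)width  - 0.5f) * fov;
		float pitch = (y / (float)height - 0.5f) * fov;

		// Spherical to cartesian conversion (already unit length).
		return new Vector3(
			MathF.Sin(yaw) * MathF.Cos(pitch),
			MathF.Sin(pitch),
			MathF.Cos(yaw) * MathF.Cos(pitch));
	}
}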

[Images: 2008.04.30.02.png, 2008.04.30.03.png]

The above left image also demonstrates a small amount of refraction through the surface - something that I've not got working properly. The above right image is the result of the intersection of three cylinders aligned along the x, y and z axes.

To try and combat the fisheye distortion, I hacked together a simple matrix structure that could be used to rotate the vectors generated by the earlier linear projection. The result looks a little less distorted!
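
The idea, roughly (using System.Numerics.Matrix4x4 here in place of the hacked-together matrix struct):

using System.Numerics;

// Sketch: generate a linear-projection ray, then rotate it.
static class RotatedCamera
{
	public static Vector3 RayDirection(int x, int y, int width, int height, float yaw)
	{
		// Plain linear projection through an image plane one unit
		// in front of the camera.
		var dir = Vector3.Normalize(new Vector3(
			x / (float)width - 0.5f,
			0.5f - y / (float)height,
			1f));

		// Rotate the generated ray rather than distorting the projection.
		return Vector3.TransformNormal(dir, Matrix4x4.CreateRotationY(yaw));
	}
}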

[Images: 2008.05.02.01.png, 2008.05.02.02.png]

The final addition to the raytracer before calling it a day was a spotlight. The spotlight has an origin, like the existing point light - but it adds a direction (in which it points) and an angle to specify how wide the beam is. In fact, there are two angles - if you're inside the inner one, you're fully lit; if you're outside the outer one, you're completely in the dark; if you're between the two then the light's intensity is blended accordingly.
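
The intensity calculation works out to something like this sketch (names and conventions are mine, not the project's):

using System;
using System.Numerics;

// Sketch of the two-angle spotlight falloff.
class Spotlight
{
	public Vector3 Origin;
	public Vector3 Direction;  // normalised beam direction
	public float InnerAngle;   // fully lit inside this cone (radians)
	public float OuterAngle;   // completely dark outside this one

	// How strongly (0..1) the spotlight illuminates the given point.
	public float Intensity(Vector3 point)
	{
		var toPoint = Vector3.Normalize(point - Origin);
		float angle = MathF.Acos(Vector3.Dot(Direction, toPoint));

		if (angle <= InnerAngle) return 1f;  // inside the inner cone
		if (angle >= OuterAngle) return 0f;  // outside the outer cone

		// Between the two: blend the intensity linearly.
		return (OuterAngle - angle) / (OuterAngle - InnerAngle);
	}
}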

[Image: 2008.05.05.01.png]

In the above screenshot, a spotlight is shining down and backwards away from the camera towards a grid of spheres.

If you're interested in an extremely slow, buggy, and generally badly written raytracer, I've uploaded the C# 3 source and project file. The majority of the maths code has been pinched and hacked about from various sites on the internet, and there are no performance optimisations in place. I do not plan on taking this project any further.

[Image: 2008.05.05.02.png]

Building and running the project will result in the above image, though you may well wish to put the kettle on and make yourself a cup of tea whilst it's working. :)

Cylinders and translucent surfaces

Tuesday, 29th April 2008

[Images: 2008.04.27.05.png, 2008.04.27.06.png]

The first addition to the raytracer was a cylindrical surface, represented by two end points and a radius. In the two screenshots above, the cylinder is infinitely long - not very useful. However, by calculating the point on the cylinder's axis that is closest to the struck point on its surface you can work out how far along its axis you are, and from that whether you are between either of the cylinder's ends.
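
That test boils down to a projection onto the axis - something like this sketch:

using System.Numerics;

// Sketch: is a point on the infinite cylinder between the two ends?
static class CylinderExtents
{
	public static bool BetweenEnds(Vector3 hit, Vector3 endA, Vector3 endB)
	{
		var axis = endB - endA;
		// Project the struck point onto the axis: t is 0 at endA
		// and 1 at endB.
		float t = Vector3.Dot(hit - endA, axis) / axis.LengthSquared();
		return t >= 0f && t <= 1f;
	}
}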

[Images: 2008.04.28.01.png, 2008.04.28.02.png]

The cylinder can have its ends optionally capped. To add the caps, you can create a plane with a normal pointing in the direction of the cylinder's axis. If the ray collides with the plane, you can then calculate the distance between the point struck on it and the end coordinate of the cylinder. If this distance is smaller than the radius of the cylinder, you've hit one of the end caps.
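
In sketch form (assuming a normalised axis direction; not the project's actual code):

using System;
using System.Numerics;

// Sketch of the end-cap test: intersect the plane through the end
// point, then check the hit lies within one radius of it.
static class CylinderCaps
{
	public static bool HitsCap(Vector3 rayOrigin, Vector3 rayDir,
	                           Vector3 capCentre, Vector3 axisDir, float radius,
	                           out Vector3 hit)
	{
		hit = default(Vector3);

		// Ray/plane intersection, where the plane's normal is the
		// cylinder's axis.
		float denom = Vector3.Dot(rayDir, axisDir);
		if (MathF.Abs(denom) < 1e-6f) return false;  // ray parallel to the cap
		float t = Vector3.Dot(capCentre - rayOrigin, axisDir) / denom;
		if (t < 0f) return false;                    // cap is behind the ray

		hit = rayOrigin + rayDir * t;
		// It's a cap hit if the struck point is within one radius.
		return Vector3.DistanceSquared(hit, capCentre) <= radius * radius;
	}
}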

[Images: 2008.04.28.04.png, 2008.04.28.05.png]

Texturing the cylinders proved rather difficult. The v component of the texture coordinate can be calculated by working out how far along the cylinder's axis you are - easy. However, wrapping a texture around the rotational axis of the cylinder is a bit more complicated. In the first screenshot, I simply used Math.Atan2(z, x) to get the angle, and hence the texture coordinate - but this only works if the cylinder points along the y axis. If I had another vector that lay perpendicular to the cylinder's axis I could use the dot product to work out its angle, but I don't. The cross product could generate one, but I'd need another vector to cross the axis with... In the end, this post came to my rescue, and I managed to get it working for cylinders pointing along any axis - producing the second screenshot.
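
For the record, one standard way of doing it (possibly not exactly what that post does) is to manufacture the missing perpendicular vector by crossing the axis with whichever world axis it's least aligned with:

using System;
using System.Numerics;

// Sketch: texture coordinates for a cylinder pointing along any axis.
static class CylinderTexturing
{
	public static Vector2 GetTextureCoordinate(Vector3 hit, Vector3 endA, Vector3 endB)
	{
		var axis = Vector3.Normalize(endB - endA);

		// Build two perpendicular reference directions by crossing
		// the axis with a world axis it is not parallel to.
		var helper = MathF.Abs(axis.Y) < 0.99f ? Vector3.UnitY : Vector3.UnitX;
		var u0 = Vector3.Normalize(Vector3.Cross(axis, helper));
		var v0 = Vector3.Cross(axis, u0);

		// v: fraction of the distance along the axis (the easy part).
		float along = Vector3.Dot(hit - endA, axis);
		float v = along / (endB - endA).Length();

		// u: angle around the axis, via the radial part of the hit.
		var radial = (hit - endA) - axis * along;
		float u = MathF.Atan2(Vector3.Dot(radial, v0), Vector3.Dot(radial, u0))
		          / (2f * MathF.PI) + 0.5f;

		return new Vector2(u, v);
	}
}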

An addition I've wanted to make for a while was support for translucent surfaces.

[Image: 2008.04.29.01.png]

This required a few changes to the structure of the raytracer. Previously, all methods for calculating ray intersections had to return a single Collision object, which contained a boolean flag specifying whether the collision was successful. A translucent sphere would need to return two collision points - one as the ray enters the front and one as it leaves the back of its surface. To this end, all collision detection methods now return an array (empty or null if no collisions were made), and each collision has a flag indicating whether it was made by the ray entering or leaving the solid (this is required, for example, to invert surface normals for correct shading).
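
The shape of the new collision record is roughly this (field names invented for illustration):

using System.Numerics;

// Sketch of a collision entry in the returned array.
struct SurfaceCollision
{
	public Vector3 Point;    // where the ray struck the surface
	public Vector3 Normal;   // outward-facing surface normal
	public float Distance;   // length of the ray at this collision
	public bool Leaving;     // true if the ray was exiting the solid

	// Shading wants a normal that faces the incoming ray, so exit
	// collisions invert it.
	public Vector3 ShadingNormal
	{
		get { return Leaving ? -Normal : Normal; }
	}
}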

Once the nearest struck point to the camera has been handled (including recursive reflection-handling code) it is checked to see if it's on a translucent surface. If so, the raytracer continues raytracing away from the camera, and blends the new result with the existing result based on the surface's opacity. By stopping and starting again, one can adjust the direction of the ray - for example, to add a refraction effect (which I have not yet got working, sadly).
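
The blend itself is simple enough - weighting the surface's colour against whatever the continued ray saw, something like this sketch:

// Sketch of the opacity blend; Colour here is a stand-in type with
// 0..1 double channels, not necessarily the project's own.
struct Colour
{
	public double R, G, B;
	public Colour(double r, double g, double b) { R = r; G = g; B = b; }
}

static class Translucency
{
	// opacity = 1 means fully opaque; the remainder of the weight
	// goes to whatever the continued ray saw behind the surface.
	public static Colour Blend(Colour surface, Colour behind, double opacity)
	{
		return new Colour(
			surface.R * opacity + behind.R * (1.0 - opacity),
			surface.G * opacity + behind.G * (1.0 - opacity),
			surface.B * opacity + behind.B * (1.0 - opacity));
	}
}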

One trick the above image misses is that it still simply scales down the intensity of the light by the opacity of the surface it passes through. It would look nicer if the light were coloured by the surface it passes through; so, in the above example, the white light shining through the blue water on the sphere should cast blue-tinted shadows.

[Image: 2008.04.29.02.png]

Whilst it's far from being real-time, I can still make it dump out a sequence of frames to create a basic animation.

[Animation: 2008.04.29.03.gif]

Fixed reflections and proper textures

Sunday, 27th April 2008

Original post by downgraded: "How long does that scene take to render?"

Just over a minute, so not very good performance at all. I've made some changes since then (including multithreading) that drop it down to about 30 seconds.

[Image: 2008.04.26.02.png]
There is definitely something wrong with the reflections.

I decided to rewrite the main raycasting code from scratch, after seeing results such as the above. I'm not sure where the speckles were coming from, nor why the reflections were being calculated incorrectly. The new code writes to regions of an array of integers (for 32-bit ARGB output), and is designed much more simply. By splitting the output buffer into two halves I can perform the raytracing in two threads, which makes much better use of modern dual-core CPUs.
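
The split is as simple as it sounds - something like this sketch, with each thread owning a disjoint half of the buffer so no locking is needed:

using System;
using System.Threading;

// Sketch of the two-thread split over the 32-bit ARGB output buffer.
static class ThreadedRender
{
	// renderRows(buffer, firstRow, lastRow) raytraces the given band.
	public static void Render(int[] buffer, int height,
	                          Action<int[], int, int> renderRows)
	{
		var top    = new Thread(() => renderRows(buffer, 0, height / 2));
		var bottom = new Thread(() => renderRows(buffer, height / 2, height));
		top.Start();
		bottom.Start();
		top.Join();     // wait for both halves to finish
		bottom.Join();
	}
}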

[Images: 2008.04.26.03.png, 2008.04.26.04.png]
Before and after rewrite.

A scene with lots of reflective spheres would seem like a good test. If you look at the reflections in the outer ring of spheres, they're quite different after the rewrite (and now appear to be correct), so whatever was wrong seems to have been fixed.

[Image: 2008.04.26.05.png]
A similar scene to the one at the top of this entry.

A scene with multiple reflective planes no longer appears to have the noise and reflection bugs that were clearly visible in the first screenshot in this entry.

Textures would certainly make the objects look a bit more interesting, but I couldn't think of a simple way of aligning a texture to a surface. I decided that textures should be treated as simple 2D rectangles, and each material can now have a diffuse texture applied to it (which provides a method Colour GetColour(Vector2 coordinate) to read it). To attach the texture to the surface of an object the surface needs to implement ITexturable, which exposes the method Vector2 GetTextureCoordinate(Vector3 surfacePoint).
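
The two contracts look roughly like this (ITexturable and both method signatures are as stated above; the texture-side interface name is my guess):

// Sketch; Vector2, Vector3 and Colour are the project's own maths
// types, and ITexture is a guessed name for the texture-side contract.
interface ITexture
{
	Colour GetColour(Vector2 coordinate);
}

interface ITexturable
{
	Vector2 GetTextureCoordinate(Vector3 surfacePoint);
}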

In short, it's the job of the surface class (such as the Sphere or Plane classes) to map the struck point to a texture coordinate. This is most easily handled with the sphere, which simply converts the cartesian coordinates of the struck point to polar coordinates.
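
The sphere mapping amounts to a couple of lines (a sketch - the axis conventions are guesses):

using System;
using System.Numerics;

// Sketch of the cartesian-to-polar texture mapping for a sphere.
static class SphereTexturing
{
	public static Vector2 GetTextureCoordinate(Vector3 hit, Vector3 centre)
	{
		var p = Vector3.Normalize(hit - centre);
		// Longitude from the angle around the vertical axis,
		// latitude from the height.
		float u = MathF.Atan2(p.Z, p.X) / (2f * MathF.PI) + 0.5f;
		float v = MathF.Acos(p.Y) / MathF.PI;
		return new Vector2(u, v);
	}
}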

[Image: 2008.04.27.02.png]
The small foreground sphere has an Earth texture.

For planes, I thought that the easiest way of aligning the texture would be to declare two vectors - one that represents the texture's X axis and one that represents the texture's Y axis.

For example, take the white wall at the back of the room in the above screenshot. To align a texture parallel to its surface, one could set the texture's X axis vector to point right and its Y axis vector to point down. By changing the magnitude of these vectors the texture can be scaled.
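
Projecting the struck point onto each axis then gives the texture coordinate - dividing by the axis's squared length measures the point in whole multiples of the axis vector, which is what makes the magnitude act as a scale. A sketch:

using System.Numerics;

// Sketch of the two-axis plane mapping described above.
static class PlaneTexturing
{
	public static Vector2 GetTextureCoordinate(Vector3 hit,
	                                           Vector3 textureXAxis,
	                                           Vector3 textureYAxis)
	{
		// Each coordinate is the struck point's position measured
		// in whole multiples of the corresponding axis vector.
		return new Vector2(
			Vector3.Dot(hit, textureXAxis) / textureXAxis.LengthSquared(),
			Vector3.Dot(hit, textureYAxis) / textureYAxis.LengthSquared());
	}
}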

[Image: 2008.04.27.03.png]
The back wall and floor are textured planes.

For the floor in the above image, the texture's X axis points right, and its Y axis points into the screen.

As the texture merely has to provide a method that takes in a texture coordinate and outputs a colour, this lets us declare simple procedural textures.

[Image: 2008.04.27.04.png]
The floor and ceiling textures are procedurally generated.

The rather garish ceiling is declared like this in code:

this.Tracer.Objects.Add(new WorldObject() {
	Surface = new Plane(Vector3.Down, 10.0d) {
		TextureXAxis = Vector3.Right,
		TextureYAxis = Vector3.Forward,
	},
	Material = new Material() {
		Colour = Colour.White,
		Texture = new ProceduralTexture(
			p => new Colour(
				1.0d,
				(Math.Sin(p.X) * Math.Cos(p.Y * 2)) / 2.0d + 0.5d,
				(Math.Cos(p.X) * Math.Sin(p.Y * 3)) / 2.0d + 0.5d,
				(Math.Sin(p.X * 5) * Math.Sin(p.Y / 0.3d)) / 2.0d + 0.5d
			)
		),
	},
});

I think before I go any further I'm going to need to support a wider variety of surfaces than spheres and planes. Another limitation of the existing implementation is that only a single collision between a ray and a surface is reported, which limits what can be done with the renderer - for example, a glass sphere that refracts a ray passing through it would need to report two collisions, one for the front of the sphere as the ray enters and one for the back as it leaves.

Raytraced shadows, reflections and chessboards

Saturday, 26th April 2008

I thought that better lighting might help the scene look a bit nicer, so I removed all the existing lighting code (and the reflection code, to make life easier) and tried to add some basic shadowing.

[Image: 2008.04.25.01.png]
Simple shadowing test.

When a ray's intersection with the world is found, a ray is cast back from that point towards the light source. If this ray collides with another object on its return trip to the light, it's assumed that it's in the shade. In the above test, points in shade simply had their diffuse colour divided by two. This still looks rather flat, though.

As we know the surface normal of the surface that has been struck and the direction of the ray that's going between the struck point and the light source, we can work out how much the surface point is facing the light by taking the dot product of the two vectors. When multiplied by the surface's diffuse colour, this results in much smoother lighting.

[Image: 2008.04.25.02.png]
Each colour value is multiplied by the dot product of the light beam and the surface normal.

That's better, but the scene would benefit from multiple lights. I start with a running total colour (initially black), then iterate over a list of lights. If the struck point has a clear line of sight to the light, I add the surface's diffuse colour multiplied by the light's colour to this running total. This results in a much more interesting-looking scene.
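
Put together, the shading loop looks something like this sketch (Vector3 doubling as an RGB colour; the PointLight shape and the occlusion delegate are invented for illustration):

using System;
using System.Collections.Generic;
using System.Numerics;

// Sketch of the running-total lighting loop with shadow tests.
struct PointLight
{
	public Vector3 Position;
	public Vector3 Colour;  // RGB in 0..1
}

static class Shading
{
	// 'occluded' is the shadow test: does anything block the ray
	// cast from the struck point back to the light?
	public static Vector3 Light(Vector3 point, Vector3 normal, Vector3 diffuse,
	                            IEnumerable<PointLight> lights,
	                            Func<Vector3, Vector3, bool> occluded)
	{
		var total = Vector3.Zero;  // running total, initially black
		foreach (var light in lights)
		{
			if (occluded(point, light.Position)) continue;  // in shade

			// Lambert term: how squarely the surface faces the light.
			var toLight = Vector3.Normalize(light.Position - point);
			float facing = MathF.Max(0f, Vector3.Dot(normal, toLight));

			total += diffuse * light.Colour * facing;
		}
		return total;
	}
}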

[Image: 2008.04.25.03.png]
Multiple lighting test - two white and one red light, all in different positions.

The edges of the surfaces are rather ugly and noisy, probably due to rounding errors. They are helped if I offset the pixel coordinates by 0.5 (so the rays are shot through the centre of pixels, rather than the top-left corner), but proper supersampling would probably look better.
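
Proper supersampling is only a few lines on top of the existing single-ray code - a sketch, with 'trace' standing in for shooting and shading one ray, and Vector3 doubling as a colour:

using System;
using System.Numerics;

// Sketch of 4x supersampling: four rays per pixel, averaged.
static class Supersampling
{
	public static Vector3 SamplePixel(int x, int y,
	                                  Func<double, double, Vector3> trace)
	{
		var sum = Vector3.Zero;
		// Place the four samples evenly inside the pixel rather
		// than at its top-left corner.
		foreach (var dx in new[] { 0.25, 0.75 })
			foreach (var dy in new[] { 0.25, 0.75 })
				sum += trace(x + dx, y + dy);
		return sum / 4f;
	}
}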

[Image: 2008.04.25.04.png]
Rendering with 4x supersampling.

It does, thankfully, at the expense of making the rendering time four times longer!

[Image: 2008.04.25.05.png]
Reintroduction of reflective surface support.

I reintroduced the reflective surfaces - those reflections don't look quite right to me, but I can't really tell. More detail in the world might make it easier, so I'd like to add some sort of texturing.

[Image: 2008.04.25.06.png]
The obligatory chessboard texture.

I'm undecided on how to handle mapping struck points to texture coordinates. For the moment I'm just XORing together the x, y and z components - if the least significant bit of the result is zero, I return half of the diffuse colour, otherwise the full diffuse colour.
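
In sketch form, the XOR trick is:

using System;

// Sketch of the XOR chessboard: a 3D checkerboard from the integer
// parts of the struck point's components (Floor keeps negative
// coordinates well-behaved).
static class Chessboard
{
	// Returns the multiplier for the diffuse colour at (x, y, z).
	public static double Shade(double x, double y, double z)
	{
		int cell = (int)Math.Floor(x) ^ (int)Math.Floor(y) ^ (int)Math.Floor(z);
		return (cell & 1) == 0 ? 0.5 : 1.0;  // half or full diffuse
	}
}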

Raytracing - Beware of the coder colours

Friday, 25th April 2008

As much as I claim to be interested in software rendering (be it as part of a game engine or as an effect in a demo), I've never actually written a raytracer. Having written some basic vector and plane arithmetic code for physics in the XNA Quake project, I thought I'd give it a stab.

[Image: SpheresAndPlanes.png]
No apologies made for the coder colours.

Currently, the world is just a simple List<WorldObject>, where each WorldObject has a Surface and Material property. The Surface has to implement IRayCollidable, which lets me call GetCollision(Ray) on it to find out where a ray strikes it (if at all), returning the point of collision and the normal of the surface that was hit. Currently, there are only two types that implement this interface - Plane and Sphere - but they'll do for testing.
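
The interface amounts to this (only GetCollision(Ray) and the returned point, normal and flag are as stated - the exact shapes are guesses):

using System.Numerics;

// Sketch of the collision contract described above.
struct Ray
{
	public Vector3 Origin, Direction;
}

struct Collision
{
	public bool Hit;        // whether the ray struck the surface at all
	public Vector3 Point;   // where it struck
	public Vector3 Normal;  // the surface normal at that point
}

interface IRayCollidable
{
	Collision GetCollision(Ray ray);
}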

For each ray, I iterate over the list of items in the world and grab the collision point. If a collision is made, I add the details to another list (including the total length of the ray at this point) and, if the surface's material is marked as reflective (ie, has a Reflectivity property greater than zero), I reflect the ray against the surface normal and cast again - recursively, so it's very easy to cause a StackOverflowException when two shiny surfaces are parallel to each other.

Once I have a record of all the collisions, I sort them in back-to-front order based on the length of the ray, then iterate over them, blending the colours as I go (so a reflection in a green surface ends up being green tinted).
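
For comparison, here's a compact recursive sketch of the same idea (the code described above instead records every collision and blends afterwards); the MaxDepth cap is the usual guard against the parallel-mirrors overflow, not something the current code has:

using System;
using System.Numerics;

// Sketch of depth-limited recursive reflection; 'cast' stands in for
// finding and shading the nearest collision, and Vector3 doubles as
// an RGB colour.
static class ReflectionTrace
{
	const int MaxDepth = 8;

	public static Vector3 Trace(
		Vector3 origin, Vector3 direction, int depth,
		Func<Vector3, Vector3, (bool hit, Vector3 point, Vector3 normal,
		                        Vector3 colour, float reflectivity)> cast)
	{
		var c = cast(origin, direction);
		if (!c.hit) return Vector3.Zero;  // background

		if (depth >= MaxDepth || c.reflectivity <= 0f)
			return c.colour;              // stop bouncing

		// Mirror the ray about the surface normal and continue,
		// nudging the origin off the surface to avoid self-hits.
		var reflected = Vector3.Reflect(direction, c.normal);
		var behind = Trace(c.point + c.normal * 1e-4f, reflected, depth + 1, cast);

		// Tint the reflection by blending with the local colour.
		return Vector3.Lerp(c.colour, behind, c.reflectivity);
	}
}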

[Image: LitSpheresAndPlanes.jpg]
Marginally less garish.

To try and get a better sense of the 3D scene, I added a simple directional light. This simply takes the dot product of the hit surface normal and the light's direction, then multiplies it by the material's diffuse colour. The above screenshot has a light pointing directly away from the camera, hence the upper and left walls are completely black (however, the bottom and right walls, being reflective, are partially visible).

I'm trying to do this without looking up the correct way of doing it, experimenting as I go - mainly in an attempt to patch up my rather poor handle on 3D maths and collision detection.
