Fixed reflections and proper textures

Sunday, 27th April 2008

Original post by downgraded
How long does that scene take to render?

Just over a minute, so not very good performance at all. I've made some changes since then (including multithreading) that drop it down to about 30 seconds.

[Image: 2008.04.26.02.png]
There is definitely something wrong with the reflections.

I decided to rewrite the main raycasting code from scratch, after seeing results such as the above. I'm not sure where the speckles were coming from, nor why the reflections were being calculated incorrectly. The new code writes to regions of an array of integers (for 32-bit ARGB output), and is designed much more simply. By splitting the output buffer into two halves I can perform the raytracing in two threads, which makes much better use of modern dual-core CPUs.
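
In sketch form, the threading split looks something like this; RenderRegion, width and height are placeholder names rather than the actual code:

using System.Threading;

// 32-bit ARGB output: one packed int per pixel.
int[] buffer = new int[width * height];
int halfHeight = height / 2;

// Each thread traces its own horizontal band of the buffer; the two bands
// never overlap, so no locking is required.
var topHalf = new Thread(() => RenderRegion(buffer, 0, halfHeight));
var bottomHalf = new Thread(() => RenderRegion(buffer, halfHeight, height));
topHalf.Start();
bottomHalf.Start();
topHalf.Join();
bottomHalf.Join();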

[Images: 2008.04.26.03.png, 2008.04.26.04.png]
Before and after rewrite.

A scene with lots of reflective spheres seemed like a good test. If you look at the reflections in the outer ring of spheres, they're quite different now (and appear to be correct), so whatever was wrong seems to have been fixed.

[Image: 2008.04.26.05.png]
A similar scene to the one at the top of this entry.

A scene with multiple reflective planes no longer appears to have the noise and reflection bugs that were clearly visible in the first screenshot in this entry.

Textures would certainly make the objects look a bit more interesting, but I couldn't think of a simple way of aligning a texture to a surface. I decided that textures should be treated as simple 2D rectangles, and each material can now have a diffuse texture applied to it (which provides a method Colour GetColour(Vector2 coordinate) to read it). To attach the texture to the surface of an object, the surface needs to implement ITexturable, which exposes the method Vector2 GetTextureCoordinate(Vector3 surfacePoint).
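
In code, the shape of that API is roughly as follows; the method signatures are as described above, though whether Texture is an abstract class or an interface in the real code is an assumption:

public interface ITexturable {
	// Maps a point on the object's surface to a 2D texture coordinate.
	Vector2 GetTextureCoordinate(Vector3 surfacePoint);
}

public abstract class Texture {
	// Returns the colour of the texture at the given coordinate.
	public abstract Colour GetColour(Vector2 coordinate);
}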

In short, it's the job of the surface class (such as Sphere or Plane) to map the struck point to a texture coordinate. This is most easily handled with the sphere, which simply converts the Cartesian coordinates of the struck point to polar coordinates.
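
A sketch of the sphere's mapping, assuming a Centre property and a Normalised method; the exact formula in the real code may differ:

public Vector2 GetTextureCoordinate(Vector3 surfacePoint) {
	// Direction from the sphere's centre to the struck point.
	Vector3 d = (surfacePoint - this.Centre).Normalised();
	// Longitude and latitude, scaled into the 0..1 range.
	double u = 0.5d + Math.Atan2(d.Z, d.X) / (2.0d * Math.PI);
	double v = 0.5d - Math.Asin(d.Y) / Math.PI;
	return new Vector2(u, v);
}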

[Image: 2008.04.27.02.png]
The small foreground sphere has an Earth texture.

For planes, I thought that the easiest way of aligning the texture would be to declare two vectors - one that represents the texture's X axis and one that represents the texture's Y axis.

For example, take the white wall at the back of the room in the above screenshot. To align a texture parallel to its surface, one could set the texture's X axis vector to point right and its Y axis vector to point down. By changing the magnitude of these vectors the texture can be scaled.
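
A sketch of the plane's mapping, assuming a Vector3.Dot helper (the TextureXAxis and TextureYAxis properties appear in the code sample further down):

public Vector2 GetTextureCoordinate(Vector3 surfacePoint) {
	// Project the struck point onto each texture axis; the axis
	// magnitudes therefore control the texture's scale.
	double u = Vector3.Dot(surfacePoint, this.TextureXAxis);
	double v = Vector3.Dot(surfacePoint, this.TextureYAxis);
	return new Vector2(u, v);
}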

[Image: 2008.04.27.03.png]
The back wall and floor are textured planes.

For the floor in the above image, the texture's X axis points right, and its Y axis points into the screen.

As a texture merely has to provide a method that takes a texture coordinate and returns a colour, we can also declare simple procedural textures.

[Image: 2008.04.27.04.png]
The floor and ceiling textures are procedurally generated.

The rather garish ceiling is declared like this in code:

this.Tracer.Objects.Add(new WorldObject() {
	Surface = new Plane(Vector3.Down, 10.0d) {
		TextureXAxis = Vector3.Right,
		TextureYAxis = Vector3.Forward,
	},
	Material = new Material() {
		Colour = Colour.White,
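		// Each channel below is a sine/cosine wave over the texture coordinate, scaled into the 0..1 range.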
		Texture = new ProceduralTexture(
			p => new Colour(
				1.0d,
				(Math.Sin(p.X) * Math.Cos(p.Y * 2)) / 2.0d + 0.5d,
				(Math.Cos(p.X) * Math.Sin(p.Y * 3)) / 2.0d + 0.5d,
				(Math.Sin(p.X * 5) * Math.Sin(p.Y / 0.3d)) / 2.0d + 0.5d
			)
		),
	},
});

I think before I go any further I'm going to need to support a wider variety of surfaces than spheres and planes. Another limitation of the existing implementation is that only a single collision between a ray and a surface is reported, which limits what can be done with the renderer. For example, a glass sphere that refracts a ray passing through it would need to report two collisions: one for the front of the sphere as the ray enters, and one for the back as it leaves.
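
Something along these lines would do it, though this is speculation about a future design rather than anything implemented (Collision is the record type returned by the current interface):

using System.Collections.Generic;

public interface IRayCollidable {
	// Every intersection along the ray, not just the nearest one, so that
	// effects like refraction can see both sides of a surface.
	IEnumerable<Collision> GetCollisions(Ray ray);
}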

Raytraced shadows, reflections and chessboards

Saturday, 26th April 2008

I thought that better lighting might help the scene look a bit nicer, so I removed all the existing lighting code (and the reflection code, to make life easier) and tried to add some basic shadowing.

[Image: 2008.04.25.01.png]
Simple shadowing test.

When a ray's intersection with the world is found, a second ray is cast from that point towards the light source. If this ray collides with another object on its way to the light, the point is assumed to be in shade. In the above test, points in shade simply had their diffuse colour divided by two. This still looks rather flat, though.

As we know the surface normal at the struck point and the direction of the ray running between the struck point and the light source, we can work out how directly the surface faces the light by taking the dot product of the two vectors. Multiplying this by the surface's diffuse colour results in much smoother lighting.
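
Put together, the shading for a single light looks roughly like this sketch; IsOccluded, Light and the Colour arithmetic are assumed names, not the real API:

Colour Shade(Vector3 point, Vector3 normal, Material material, Light light) {
	Vector3 toLight = (light.Position - point).Normalised();
	// Shadow test: cast a ray back towards the light and see if anything is in the way.
	if (IsOccluded(new Ray(point, toLight), light)) return material.Colour * 0.5d;
	// Lambertian term: how directly the surface faces the light.
	double lambert = Math.Max(0.0d, Vector3.Dot(normal, toLight));
	return material.Colour * lambert;
}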

[Image: 2008.04.25.02.png]
Each colour value is multiplied by the dot product of the light beam and the surface normal.

That's better, but would look better still with multiple lights. I start with a running total colour (initially black), then iterate over a list of lights. If the struck point has a clear line of sight to a light, I add the surface's diffuse colour multiplied by that light's colour to the running total. This results in a much more interesting-looking scene.
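
Extending the earlier sketch to multiple lights (again with assumed names):

Colour total = Colour.Black;
foreach (Light light in lights) {
	Vector3 toLight = (light.Position - point).Normalised();
	// Skip lights that are blocked by another object.
	if (IsOccluded(new Ray(point, toLight), light)) continue;
	double lambert = Math.Max(0.0d, Vector3.Dot(normal, toLight));
	// Accumulate this light's contribution into the running total.
	total += material.Colour * light.Colour * lambert;
}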

[Image: 2008.04.25.03.png]
Multiple lighting test - two white and one red light, all in different positions.

The edges of the surfaces are rather ugly and noisy, probably due to rounding errors. They are improved if I offset the pixel coordinates by 0.5 (so that rays are shot through the centres of pixels rather than their top-left corners), but proper supersampling would probably look better.
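
4x supersampling amounts to shooting four rays per pixel through the centres of a 2x2 subgrid and averaging the results. As a sketch, with TraceRay and CreateCameraRay as placeholder names and x and y as the pixel coordinates:

Colour pixel = Colour.Black;
for (int sy = 0; sy < 2; ++sy) {
	for (int sx = 0; sx < 2; ++sx) {
		// Subpixel centres land at 0.25 and 0.75 within the pixel.
		double u = x + (sx + 0.5d) / 2.0d;
		double v = y + (sy + 0.5d) / 2.0d;
		pixel += TraceRay(CreateCameraRay(u, v));
	}
}
pixel *= 0.25d; // average the four samples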

[Image: 2008.04.25.04.png]
Rendering with 4x supersampling.

It does, thankfully, at the expense of quadrupling the rendering time!

[Image: 2008.04.25.05.png]
Reintroduction of reflective surface support.

I reintroduced the reflective surfaces. Those reflections don't look quite right to me, but I can't really tell; more detail in the world might make it easier to judge, so I'd like to add some sort of texturing.

[Image: 2008.04.25.06.png]
The obligatory chessboard texture.

I'm undecided on how to handle mapping struck points to texture coordinates. For the moment I'm just XORing the x, y and z components together: if the least significant bit of the result is zero, I return half of the diffuse colour, otherwise the full diffuse colour.
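
As a sketch (assuming the components are first truncated to integers, which the real code may do differently):

Colour GetChessboardColour(Vector3 p, Colour diffuse) {
	int x = (int)Math.Floor(p.X), y = (int)Math.Floor(p.Y), z = (int)Math.Floor(p.Z);
	// Even XOR result: dark square; odd: light square.
	return (((x ^ y ^ z) & 1) == 0) ? diffuse * 0.5d : diffuse;
}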

Raytracing - Beware of the coder colours

Friday, 25th April 2008

As much as I claim to be interested in software rendering (be it as part of a game engine or as an effect in a demo), I've never actually written a raytracer. Having written some basic vector and plane arithmetic code for physics in the XNA Quake project, I thought I'd give it a stab.

[Image: SpheresAndPlanes.png]
No apologies made for the coder colours.

Currently, the world is just a simple List<WorldObject>, where each WorldObject has a Surface and a Material property. The Surface has to implement IRayCollidable, which lets me call GetCollision(Ray) on it to find out where a ray strikes it (if at all), returning the point of collision and the normal of the surface that was hit. At the moment there are only two types that implement this interface - Plane and Sphere - but they'll do for testing.
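
The interface is tiny; it looks something like this, though the exact shape of the collision record is an assumption:

public interface IRayCollidable {
	// Returns the nearest intersection with the ray, or null if it misses.
	Collision GetCollision(Ray ray);
}

public class Collision {
	public Vector3 Point;   // where the ray struck the surface
	public Vector3 Normal;  // the surface normal at that point
}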

For each ray, I iterate over the list of items in the world and grab the collision point. If a collision is made, I add the details to another list (including the total length of the ray at this point) and, if the surface's material is marked as reflective (i.e. has a Reflectivity property greater than zero), I reflect the ray against the surface normal and cast again. This happens recursively, so it's very easy to cause a StackOverflowException when two shiny surfaces are parallel to each other.
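
Reflecting the ray is the standard formula r = d - 2(d.n)n; as code:

Vector3 Reflect(Vector3 direction, Vector3 normal) {
	// Assumes the normal is unit length.
	return direction - normal * (2.0d * Vector3.Dot(direction, normal));
}

(A maximum recursion depth is the usual guard against the parallel-mirror stack overflow.)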

Once I have a record of all the collisions, I sort them in back-to-front order based on the length of the ray, then iterate over them, blending the colours as I go (so a reflection in a green surface ends up being green tinted).
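
A sketch of that blend, where the Colour arithmetic and the blend rule itself are assumptions about details not spelled out above:

// Sort so the farthest collision comes first.
collisions.Sort((a, b) => b.RayLength.CompareTo(a.RayLength));
Colour result = Colour.Black;
foreach (var hit in collisions) {
	// Mix the colour accumulated so far into this surface's colour,
	// so reflections pick up the reflecting surface's tint.
	result = hit.Colour * (1.0d - hit.Reflectivity) + result * hit.Colour * hit.Reflectivity;
}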

[Image: LitSpheresAndPlanes.jpg]
Marginally less garish.

To try and get a better sense of the 3D scene, I added a simple directional light. This simply takes the dot product of the hit surface normal and the light's direction, then multiplies it by the material's diffuse colour. The above screenshot has a light pointing directly away from the camera, hence the upper and left walls are completely black (however, the bottom and right walls, being reflective, are partially visible).

I've been trying to do this without looking up the correct way of doing it, experimenting as I go - mainly in an attempt to patch up my rather poor grasp of 3D maths and collision detection.

SC-3000 keyboard and a final release

Sunday, 20th April 2008

The latest addition to Cogwheel is SC-3000 keyboard emulation.

[Image: SC3000Keyboard.png]

The SC-3000 was a home computer with similar hardware to the SG-1000 console, with the main addition of a keyboard. Software cartridges could add, for example, BASIC programming capabilities.

Due to lack of time and motivation, and the fact that the emulator is pretty much as good as I'm going to get it at this moment in time, I've removed the beta label and uploaded the latest version to its website.

3D glasses and CPU cycle counting

Monday, 7th April 2008

I reintroduced joystick support to the emulator front-end over the weekend, using a much more sensible input manager. The control editor form uses the same interface design for keyboard and joysticks - you click the button you wish to edit, it appears pressed, you press the key (or joystick button) you wish to bind to it and it pops back out again. It'll also check to see if you've moved the joystick in a particular direction on any of its reported axes.

Another feature I added was better support for games that used the 3D glasses, using a simple red-cyan anaglyph output blender.

[Images: MazeHunter3D.png, PoseidonWars3D.png]

[Images: BladeEagle3D.png, SpaceHarrier3D.png]

If the memory address range that the glasses respond to has been written to within the last three frames, the emulator switches to the anaglyph blending mode.

Irritatingly, one glaring bug fix managed to lower overall compatibility. Some instructions weren't being timed correctly (or at all), meaning that the emulator was executing too many instructions for the number of clock cycles it was asked to run. During each video scanline, 228 CPU cycles are executed. Upping this to 248 cycles (just for experimentation) fixes all known bugs, including the long-standing flickering pixels at the top of the screen in Cosmic Spacehead and the display corruption in GP Rider. (However, digitised speech plays at a noticeably higher pitch.)

I'm not entirely sure why this is - it could be that some instructions are claiming to take too long, or it could be yet another interrupt problem. To call this mildly frustrating is a bit of an understatement!
