Tuesday, 7th August 2007

Following sirob's prompting, I dropped the BasicEffect for rendering and rolled my own effect. After seeing the things that could be done with them (pixel and vertex shaders) I'd assumed they'd be hard to put together, and that I'd need to change my code significantly.

In reality all I've had to do is copy and paste the sample from the SDK documentation, load it into the engine (via the content pipeline), create a custom vertex declaration to handle two sets of texture coordinates (diffuse and lightmap) and strip out all of the duplicate code I had for creating and rendering from two vertex arrays.

2007.08.06.01.jpg   2007.08.06.02.jpg

public struct VertexPositionTextureDiffuseLightMap {

	public Xna.Vector3 Position;
	public Xna.Vector2 DiffuseTextureCoordinate;
	public Xna.Vector2 LightMapTextureCoordinate;

	public VertexPositionTextureDiffuseLightMap(Xna.Vector3 position, Xna.Vector2 diffuse, Xna.Vector2 lightMap) {
		this.Position = position;
		this.DiffuseTextureCoordinate = diffuse;
		this.LightMapTextureCoordinate = lightMap;
	}

	public readonly static VertexElement[] VertexElements = new VertexElement[] {
		new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
		new VertexElement(0, 12, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
		new VertexElement(0, 20, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 1)
	};
}

uniform extern float4x4 WorldViewProj : WORLDVIEWPROJECTION;

uniform extern texture DiffuseTexture;
uniform extern texture LightMapTexture;

uniform extern float Time;

struct VS_OUTPUT {
    float4 Position : POSITION;
    float2 DiffuseTextureCoordinate : TEXCOORD0;
    float2 LightMapTextureCoordinate : TEXCOORD1;
};

sampler DiffuseTextureSampler = sampler_state {
    Texture = <DiffuseTexture>;
    mipfilter = LINEAR;
};

sampler LightMapTextureSampler = sampler_state {
    Texture = <LightMapTexture>;
    mipfilter = LINEAR;
};

VS_OUTPUT Transform(float4 Position : POSITION, float2 DiffuseTextureCoordinate : TEXCOORD0, float2 LightMapTextureCoordinate : TEXCOORD1) {

    VS_OUTPUT Out = (VS_OUTPUT)0;

    Out.Position = mul(Position, WorldViewProj);
    Out.DiffuseTextureCoordinate = DiffuseTextureCoordinate;
    Out.LightMapTextureCoordinate = LightMapTextureCoordinate;

    return Out;
}

float4 ApplyTexture(VS_OUTPUT vsout) : COLOR {
    float4 DiffuseColour = tex2D(DiffuseTextureSampler, vsout.DiffuseTextureCoordinate).rgba;
    float4 LightMapColour = tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).rgba;
    return DiffuseColour * LightMapColour;
}

technique TransformAndTexture {
    pass P0 {
        vertexShader = compile vs_2_0 Transform();
        pixelShader  = compile ps_2_0 ApplyTexture();
    }
}

Of course, now I have that up and running I might as well have a play with it...


By averaging the individual RGB components of the lightmap texture (adding them up and dividing by three) you can simulate the monochromatic lightmaps used by Quake 2's software renderer. Sadly, I know of no technique to go the other way and provide colourful lightmaps for Quake 1. Not very interesting, though.
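The shader does this per texel; a quick CPU-side Python sketch of the same averaging (the sample texel values here are made up):

```python
def monochrome(rgb):
    """Collapse an RGB lightmap texel to a single brightness by
    averaging the three components, then use that brightness for
    all three channels - the Quake 2 software-renderer look."""
    level = sum(rgb) / 3.0
    return (level, level, level)

# A warm orange texel collapses to a flat mid grey:
print(monochrome((200, 100, 60)))  # -> (120.0, 120.0, 120.0)
```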


I've always wanted to do something with pixel shaders as you get to play with tricks that are a given in software rendering with the speed of dedicated hardware acceleration. I get the feeling that the effect (or a variation of it, at least) will be handy for watery textures.

float4 ApplyTexture(VS_OUTPUT vsout) : COLOR {
	float2 RippledTexture = vsout.DiffuseTextureCoordinate;
	RippledTexture.x += sin(vsout.DiffuseTextureCoordinate.y * 16 + Time) / 16;
	RippledTexture.y += sin(vsout.DiffuseTextureCoordinate.x * 16 + Time) / 16;
	float4 DiffuseColour = tex2D(DiffuseTextureSampler, RippledTexture).rgba;
	float4 LightMapColour = tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).rgba;
	return DiffuseColour * LightMapColour;
}
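The displacement maths is easier to see out of shader syntax; a Python sketch of the same per-pixel calculation (16 controls both the ripple frequency and, as the divisor, the amplitude):

```python
import math

def ripple(u, v, time):
    """Offset each texture coordinate by a sine wave driven by the
    *other* coordinate plus time - the same maths as the pixel
    shader's RippledTexture calculation."""
    ru = u + math.sin(v * 16 + time) / 16
    rv = v + math.sin(u * 16 + time) / 16
    return ru, rv
```

Because sin() is bounded, each coordinate is never pushed more than 1/16 away from where it started, so the texture wobbles rather than tears.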

My code is no doubt suboptimal (and downright stupid).

Naturally, I needed to try and duplicate Scet's software rendering simulation trick.

The colour map (gfx/colormap.lmp) is a 256×64 array of bytes. Each byte is an index to a colour palette entry; the X axis is the colour and the Y axis is the brightness, i.e. RGBColour = Palette[ColourMap[DiffuseColour, Brightness]]. I cram the original diffuse colour palette index into the (unused) alpha channel of the ARGB texture, and leave the lightmaps untouched.

float4 ApplyTexture(VS_OUTPUT vsout) : COLOR {
	float2 LookUp = 0;
	LookUp.x = tex2D(DiffuseTextureSampler, vsout.DiffuseTextureCoordinate).a;
	LookUp.y = (1 - tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).r) / 4;
	return tex2D(ColourMapTextureSampler, LookUp);
}
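On the CPU the same lookup is just two array indexes. A hypothetical Python sketch, with a made-up greyscale palette and a simple linear fade standing in for Quake's real data, purely to show the indexing order (64 brightness rows of 256 colour indices):

```python
# Made-up stand-ins for gfx/palette.lmp and gfx/colormap.lmp:
PALETTE = [(i, i, i) for i in range(256)]           # 256 RGB entries
COLOUR_MAP = [[(c * (63 - b)) // 63 for c in range(256)]
              for b in range(64)]                   # 64 rows x 256 columns

def shade(colour_index, brightness):
    """RGBColour = Palette[ColourMap[DiffuseColour, Brightness]]."""
    return PALETTE[COLOUR_MAP[brightness][colour_index]]

print(shade(200, 0))   # brightest row: the colour survives intact
print(shade(200, 63))  # darkest row: everything fades to black
```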

2007.08.06.06.jpg   2007.08.06.07.jpg

2007.08.06.08.jpg   2007.08.06.09.jpg

As I'm not loading the mip-maps (and am letting Direct3D handle generation of mip-maps for me) I have to disable mip-mapping for the above to work, as otherwise you'd end up with non-integral palette indices. The results are therefore a bit noisier in the distance than in vanilla Quake, but I like the 8-bit palette look. At least the fullbright colours work.
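Why filtering breaks palettised lookups can be shown with a toy Python sketch (the palette entries here are made up):

```python
# Toy palette: adjacent indices can map to completely unrelated colours.
palette = {10: (255, 0, 0), 11: (0, 255, 0)}  # red, then bright green

# Bilinear/mip filtering averages the sampled *values*; with a
# palettised texture those values are indices, so blending two
# texels produces an index that points at an arbitrary entry
# (or no entry at all), not a blend of the two colours:
filtered_index = (10 + 11) / 2  # 10.5 -- meaningless as a palette index

# Point sampling keeps indices integral, so every lookup stays valid:
point_sampled = palette[10]     # (255, 0, 0)
```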
