Integrating the dsPIC33 VDC with the Z80 computer

Saturday, 31st July 2010

The ultimate goal for the video display controller module I have been working on is to drive the display in my Z80 computer project. As I have now got a pretty good set of features I thought it would be a good idea to join the two projects together.

Z80 computer with dsPIC33 VDC

The big board in the lower middle of the above photograph is the main body of the computer, including the Z80, its RAM, the ATmega644P that is used to handle I/O, an SD card for storage and a DS1307 real-time clock. The small board in the bottom left of the photo is the power supply (supplying both 5V and 3.3V) and clock generator (providing 20MHz and 10MHz clocks).

At the top of the photo is the video display controller, connected to a 320×240 graphical LCD. A pin header is used to connect this VDC board to the rest of the computer. Three pins are required for power; 0V, 3.3V (dsPIC33 and output buffer) and 5V (LCD). The VDC is connected to the computer's ATmega644P I/O controller using the two-wire I2C bus (the same bus that is used to access the DS1307 clock). Rather than run a series of graphical demos, the VDC now waits for commands to be written to the I2C slave address 0xEE, which it acts on to control what is shown on the screen. I'm aiming for these commands to work in roughly the same way as they did on the BBC Micro VDU, which should make porting the enhanced TI-83+ version of BBC BASIC to this computer a bit easier. The BBC Micro's VDU could be accessed by calling OSWRCH (assuming it was being used as the current output stream), which typically has an address of &FFEE — hence my choice of 0xEE as the I2C slave address!
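
For illustration, this is roughly what writing a VDU command to the VDC looks like from an I2C master. The TWI register sequence is standard ATmega fare, but the helper names, the 20MHz clock assumption and the example command bytes are illustrative rather than lifted from the actual firmware.

#include <avr/io.h>
#include <stdint.h>

#define VDC_ADDRESS 0xEE /* I2C slave (write) address of the VDC */

/* Block until the TWI hardware has finished its current operation. */
static void twi_wait(void) {
    while (!(TWCR & (1 << TWINT)));
}

/* Send a sequence of VDU bytes to the VDC in one I2C write transaction. */
static void vdc_write(const uint8_t *data, uint8_t length) {
    TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN); /* START condition */
    twi_wait();
    TWDR = VDC_ADDRESS;                               /* SLA+W */
    TWCR = (1 << TWINT) | (1 << TWEN);
    twi_wait();
    while (length--) {
        TWDR = *data++;                               /* data byte */
        TWCR = (1 << TWINT) | (1 << TWEN);
        twi_wait();
    }
    TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN); /* STOP condition */
}

int main(void) {
    TWBR = 92; /* ~100kHz SCL assuming a 20MHz F_CPU: 20MHz / (16 + 2 * 92) */
    /* VDU 25,69,100;100; - BBC-style PLOT of a point at (100,100);
       the 16-bit coordinates are sent low byte first. */
    uint8_t plot_point[] = { 25, 69, 100, 0, 100, 0 };
    vdc_write(plot_point, sizeof(plot_point));
    for (;;);
}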

Detail of the LCD connected to the VDC

A handful of these VDU commands have been implemented, which is sufficient to run simple CP/M software. The generic CP/M version of BBC BASIC naturally does not support any hardware-specific features, and as such lacks advanced text and drawing support; one can send commands directly to the output stream with the VDU statement (for example, VDU 25,69,100;100; plots a single point at (100,100)) but this isn't very user-friendly. I will need to work on this now that the hardware is coming together! The current VDC code can be downloaded here if you are interested in the changes that have been made.

The above photo shows the newly constructed VDC hardware. All of my previous projects have been assembled on stripboard; as the projects have become more complex or simply smaller I've found stripboard increasingly awkward to work with. ICs can only really be orientated in one direction, and to reduce the size of circuits I've had to start cutting the tracks between holes (rather than the usual method, which is to drill out an entire hole). The supplier I normally acquire parts from, Bitsbox, recently added three different sizes of perfboard to their catalogue, so I thought I'd give it a go. I've found it much more pleasant to work with than stripboard, though not as easy to correct if you make a mistake and need to desolder a connection. You can certainly perform some interesting space-saving tricks on the underside of the board!

The underside of the VDC showing the soldering technique

The Kynar insulation on the wire I have switched to also has the advantage of not melting when heated with a soldering iron; in previous projects I've had tightly-spaced wires end up shorted together as the insulation between them melted.

I have mentioned that one pin header is used to connect the VDC to the computer. There are three others on the board: the two-pin one is for the composite video output, the six-pin one is for connection to a PICkit to reprogram the dsPIC, and the four-pin one is for the VGA output.

Detail of VGA output from the VDC

Now that I have moved the VDC onto a permanent circuit board I feel that I can start moving the rest of the computer in the same direction. The software is far from complete and the hardware is pretty rudimentary but it does basically work and having a more robust system to work on should make life a bit easier.

VGA output for the dsPIC33 VDC

Sunday, 25th July 2010

I have spent quite a while working on different projects that generate PAL video signals in software. This may seem a bit odd if you consider the fact that I don't own a TV, so I tend to rely on a video capture card or VGA box to see the output of these projects on a computer monitor — something I do have a fair number of.

This reliance on another piece of technology between my project and the display device is not something I'm too keen on, so I have spent some time adding native 640×480 60Hz VGA output to my dsPIC33 video display controller.

VGA monitor showing the output of the dsPIC33 VDC

Another advantage of using a VGA monitor directly is that individual pixels are shown very crisply, unlike my video capture card or VGA box, which tend to blur the image horizontally. This is shown in the zoomed-in part of the above photo.

Generating a video signal for a VGA monitor is easier than generating a composite video signal for a PAL TV, as there are distinct pins for the image data, horizontal sync and vertical sync. One problem I did have, however, was with the length of the vertical sync pulse. I started with a very brief pulse (the same duration as the horizontal sync pulse) which worked fine with my old analogue CRT monitors but didn't work at all with my modern LCD monitor. The documentation I was using for timing information indicated that there were "two scanlines" for vertical sync, so I extended the pulse to span those two scanlines, which worked on the LCD but not on the CRTs. My final compromise has been to assert the vertical sync pin for the duration of a single scanline, which seems to work on all of my monitors.
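
For reference, the nominal VESA timings for this mode break down as follows; note the two-scanline vertical sync pulse that caused the trouble described above.

/* Nominal 640x480@60Hz VGA timing, from a 25.175MHz pixel clock. */
#define PIXEL_CLOCK_HZ 25175000UL

/* Horizontal timing, in pixel clocks. */
#define H_VISIBLE     640
#define H_FRONT_PORCH  16
#define H_SYNC         96
#define H_BACK_PORCH   48
#define H_TOTAL       (H_VISIBLE + H_FRONT_PORCH + H_SYNC + H_BACK_PORCH) /* 800 */

/* Vertical timing, in scanlines. */
#define V_VISIBLE     480
#define V_FRONT_PORCH  10
#define V_SYNC          2 /* the "two scanlines" of vertical sync */
#define V_BACK_PORCH   33
#define V_TOTAL       (V_VISIBLE + V_FRONT_PORCH + V_SYNC + V_BACK_PORCH) /* 525 */

/* One scanline is 800 clocks = 31.78us; 525 lines per frame gives 59.94Hz. */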

dsPIC33 VDC on a breadboard

When connected to a TV, two microcontroller pins are used to drive a single load (the composite input). When connected to a VGA monitor, however, a single microcontroller pin is used to drive three loads (the red, green and blue inputs). I thought it prudent to check the datasheet for the dsPIC before connecting this increased load to the output pin, and was surprised to discover that the maximum source or sink current for each output pin is a measly 4mA — not even enough to drive an LED! Each VGA input is terminated with 75Ω, so the three inputs in parallel present a 25Ω load, and driving that to the 0.7V white level draws 28mA. I have added a buffer to each video output pin to protect the dsPIC — any buffer capable of sourcing up to 30mA or so should be sufficient (I'm using a 74F125, which can be seen in the bottom right of the above photo). I had previously been using the video output pins as inputs from time to time to check whether there was a load on the output (such a load would indicate that a TV or VGA monitor was plugged in), but I can no longer do this with the external buffer IC so have had to revise the circuit somewhat. Updated source code featuring the new VGA output code and an accompanying schematic are available for those who are interested!

Text and filled shapes for the dsPIC33 VDC

Thursday, 22nd July 2010

The dsPIC33 video display controller project I am working on needs to support several common text output and drawing operations offered by existing BBC BASIC implementations. The previous demo included basic point, line and circle outlining functions, but I also need to output text and outline (or fill) rectangles, circles, ellipses and triangles. On top of that, the drawing operations need to support multiple colours and plotting modes. Owing to processing power and memory limitations the output is black and white only, but different "shades" can be implemented with dither patterns. The plotting modes allow you to perform logical operations between what you are drawing and what's currently in the buffer — for example, you could fill a circle that is logically ORed with the existing background, or draw a line that inverts every pixel along its length rather than applying the new colour.
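
To sketch what those plotting modes mean at the pixel level, something along these lines acts on a 1bpp frame buffer; the buffer layout and mode names here are illustrative, not the actual VDC code.

#include <stdint.h>

#define SCREEN_WIDTH  320
#define SCREEN_HEIGHT 240

/* 1bpp frame buffer, eight horizontal pixels per byte. */
static uint8_t frame_buffer[SCREEN_HEIGHT][SCREEN_WIDTH / 8];

typedef enum {
    PLOT_SET, /* overwrite with the new colour */
    PLOT_OR,  /* logically OR with the existing pixel */
    PLOT_AND, /* logically AND with the existing pixel */
    PLOT_XOR  /* invert the existing pixel */
} plot_mode_t;

static void plot_pixel(int x, int y, int colour, plot_mode_t mode) {
    if (x < 0 || x >= SCREEN_WIDTH || y < 0 || y >= SCREEN_HEIGHT) return;
    uint8_t mask = 0x80 >> (x & 7);
    uint8_t *p = &frame_buffer[y][x >> 3];
    switch (mode) {
        case PLOT_SET: if (colour) *p |= mask; else *p &= ~mask; break;
        case PLOT_OR:  if (colour)  *p |= mask;                  break;
        case PLOT_AND: if (!colour) *p &= ~mask;                 break;
        case PLOT_XOR: *p ^= mask;                               break;
    }
}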

dsPIC33 VDC text output demo
Filled rectangles and text output produce the above image.

Finding suitable algorithms for some of these routines has been a little tricky at times. Because filled shapes can be set to invert (rather than overwrite) what's on the background, there has to be zero overdraw: no pixel may be touched twice, or it would be flipped back to its original state. The outline of a filled triangle should also exactly match the outline of a triangle drawn by plotting lines between its three vertices. This makes combining triangles to form more complex shapes possible, as you can guarantee that the shared edge of a pair of adjacent triangles covers the same pixels as a line drawn between those two vertices.

dsPIC33 VDC spinning cube demo
Filled triangles produce a solid cube.

I ended up writing a program in C# that would plot a random triangle using the triangle filler I was attempting to write and then compare its outline to that of a triangle drawn by plotting lines between the three vertices. The final code is chock full of special cases and workarounds but has been tested against hundreds of thousands of random triangles and seems to be working!
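
The test tool itself was C#, but the idea translates directly: fill a random triangle into one buffer, plot its three edges into another, and compare the leftmost and rightmost set pixels on every scanline. A rough C sketch of the same check, with fill_triangle and plot_line standing in for whichever implementations are under test:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define W 320
#define H 240

/* The routines under test, defined elsewhere. */
extern void fill_triangle(uint8_t buf[H][W], int x0, int y0, int x1, int y1, int x2, int y2);
extern void plot_line(uint8_t buf[H][W], int x0, int y0, int x1, int y1);

/* Find the leftmost and rightmost set pixels on a scanline (-1 if empty). */
static void span(const uint8_t row[W], int *lo, int *hi) {
    *lo = *hi = -1;
    for (int x = 0; x < W; x++) {
        if (row[x]) {
            if (*lo < 0) *lo = x;
            *hi = x;
        }
    }
}

int main(void) {
    static uint8_t filled[H][W], outlined[H][W];
    for (int test = 0; test < 100000; test++) {
        int x0 = rand() % W, y0 = rand() % H;
        int x1 = rand() % W, y1 = rand() % H;
        int x2 = rand() % W, y2 = rand() % H;
        memset(filled, 0, sizeof(filled));
        memset(outlined, 0, sizeof(outlined));
        fill_triangle(filled, x0, y0, x1, y1, x2, y2);
        plot_line(outlined, x0, y0, x1, y1);
        plot_line(outlined, x1, y1, x2, y2);
        plot_line(outlined, x2, y2, x0, y0);
        /* On every scanline the filled span must start and end exactly
           where the line-drawn outline does. */
        for (int y = 0; y < H; y++) {
            int flo, fhi, olo, ohi;
            span(filled[y], &flo, &fhi);
            span(outlined[y], &olo, &ohi);
            if (flo != olo || fhi != ohi) {
                printf("Mismatch at y=%d for (%d,%d)-(%d,%d)-(%d,%d)\n",
                       y, x0, y0, x1, y1, x2, y2);
                return 1;
            }
        }
    }
    printf("All triangles matched.\n");
    return 0;
}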

Due to a shortage of memory there is only a single frame buffer, which (naturally) means there is no double-buffering, so smooth animation becomes a little tricky. When connected to a TV one can take advantage of the vertical blanking period to update the buffer (this is the period above and below the active display where you only need to feed sync signals, not image data, to the TV) and still get decent effects as long as you don't try to do too much. The LCD has no such vertical blanking period, and so some of the demos look rather flickery.
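
The pattern is simply to confine buffer updates to that blanking window; a minimal sketch, with the flag and drawing routine names invented for illustration:

/* Hypothetical per-frame drawing routine, defined elsewhere. */
extern void update_frame(void);

/* Set by the video output interrupt at the start of vertical blanking. */
volatile int in_vblank = 0;

static void wait_for_vblank(void) {
    in_vblank = 0;
    while (!in_vblank); /* spin until the output code signals blanking */
}

void demo_loop(void) {
    for (;;) {
        wait_for_vblank();
        update_frame(); /* must finish before active video resumes */
    }
}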

I have captured a video of the output of the circuit when running the demo, which can be seen above. The horizontal grey lines are a limitation of my video capture card; these lines appear correctly as alternating black and white pixels on a real TV set! You can download the code for this demo from my site along with a PDF of the schematic. As this is a work in progress I'm sure there are plenty of bugs left to squash, but I think it's getting there, slowly but surely!

360 degree photos from Lego, a PICAXE, C# and JavaScript

Friday, 9th July 2010

As you may have guessed from the ratio of photos to actual content in my entries I do quite enjoy taking photos of things. One of the reasons I enjoy working with electronics over writing software for computers is that a finished product results in something physical, which I find much more rewarding than a purely virtual hobby.

One type of photograph I particularly enjoy on other websites is the interactive 360° view of a product. The ability to click and drag to rotate an object on the screen makes it seem more real.

What do you need to take this sort of photograph and show it on a web page? There are four components I could think of:

  1. A rotating platform that could be controlled to rotate to a specific angle.
  2. A fixed camera that can be triggered once the platform has advanced to the correct angle.
  3. A way to combine all of the photos taken at different angles into a single file.
  4. A piece of code that would allow the user to rotate the object on-screen and display the correct single view of the object.

My final solution is a bit of a Heath Robinson affair but it seems to work quite well!

The rotating platform

The most obvious way to build such a platform is to use a stepper motor, as that is specifically designed to be positioned to a particular angle. The problem is that I don't have any stepper motors, and even if I did it would be quite tricky to connect one to a platform. A more practical alternative is to use something I do have — Lego Technic.

360° photo hardware built out of Lego Technic pieces

A Lego motor cannot be set to rotate to a particular position, so some additional electronics are required. The motor drives a worm gear which in turn rotates a three-bladed propeller relatively slowly (shown with red pieces attached to it in the photo). This propeller cuts the path of a beam of infra-red light between an LED and an infra-red receiver module. A microcontroller (in this case, a PICAXE-08M) is used to advance the platform in steps by switching the motor on, waiting for the beam to be unblocked, waiting for the beam to be blocked again, then switching the motor off. The gears I am using have twenty-four or eight teeth, so each pair of gears divides the rotational speed by 24/8 = 3. I am using four pairs of gears, which results in a division of 3⁴ = 81. The propeller has three blades, which further divides the rotational speed by three, resulting in the ability to set the platform to 81×3 = 243 distinct angles.

' This code is for a PICAXE-08M
#PICAXE 08M

' This pin is used to generate the 38kHz IR carrier. It should be connected to the IR LED's cathode (-).
Symbol IRPwmPin = 2
' This pin is connected to the IR demodulator's output.
Symbol IRReceiverPin = Pin3

' This pin is connected to the motor enable output.
Symbol MotorPin = 4

Symbol SerialControlIn = 1

' The desired position of the "stepper" motor.
Symbol StepDesired = B8
' The current position of the "stepper" motor.
Symbol StepCurrent = B9

Symbol StepDesiredConfirm = B10
Symbol StepDesiredPotential = B11

' Returned from the CheckBeam routine.
Symbol BeamBlocked = B12

' Rather than spin once at a time (slow) spin up to this many times between exchanging position information with the computer.
Symbol SpinLoopCount = 3

' Stores the spin loop time.
Symbol SpinLoop = B13

' The number of steps in a complete revolution.
Symbol TotalSteps = 243


Main:
    
    ' Reset the current and desired steps.
    StepDesired = 0
    StepCurrent = 0
    
    ' Switch the motor off.
    Low MotorPin
    
    'StepDesiredConfirmCount = 0
    
    Do
        ' Fetch the desired position.        
        SetFreq M8
        SerIn SerialControlIn, N4800_8, (CR, LF), #StepDesiredPotential, #StepDesiredConfirm
        SetFreq M4
        
        ' Check the received data - the second value should be the logical inversion of the first.
        StepDesiredConfirm = Not StepDesiredConfirm
        If StepDesiredPotential = StepDesiredConfirm Then
            StepDesired = StepDesiredPotential
        End If
        
        
        ' Adjust the position if required.
        For SpinLoop = 1 To SpinLoopCount
        
            ' Broadcast the current step position.
            SerTxd(#StepCurrent, ",", #StepDesired, CR, LF)
        
            ' Do we need to run the motor?
            If StepCurrent <> StepDesired Then
                
                ' Switch the motor on.
                High MotorPin
                Pause 20
                
                ' Wait for the beam to be unblocked.
                Do GoSub CheckBeam
                Loop Until BeamBlocked = 0
                
                
                Pause 20
                
                ' Wait for the beam to become blocked again.
                Do GoSub CheckBeam
                Loop Until BeamBlocked = 1
                
                ' Switch the motor off.
                Low MotorPin
                
                ' Increment step current to indicate a change of step.
                Inc StepCurrent
                If StepCurrent = TotalSteps Then
                    StepCurrent = 0
                End If
            End If

        
        Next SpinLoop
    
    Loop
    
' Checks whether the beam is blocked or not.
' Returns BeamBlocked = 0 for an unblocked beam, 1 for a blocked beam.
CheckBeam:
    PwmOut IRPwmPin, 25, 53 ' 38kHz, calculated via PICAXE->Wizards->pwmout
    Pause 1
    BeamBlocked = IRReceiverPin    
    PwmOut IRPwmPin, Off
    Return

The BASIC program on the PICAXE constantly outputs the current position and desired position via the serial programming cable as ASCII in the format <current>,<desired><CR><LF>. It also checks for the desired position every loop via a serial input pin (sadly not the one used for programming the PICAXE, as that is not permitted on the 08M) in the format <CR><LF><desired>,<~desired>. (again in ASCII). The desired position is transmitted twice, once normally and the second time inverted (all zero bits set to one and all one bits set to zero) as a simple form of error detection; for example, a desired position of 120 is sent as <CR><LF>120,135. because 135 is the bitwise complement of 120 as a byte. Should the second value received not be a logical inversion of the first then the value is discarded.

A copy of the schematic can be downloaded by clicking the above thumbnail. It is pretty simple; serial data is input on pin IN1 (move the serial input from the programming cable from SERIAL_IN to IN1), an IR LED is driven from pin PWM2 via a current-limiting resistor, an IR receiver sends its input to pin IN3, a Darlington pair drives the motor via pin OUT4 and information is sent out via the SERIAL_OUT pin (no need to move the programming cable for that one).

Triggering the camera

My camera does not have a standard remote control, but it does have some software that allows you to capture shots when it's connected to your USB port. Unfortunately the Canon PowerShot SDK is rather old and is no longer maintained, which means that any software that uses it is bound to its bugs and limitations. One of its bigger problems is that it doesn't work on Vista; by setting the Remote Capture utility into XP compatibility mode I could set up a shot and see a live viewfinder, but attempting to release the shutter caused the app to hang for about a minute before claiming the camera had been disconnected.

Fortunately VirtualBox emulates USB and serial ports so I set up Windows XP in a virtual machine and installed the Remote Capture utility. It still doesn't work very well (taking about thirty seconds between releasing the shutter and transferring the image) but it's better than nothing.

To control the platform I use the following C# code. It's very poorly written: you need to make sure that you quickly set the Remote Capture application as the foreground window when you start it, for example, and it has a hard-coded ten-second delay after taking each photo to allow it to transfer from the camera to the PC (when my camera's batteries started going flat it started to drop frames).

using System;
using System.Globalization;
using System.IO.Ports;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using System.Windows.Forms;
using System.Diagnostics;
using System.Linq;

class Program {

    const int StepsInRevolution = 243;

    enum ApplicationState {
        AligningStepper,
        WaitingStepperAligned,
        WaitingStartPistol,
        Photographing,
        Exiting,
    }

    static void Main(string[] args) {
        StringBuilder receivedData = new StringBuilder();
        using (var serialPort = new SerialPort("COM1", 4800, Parity.None, 8, StopBits.Two)) {
            serialPort.WriteTimeout = 1;

            serialPort.Open();

            var packetFieldsRegex = new Regex(@"^(\d+),(\d+)$");

            int? currentPosition = null;
            int desiredPosition = 0;
            int? confirmedDesiredPosition = null;
            int startPosition = 0;

            int angleCount = 64;
            int currentAngle = 0;

            serialPort.DataReceived += new SerialDataReceivedEventHandler((sender, e) => {
                if (e.EventType == SerialData.Chars) {
                    receivedData.Append(serialPort.ReadExisting());
                    string receivedDataString;
                    int newLinePosition;
                    while ((newLinePosition = (receivedDataString = receivedData.ToString()).IndexOf("\r\n")) != -1) {
                        var packet = receivedDataString.Substring(0, newLinePosition);
                        receivedData = receivedData.Remove(0, packet.Length + 2);
                        var packetFields = packetFieldsRegex.Matches(packet);
                        if (packetFields.Count == 1) {
                            currentPosition = int.Parse(packetFields[0].Groups[1].Value, CultureInfo.InvariantCulture);
                            confirmedDesiredPosition = int.Parse(packetFields[0].Groups[2].Value, CultureInfo.InvariantCulture);
                        }
                    }
                }
            });

            ApplicationState appState = ApplicationState.AligningStepper;

            // Main loop.
            while (appState != ApplicationState.Exiting) {
                // Update the stepper position.
                try {
                    serialPort.Write(string.Format(CultureInfo.InvariantCulture, "\r\n{0},{1}.", desiredPosition, (byte)~desiredPosition));
                } catch (TimeoutException) {
                    serialPort.DiscardOutBuffer();
                }
                Thread.Sleep(10);
                // What are we doing?
                switch (appState) {
                    case ApplicationState.AligningStepper:
                        if (currentPosition.HasValue) {
                            desiredPosition = (currentPosition.Value + 5) % StepsInRevolution;
                            appState = ApplicationState.WaitingStepperAligned;
                        }
                        break;
                    case ApplicationState.WaitingStepperAligned:
                        if (currentPosition.Value == desiredPosition) {
                            startPosition = desiredPosition;
                            appState = ApplicationState.WaitingStartPistol;
                            //while (Console.KeyAvailable) Console.ReadKey(true);
                            //Console.WriteLine("Press any key to start rotating...");
                        }
                        break;
                    case ApplicationState.WaitingStartPistol:
                        //while (Console.KeyAvailable) {
                        //  Console.ReadKey(true);
                            appState = ApplicationState.Photographing;
                        //}
                        break;
                    case ApplicationState.Photographing:
                        if (currentPosition == desiredPosition) {
                            Console.Write("Taking photo {0} of {1}...", currentAngle + 1, angleCount);
                            SendKeys.SendWait(" ");
                            Thread.Sleep(10000);
                            Console.WriteLine("Done!");
                            // Pre-increment: stop after exactly angleCount photos.
                            if (++currentAngle == angleCount) {
                                appState = ApplicationState.Exiting;
                            } else {
                                desiredPosition = (startPosition + (currentAngle * StepsInRevolution) / angleCount) % StepsInRevolution;
                            }
                        }
                        break;
                }
            }

            Console.WriteLine("Done.");
            Console.ReadKey(true);
        }
    }
}

It was meant to prompt you to press a key before starting, to allow you to re-align the object to the starting position (if required), but this would switch focus away from the Remote Capture utility. I'll probably fix this by switching focus explicitly to the Remote Capture utility before sending the key to trigger a capture, and will also add code that polls the photo destination directory to spot when the file has been downloaded from the camera, instead of the hard-coded ten-second delay. Working in the virtual machine with the buggy Remote Capture utility is a frustrating endeavour, so I have left it as it is for the time being!

Stitching the photos together

Once the photos had been taken they needed to be stitched together into a single file. I decided to use 64 angles for a complete revolution as this seemed a good trade-off between fine control over rotation and a decent file size. It also allowed the images to be arranged into a neat 8×8 grid.

I first used VirtualDub to crop each image. VirtualDub allows you to open an image sequence and export to an image sequence so it seemed ideal for the task. Once I had the object neatly cropped I stitched all of them together into a large single PNG file using the following C# program:

using System;
using System.Drawing;
using System.IO;
using System.Text.RegularExpressions;

class Program {
    static void Main(string[] args) {
        var middleImage = 14; // Index of the "middle" (default angle) image.
        var nameRegex = new Regex(@"Processed(\d{2})");
        var images = new Bitmap[64];
        try {
            foreach (var file in Directory.GetFiles(@"D:\Documents\Pictures\Digital Photos\Projects\Line Blanker\Insides 360\Processed", "*.png")) {
                var matches = nameRegex.Matches(file);
                if (matches.Count == 1) {
                    images[int.Parse(matches[0].Groups[1].Value)] = new Bitmap(file);
                }
            }
            var maxSize = new Size(0, 0);
            for (int i = 0; i < images.Length; i++) {
                if (images[i] == null) {
                    Console.WriteLine("Image {0} missing!", i);
                } else {
                    maxSize = new Size(Math.Max(images[i].Width, maxSize.Width), Math.Max(images[i].Height, maxSize.Height));
                }
            }
            using (var finalImage = new Bitmap(maxSize.Width * 8, maxSize.Height * 8)) {
                using (var g = Graphics.FromImage(finalImage)) {
                    g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;
                    for (int x = 0; x < 8; ++x) {
                        for (int y = 0; y < 8; ++y) {
                            var image = images[(x + y * 8 + middleImage) % images.Length];
                            if (image != null) {
                                g.DrawImage(image, new Point(x * maxSize.Width + (maxSize.Width - image.Width) / 2, y * maxSize.Height + (maxSize.Height - image.Height) / 2));
                            }
                        }
                    }
                }
                finalImage.Save("out.png");
            }
        } finally {
            for (int i = 0; i < images.Length; i++) {
                if (images[i] != null) {
                    images[i].Dispose();
                    images[i] = null;
                }
            }
        }
    }
}

The program requires that the input images are named Processed00.png to Processed63.png, which is easily arranged when exporting an image sequence from VirtualDub. The resulting image can be tidied up in a conventional image editor.

Resulting image grid

Embedding the result on a web page

The final bit of code required is to allow the 360° image to be embedded and manipulated on a web page. I opted to use JavaScript for this task as it seemed the lightest and simplest way to work.

if (typeof(Rotate360) == 'undefined') {
    var Rotate360 = new Class({
        Implements : [Options, Events],
        options : {
            width : 320,
            height : 240,
            container : null,
            element : null
        },
        sign : function(v) {
            return (v > 0) ? +1 : (v < 0 ? -1 : 0);
        },
        initialize : function(source, options) {
            this.setOptions(options);
            this.source = source;
            var rotate360 = this;
            this.element = new Element('div', {
                'class' : 'rotate360',
                styles : {
                    width : this.options.width + 'px',
                    height : this.options.height + 'px',
                    background : 'transparent no-repeat url("' + this.source + '") scroll 0 0'
                },
                events : {
                    mouseenter : function(e) {
                        if (typeof(rotate360.mouseHandlerDiv) != 'undefined') {
                            var myPosition = rotate360.element.getCoordinates();
                            rotate360.mouseHandlerDiv.setStyles({
                                left : myPosition.left + 'px',
                                top : myPosition.top + 'px',
                                width : myPosition.width + 'px',
                                height : myPosition.height + 'px'
                            });
                        }
                    }
                }
            });
            this.mouseHandlerDiv = new Element('div', {
                styles : {
                    position : 'absolute',
                    cursor : 'e-resize'
                },
                events : {
                    mousemove : function(e) {
                        if (typeof(rotate360.mouseHeld) != 'undefined' && rotate360.mouseHeld && typeof(rotate360.previousPageX) != 'undefined' && typeof(rotate360.previousPageY) != 'undefined') {                        
                            var currentBackgroundPosition = rotate360.element.getStyle('background-position').split(' ');
                            currentBackgroundPosition[0] = parseInt(currentBackgroundPosition[0]);
                            currentBackgroundPosition[1] = parseInt(currentBackgroundPosition[1]);
                            if (typeof(rotate360.rotateX) == 'undefined') rotate360.rotateX = 0;
                            rotate360.rotateX += (e.page.x - rotate360.previousPageX) / (360 * (rotate360.options.width / 270) / ((rotate360.image.width * rotate360.image.height) / (rotate360.options.width * rotate360.options.height)));
                            var workingAngle = parseInt(rotate360.rotateX);
                            currentBackgroundPosition[0] = -rotate360.options.width * (workingAngle % (rotate360.image.width / rotate360.options.width));
                            currentBackgroundPosition[1] = -rotate360.options.height * Math.floor(workingAngle / (rotate360.image.height / rotate360.options.height));                            
                            while (currentBackgroundPosition[0] > 0) currentBackgroundPosition[0] -= rotate360.image.width;
                            while (currentBackgroundPosition[0] <= -rotate360.image.width) currentBackgroundPosition[0] += rotate360.image.width;
                            while (currentBackgroundPosition[1] > 0) currentBackgroundPosition[1] -= rotate360.image.height;
                            while (currentBackgroundPosition[1] <= -rotate360.image.height) currentBackgroundPosition[1] += rotate360.image.height;
                            rotate360.element.setStyle('background-position', currentBackgroundPosition[0] + 'px ' + currentBackgroundPosition[1] + 'px');
                            rotate360.previousPageX = e.page.x;
                            rotate360.previousPageY = e.page.y;
                        } else {
                            rotate360.previousPageX = e.page.x;
                            rotate360.previousPageY = e.page.y;
                        }
                    },
                    mousedown : function(e) {
                        e.stop();
                        rotate360.mouseHeld = true;
                        rotate360.mouseHandlerDiv.setStyles({
                            left : 0,
                            width : '100%'
                        });
                    },
                    mouseup : function(e) {
                        e.stop();
                        rotate360.mouseHeld = false;
                        rotate360.element.fireEvent('mouseenter');
                    }
                }
            }).inject(document.body, 'top');
            this.image = new Asset.image(this.source, {
                onload : function() {
                    if (rotate360.options.element) {
                        rotate360.element.replaces(rotate360.options.element);
                    } else if (rotate360.options.container) {
                        rotate360.options.container.adopt(rotate360.element);
                    }
                }
            });
        }
    });
    window.addEvent('domready', function() {
        $$('img.rotate360').each(function(rotate360) {
            var src = rotate360.src.replace(/\.([a-zA-Z]+)$/, '_360.$1');
            var img = new Asset.image(src, {
                onload : function() {
                    new Rotate360(img.src, {
                        width : rotate360.width,
                        height : rotate360.height,
                        element : rotate360
                    });
                }
            });
        });
    });
}

The above code requires MooTools (both "core" and "more" for its Asset classes). It can be invoked manually or (preferably) will replace any image with a class of rotate360 with the 360° version — if the file was example.jpg the 360° version would be example_360.jpg.
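
Embedding is then just a matter of marking up an ordinary image; given example.jpg and its example_360.jpg sheet, something like:

<img class="rotate360" src="example.jpg" width="320" height="240" alt="Example object">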

Examples

I've taken photos of a few of my previous projects using this technique — USB remote control, AVR TV game and VGA line blanker. The process could use some refinement but it certainly seems to work!

dsPIC33 VDC with GLCD or PAL TV output

Sunday, 4th July 2010

I have been using some terminal emulation software on my PC to see the output of the Z80 computer. It seems a little silly to rely on a large multi-gigahertz, multi-megabyte machine just to display the output from a machine at the megahertz and kilobyte end of the scale. I had previously done some work with a dsPIC33 to drive a 320×240 pixel graphical LCD, so I dug out its breadboard and dusted off the code to try to make something of it.

Inspired by John Burton's recent experiments with PAL TV output I decided that the first thing I should do is add support for TV output. The graphical LCD is nice but a little small and responds to pixel changes rather slowly, making animation very blurry.

dsPIC33 VDC demo

I think the results are reasonably good. A lot of the code is shared with the old LCD driving code, which means that the LCD demos work fine with the TV too. Fortunately, retracing the TV is a much less CPU-intensive job than retracing the LCD. The PIC has an SPI peripheral that lets you clock out eight or sixteen bits serially at a selected speed just by writing to a single register, which is great for clocking out the pixel data on each scanline. Even better are the PIC's DMA channels, which allow you to output a selected number of bytes or words to a selected peripheral from a specified location in RAM with no CPU involvement; all I need to do on each line is copy a complete scanline to the DMA memory and initiate a transfer from this memory to the SPI peripheral, and the job is as good as done. Using the DMA hardware as opposed to writing to the SPI registers directly reduced the rendering time of the Mandelbrot fractal part of the demo from 33 seconds to 18 seconds.
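
The per-scanline setup boils down to something like the following sketch. The register values follow the dsPIC33F family reference manual's RAM-to-SPI example rather than being copied from my code, so treat the details as illustrative.

#include <p33Fxxxx.h> /* C30/XC16 device header */

#define SCANLINE_WORDS 20 /* 320 pixels at 1bpp = 20 sixteen-bit words */

/* Scanline data must live in DMA RAM on the dsPIC33F. */
static unsigned int scanline[SCANLINE_WORDS] __attribute__((space(dma)));

/* One-off channel setup for RAM-to-SPI transfers. */
void dma_init(void) {
    DMA0CON = 0x2001;                          /* word transfers, RAM to peripheral, one-shot */
    DMA0REQ = 0x000A;                          /* transfer on the SPI1 transfer-done IRQ */
    DMA0PAD = (volatile unsigned int)&SPI1BUF; /* destination peripheral register */
    DMA0STA = __builtin_dmaoffset(scanline);   /* source buffer offset in DMA RAM */
    DMA0CNT = SCANLINE_WORDS - 1;              /* transfer count is N-1 */
}

/* Called once per scanline, after the pixel data has been copied in. */
void dma_send_scanline(void) {
    DMA0CONbits.CHEN = 1;  /* re-enable the one-shot channel */
    DMA0REQbits.FORCE = 1; /* kick off the first transfer manually */
}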

One problem I haven't been able to resolve is that the PIC inserts a small delay between every DMA/SPI transfer, which results in every sixteenth pixel being a bit wider than the fifteen before it. This is especially noticeable in dithered regions. If I write to the SPI registers directly this delay vanishes, but I'm not sure the picture quality increase is worth the loss of performance, so I'd rather find a proper fix for this! For the time being, here's a video of the demo as it currently runs:

The TV contains a 75Ω resistor to ground on its composite video input. Two resistors are used on two PIC pins to form a voltage divider to produce the required output voltages (0V for sync, 0.3V for black and 1V for white). When the TV is disconnected the output of the circuit is 3.3V (the supply voltage, equivalent to a logic "high") as there's no load resistance to pull it to the correct 0.3V (a logic "low"). This can be used to periodically check whether a TV is connected and to switch between the LCD and TV output modes.
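
For the curious, the two resistor values drop out of a little algebra; a rough worked example, assuming ideal 0V/3.3V pin swings:

/*
 * Composite output levels from two 3.3V pins into the TV's 75R termination:
 *
 *   sync low,  video low  -> 0.0V (sync pulse)
 *   sync high, video low  -> 0.3V (black)
 *   sync high, video high -> 1.0V (white)
 *
 * With the sync pin driving R1 and the video pin driving R2, and writing
 * conductances G = 1/R, the black and white levels require:
 *
 *   3.3 * G1 / (G1 + G2 + 1/75)        = 0.3
 *   3.3 * (G1 + G2) / (G1 + G2 + 1/75) = 1.0
 *
 * Solving gives R1 = 575R and R2 = 246R; the nearest standard values,
 * 560R and 240R, land within a few percent of the target levels.
 */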

The above is rather vague, and I would recommend Rickard Gunée's article entitled How to generate video signals in software using PIC for more detailed information! The code for the demo can be downloaded from my website for those who are interested.

Update: I've updated my code to use the SPI peripheral in slave mode and use a timer and output compare unit to generate the clock signal. This regular clock signal produces pixels of identical sizes — the new code can be downloaded here.
