Building a VGA line blanker and 3D glasses driver

Monday, 15th February 2010

Assembling a circuit on breadboard is a good way to experiment with electronics, but the result is not something you could really use – it's bulky, fragile and awkward to set up. It's far nicer to solder the components of the circuit together to form a more permanent device and put it in an enclosure to make it robust. This is not something I'm especially good at, but something I thought I'd try with the VGA line blanker and LCD shutter glasses controller I've been experimenting with recently.

VGA line blanker and LCD shutter glasses controller

In the past I've struggled along with a hand drill and the nail file on a Swiss Army knife, but have more recently acquired a high-speed rotary tool and an assortment of attachments which make things much easier. I took some photos when building this project, which I've documented below; I'm not sure my techniques are very efficient, but I do get there in the end. I'd be very glad to hear any advice anyone has!

A plain project box Back of the project box marked for cutting

I started with a plain project box. Having planned roughly where I was going to put the VGA ports and DC power socket, I covered one side of the box in masking tape and drew on where I was going to put the holes.

VGA socket holes drilled out VGA socket holes roughly cut

To cut straight-edged holes, such as those required for a D-subminiature connector, I drill a hole in each corner and use a small cylindrical burr to cut between the holes. This leaves a very rough edge, but is a good start.

Upper VGA socket hole widened sufficiently to accommodate a VGA connector

I then widen the hole using a large cylindrical burr and a needle file until the part I'm attempting to mount fits snugly.

Holes for the jack post marked Both VGA socket holes widened, with holes for the jack posts drilled

When I had both VGA connectors in place, I marked and drilled the holes for the jack posts that the VGA leads will screw into. Neither hole is especially neatly cut, but the D-subminiature connector overlaps the hole sufficiently to hide any shoddy workmanship.

Hole for the power socket drilled All of the sockets installed in the back of the project box

The last part of the back is the DC power socket. As I don't have a drill bit large enough to cut the hole on its own, I drill it as large as I can then widen it using the cylindrical burrs mentioned before. With all of the holes cut, I inserted the components to see how they look and identified one problem – I'd underestimated how fat the connectors on the end of VGA leads are. Fortunately, I have a slim VGA cable that fits, but a regular sized one does not – in future I'll need to remember to put the VGA connectors further apart!

Hole for the LCD glasses socket cut 3.5mm stereo jack socket for the LCD glasses installed

With that mistake fresh in my mind, I thought I'd move onto something a bit more difficult to get wrong – the 3.5mm stereo jack on the front of the box to plug the glasses into. This is just another round hole, cut in the same way as the DC power socket.

Holes for the control switches marked Holes for the control switches cut

The two control switches on the top of the box require much larger holes. These were cut in the same way as before – a small hole is gradually widened by using a cylindrical burr. This is a very tedious job, not helped by having to keep stopping to clean the melted plastic that adheres to the burr.

Control switches installed

Finally, the switches were installed. I was originally going to use latching push buttons, but had previously used those nice round rocker switches as the power switch on the AVR TV Game project, so opted to use them here instead.

Stripboard cut to fit Stripboard installed in enclosure

The final bit of physical work was to cut some stripboard down to size to fit inside the enclosure. These were cut by first scoring along the tracks where the cut was to be made, then snapping the board over the edge of a table. This results in a clean break, but to ensure a snug fit the boards were tidied up with a sanding drum. The lid (or, in my case, base) of the enclosure has a raised edge that fits inside the box, so the sanding drum was also used to remove two of the corners of the stripboard pieces to allow the base to fit.

Preliminary tracks cut in stripboard

The next stage was to move onto the electronics, and I started with the circuit board that was to host the video amplifier IC, voltage regulator and Schmitt trigger on vsync/hsync. The video amplifier is attached to a TSSOP14 adaptor that has a D-shaped pin configuration, with two rows of four pins and two rows of three pins. Having cut through the tracks in the stripboard to mount the amplifier, I needed to find some suitable pin sockets.

8-way pin socket cut in two 8-way pin socket cut in two and neatened up

As I don't have any pin sockets with just three pins in them (only two, four and eight) I cut two eight-way pin sockets in two with a pair of wire cutters then tidied up the ragged edges with a sanding drum and needle file.

Video amplifier socket soldered in place

With the pin sockets soldered in place you can see the D shape I mentioned above. I don't generally plan stripboard circuits very thoroughly, preferring to start by placing large components in approximately the right location with respect to where the external connectors are and how they need to relate to other components. Once those are in place I add smaller components (such as discrete resistors or capacitors) before finishing by adding the wire links to connect all of the parts together. This does lead to situations where I wish that I'd placed a component one hole along to give myself more space or to avoid having to insert so many wire links, but it generally works.

Stripboard with cuts between holes Video amplifier resistors in place

With the video amplifier in position, I added the resistors that are required on its inputs and outputs. To keep the circuit reasonably compact I cut through stripboard tracks between the holes using a conical HSS burr with a small tip – this is an especially useful tool when you need to deal with double-row pin sockets.

Power supply support components Socket for Schmitt trigger IC and pin headers for vsync/hsync jumpers

I then added the support circuitry for the voltage regulator (smoothing capacitors and a rectifier diode to protect the circuit if the polarity of the power supply is incorrect) and a socket for the Schmitt trigger IC. I find the easiest way to keep components in place on any sort of through-hole board is to tape them down firmly with masking tape before soldering – bending the legs out makes the parts much harder to remove if you make a mistake. Blu-Tack is easier to use but has a habit of melting when soldering and leaving an unpleasant blue residue on your circuit, so I'd advise against it! To make this part of the circuit slightly more future-proof a pair of jumpers are used to connect the sync lines (vsync and hsync) from the VGA input and VGA output together. These could be removed if I decided to change the logic board to override these signals – for example, as part of a sync-doubler, which injects a vsync pulse half way down the screen.

Connector between the video amplifier circuit board and the rest of the system Populated video amplifier circuit board

I finally added the bulkiest components: the 5V regulator and the pin header to connect the upper and lower boards together. Soldering pin headers to the underside of a board is a fiddly job, but is required in this instance to connect the bottom of the upper board to the top of the lower board.

Top view of the populated video amplifier circuit board Video amplifier circuit board installed in the enclosure

With the upper board completed it was time to put it into the enclosure and solder the VGA connectors and DC power socket to it. This is the part I least enjoy.

VGA connectors with stranded wires attached

I started by soldering some stranded wire to the VGA connectors. Most of the wires are the same length, as they are required to carry signals to and from the circuit, but some wires are shorter and only connected to one of the VGA connectors. These are the white, yellow, orange and brown wires in the above photo, and these are attached to pins used to exchange information between the PC and the monitor (e.g. supported resolutions and refresh rates). As we're not interested in these, they're connected straight through from one connector to the other.

Pins used for monitor identification passed through hole in enclosure Both VGA connectors installed

I inserted the VGA connector with these identification pins into the top hole, passed the shorter identification wires through the other and soldered them to the second VGA connector. This leaves the red, green, blue, vsync, hsync and ground pins loose inside, ready to be connected to the upper circuit board.

DC power socket

The DC power socket also needs to be connected to the circuit board, but at only two wires that's a much simpler job.

External connectors soldered to the video amplifier circuit board Video amplifier circuit board hooked up and installed in the case

All of the loose leads are soldered onto the circuit board and the stripboard is slotted into place inside the enclosure. The wires could be shorter, but that would have made soldering them a bit harder.

Cut tracks for the logic circuit board

The lower circuit board will host the main logic for the project – it receives the vsync and hsync signals, and uses these to control whether the video signal should be blanked or not, and which shutter on the glasses should be closed and which should be open. It also contains the oscillator that generates the AC voltage that drives the glasses. I arranged the three logic ICs roughly next to each other according to their layout on the breadboard version of the circuit and cut the stripboard tracks as appropriate.

IC sockets soldered to the logic circuit board Discrete components added to the logic circuit board

I started by adding the sockets for the ICs and pin header to connect this circuit board to the video amplifier one, then added the discrete components. As before, I taped the components down before soldering them in place to make the task easier. Being able to copy the circuit directly from the breadboard version also made the task much easier.

Top view of the wire links on the logic circuit board Bottom view of the wire links on the logic circuit board

The last step for this part of the project was, as before, adding the wire links. Rather than run long wires around ICs I found it more practical to solder a few wires onto the underside of the stripboard.

Pin sockets and wires for the connector cable One end of the connector cable soldered

The two circuit boards needed to be connected together somehow. Without the facilities to make a proper ribbon cable, I just soldered some lengths of stranded wire (rather messily) between two pin sockets. As I'm not outputting anything to vsync or hsync (I'm feeding the input sync signals straight back to the output via the jumpers previously discussed), I didn't need to connect anything to these pins – hence the apparently missing wires in the photos.

Connector cable bent to fit Both circuit boards installed in the enclosure

The cable to connect the two boards together needed to be bent to fit – it's getting snug, but everything's in there without having to be forced, which is a good sign.

3.5mm stereo jack socket for LCD glasses connector Stereo jack socket soldered to the logic circuit board

The next job was to attach the 3.5mm stereo jack that the LCD shutter glasses are plugged into. This is pushed through the hole in the enclosure from the inside and screwed on from the outside, so it can be soldered directly to the circuit board without having to thread it through the hole first. The small red "washer" is a length of enamelled wire that has been bent around the thread of the jack socket and is used as a spacer – without it, quite a lot of the thread protrudes from the front of the box, looking rather untidy.

Control switches with connecting wires All parts installed in the enclosure

Last of all are the two control switches. These are soldered to the track side of the stripboard like the stereo jack, but must be snapped through their holes in the enclosure first, which is why they were left until last. Everything is slotted into place, the base of the enclosure is screwed on, and the project is pretty much complete.

Tightly packed VGA cables

The VGA cables don't fit especially well – the D-subminiature sockets are a bit too close to each other. If I use a thin VGA extension cable and wiggle the leads I can just about get both to screw in.

LCD shutter glasses showing the left eye view of a row-interleaved image

The demonstration pattern from some previous ramblings of mine is quite useful for testing 3D glasses, and by holding the left eye of the shutter glasses to the screen you can see that only the "L" part of the image is let through.

Adding a stereoscopic renderer to Quake II

Sunday, 7th February 2010

Having tweaked the stereoscopic rendering code in Quake, I decided to have a go at Quake II. This doesn't natively support row-interleaved stereoscopic rendering, but I thought that the shared code base of Quake and Quake II should make extending Quake II relatively simple.

Quake II does have two console variables dedicated to stereoscopic rendering already, cl_stereo (enable/disable stereoscopic rendering) and cl_stereo_separation (controls the displacement of the camera between eyes; the same as LCD_X in Quake). These variables only seem to be used in the OpenGL renderer, though I haven't been able to get them to do anything meaningful – I have a hunch that you need a video card that supports stereoscopic rendering; these do exist, and have a socket on them for 3D glasses, but I'm having to make do with my DIY hardware. Furthermore, I've always found the OpenGL rendering in Quake and Quake II incredibly ugly, with blurry low-resolution textures (this is the reason I opted to emulate the software renderer when writing my own implementation of the Quake engine).

Quake II's tweaked software renderer that now supports row-interleaved stereoscopic 3D.

It turns out that Quake II does indeed render each frame twice with the camera offset when cl_stereo is switched on, but the software renderer doesn't do anything to blend the two views together. Using the same tricks as Quake – halving the height of the viewport, doubling the apparent stride of the render surface, shunting the address of the buffer down one scanline for one eye – seems to have done the trick, though working out exactly when to carry out these steps hasn't been all that smooth. The particle rendering code still crashes with an access violation if called twice during a frame, but only in release mode. Fortunately, the software renderer is implemented in both C and assembly, so I've reverted to the C-based particle renderer instead of the assembly one for the time being, as it doesn't appear to be affected by the same bug.

A slightly more bothersome problem is the use of 8-bit DirectDraw modes for full-screen rendering. Unfortunately, Windows seems to like interfering with the palette, resulting in rather hideous colours. Typing vid_restart a few times into the console may eventually fix the issue, but it's far from an ideal solution. An alternative may be to rewrite the code to output 32-bit colour; this would also allow for coloured lighting. However, I don't think I'd be especially good at rewriting the reams of x86 assembly required to implement such a fix, and the C software renderer I previously mentioned results in a slightly choppy framerate at high resolutions.

An alternative would be to learn how to use Direct3D from C and rewrite the renderer entirely, taking advantage of hardware acceleration, but that seems an equally daunting task. If anyone has any suggestions or recommendations, I'd be interested to hear them!

Stereo Quake

Replacement binaries for Quake and Quake II can be downloaded from the project page; source code is available on Google Code.

3D glasses, a VGA line-blanker and fixing Quake

Wednesday, 3rd February 2010

Some time ago, I posted about using interlaced video to display 3D images. Whilst the idea works very nicely in theory, it's quite tricky to get modern video cards to generate interlaced video at a variety of resolutions and refresh rates. My card limits me to 1920×1080 at i30 or 1920×1080 at i25, and only lets me use this mode on my LCD when I really need it on a CRT. Even if you can coax the video card to switch to a particular mode, this is quite a fragile state of affairs as full-screen games will switch to a different (and likely progressively scanned) mode.

3D glasses adaptor with line blanker prototype

An alternative is to build an external bit of hardware that simulates an interlaced video mode from a progressive one. The easiest way of doing that is to switch off the RGB signals on alternate scanlines, blanking odd scanlines in one frame and even scanlines in the next. This type of circuit is appropriately named a line blanker, and my current implementation is shown above. It sits between the PC and the monitor, and uses a pair of flip-flops which toggle state on vsync or hsync signals from the PC. The output from the vsync flip-flop is used to control which eye is open and which is shut on the LCD glasses, and is also combined with the hsync flip-flop to switch the RGB signal lines on or off on alternate lines using a THS7375 video amplifier. Unfortunately, this amplifier is only available as TSSOP, which isn't much fun to solder if you don't have the proper equipment; I made a stab at it with a regular iron, the smallest tip I could find, lots of no-clean flux and some solder braid. I have been informed that solder paste makes things considerably easier, so will have to try that next time.
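
If it helps to picture the behaviour, the following little C program models what the two flip-flops are doing. It's a behavioural model only – the real adaptor is discrete logic, of course, and I've assumed an exclusive-OR for the gate that combines the two outputs; the schematic linked below shows the actual wiring.

// Behavioural model only: one flip-flop toggles on vsync (frame parity, which
// also selects the open shutter) and one on hsync (line parity). Combining the
// two (assumed here to be an XOR) blanks odd lines in one frame and even lines
// in the next, simulating an interlaced display.
#include <stdio.h>

int main(void) {
    int frame_ff = 0;                        // Toggled by each vsync pulse.
    for (int frame = 0; frame < 4; ++frame) {
        int line_ff = 0;                     // Toggled by each hsync pulse.
        printf("Frame %d (shutter %d open): ", frame, frame_ff);
        for (int line = 0; line < 10; ++line) {
            int blank = frame_ff ^ line_ff;  // RGB switched off when set.
            putchar(blank ? '.' : '#');
            line_ff ^= 1;
        }
        putchar('\n');
        frame_ff ^= 1;
    }
    return 0;
}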

My cheap LCD glasses lack any form of internal circuitry, merely offering two LCD panels wired directly to a 3.5mm stereo jack, and so I'm using the 4030 exclusive-OR gate oscillator circuit to drive them.

The adaptor provides one switch to swap the left and right eyes in case they are reversed, and another to disable the line blanking circuit (useful for genuine interlaced video modes or alternate-frame 3D). You can download a schematic of the circuit here as a PDF.

I've been using these glasses to play Quake in 3D, which is good fun but an experience that was sadly marred by a number of bugs and quirks in Quake's 3D mode.

WinQuake, demonstrating the crosshair bug and excessive stereo separation of the player's weapon

The most obvious problems in the above screenshot are the migratory crosshair (appearing 25% of the way down the screen instead of vertically centred) and the excessive stereo separation of the player's weapon.

If the console variable LCD_X is non-zero, Quake halves the viewport height then doubles what it thinks is the stride of the graphics buffer. This causes it to skip every other scanline when rendering. Instead of rendering once, as normal, it translates the camera in one direction, renders, then offsets the start of the graphics buffer by one scanline, translates the camera in the other direction then renders again. This results in the two views (one for each eye) being interleaved into a single image.
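
The following stand-alone C program isn't Quake's code, but it demonstrates the same trick on a tiny dummy buffer: with half the rows, double the stride and a one-scanline offset for the second pass, the two "renders" end up interleaved.

// Self-contained demonstration of the interleaving trick (not Quake source).
// "Rendering" here just fills rows with 'L' or 'R' so the result is visible.
#include <stdio.h>

#define WIDTH  8
#define HEIGHT 8

// Stand-in for the renderer: fills 'rows' scanlines of 'width' pixels,
// stepping 'stride' bytes between scanlines.
static void render_view(unsigned char *dest, int width, int rows, int stride, unsigned char eye) {
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < width; ++x)
            dest[y * stride + x] = eye;
}

int main(void) {
    unsigned char frame[HEIGHT][WIDTH];
    int rowbytes = WIDTH;

    // Left eye: half the rows, double the stride, starting at scanline 0.
    render_view(&frame[0][0], WIDTH, HEIGHT / 2, rowbytes * 2, 'L');
    // Right eye: the same, but shunted down by one scanline.
    render_view(&frame[1][0], WIDTH, HEIGHT / 2, rowbytes * 2, 'R');

    // Print the interleaved result: alternating rows of 'L' and 'R'.
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x)
            putchar(frame[y][x]);
        putchar('\n');
    }
    return 0;
}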

The crosshair is added after the 3D view is rendered (in fact, Quake just prints a '+' sign in the middle of the screen using its text routines), which explains its incorrect position – Quake doesn't take the previously halved height of the display into consideration, causing the crosshair to be drawn with a vertical position of half of half the height of the screen. That's pretty easy to fix – if LCD_X is non-zero, multiply all previously halved heights and Y offsets by two before rendering the crosshair to compensate.
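
In code, the fix amounts to something like this (illustrative names only, not the exact patch):

// Sketch of the crosshair fix: the 3D view height was halved when LCD_X is
// active, so the centre has to be scaled back up before printing the '+'.
int crosshair_centre_y(int view_height, int lcd_x_active) {
    int y = view_height / 2;   // Only 25% of the real screen if the view was halved.
    if (lcd_x_active)
        y *= 2;                // Compensate, restoring the true vertical centre.
    return y;
}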

WinQuake, demonstrating the DirectDraw corruption bug

A slightly more serious bug is illustrated above. When using the DirectDraw renderer (the default in full-screen mode), the display is corrupted. This can be fixed by passing -dibonly to the engine, but it would be nice to fix it.

After a bit of digging, it appeared that the vid structure, which stores fields such as the address of the graphics buffer and its stride, was being modified between calls to the renderer. It seemed to be reverting to the actual properties of the graphics buffer (i.e. it pointed to the top of the buffer and stored the correct stride of the image, not the doubled one). Further digging identified VID_LockBuffer() as the culprit; this does nothing if you're using the dib rendering mode, but locks the buffer and updates the vid structure in other access modes. Fortunately, you can call this function as many times as you like (as long as you call VID_UnlockBuffer() a corresponding number of times) – it only locks the surface and updates vid the first time you call it. By surrounding the entire 3D rendering routine in a VID_LockBuffer()/VID_UnlockBuffer() pair, vid is left well alone, and Quake renders correctly in full-screen once again.
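
The change itself is small – a simplified sketch, with R_RenderStereoViews standing in for the interleaved two-pass renderer:

// Simplified sketch of the fix (not the exact patch): wrap the whole
// stereoscopic render in a single VID_LockBuffer()/VID_UnlockBuffer() pair.
// Only the outermost lock touches the vid structure, so the nested calls made
// during rendering can no longer revert the doubled stride and offset pointer.
void VID_LockBuffer(void);
void VID_UnlockBuffer(void);
void R_RenderStereoViews(void);   // Hypothetical stand-in for the two-pass renderer.

void R_RenderFrame_Stereo(void) {
    VID_LockBuffer();       // First call locks the surface and fills in vid.
    R_RenderStereoViews();  // Inner Lock/Unlock pairs are now effectively no-ops.
    VID_UnlockBuffer();     // Matching unlock releases the surface.
}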

The final issue was the extreme stereo separation of the player weapon, caused by its proximity to the camera – it does make the game quite uncomfortable to play. The game moves the camera and weapon to the player's position, then applies some simple transformations to implement view/weapon bobbing, before rendering anything. Applying the same camera offset and rotation to the player weapon as to the camera when generating the two 3D views put the weapon slap bang in the middle of the screen, as it would appear in regular "2D" Quake. This gives it the impression of a cardboard cutout, and can put it behind/"inside" walls and floors when you walk up to them; I've added a console variable, LCD_VIEWMODEL_SCALE, that can be used to interpolate between the default 3D WinQuake view (value: 1) and the cardboard cutout view (value: 0).
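
The interpolation itself is nothing more complicated than scaling how much of the per-eye camera offset is applied to the weapon – roughly this, though the names are illustrative rather than taken from the patch:

// Sketch of the LCD_VIEWMODEL_SCALE idea: at 0 the weapon receives the full
// camera offset and sits centred like a cardboard cutout; at 1 it receives
// none of it, giving WinQuake's default (and uncomfortable) separation.
float weapon_eye_offset(float camera_eye_offset, float lcd_viewmodel_scale) {
    return camera_eye_offset * (1.0f - lcd_viewmodel_scale);
}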

WinQuake with the 3D fixes applied

You can download the replacement WinQuake from here – you can just overwrite any existing executable. (You will also need the VC++ 2008 SP1 runtimes, if you do not already have them.) Source code is included, and should build in VC++ 2008 SP1 (MASM, which is required to compile Quake's extensive collection of assembly source files, only appears to be included in SP1).

If you don't have a copy of Quake, I recorded its looping demos in 3D and uploaded them to YouTube. This was before I made the above fixes, so there's no crosshair or player weapon model in the videos – if you have access to YouTube-compatible 3D glasses or crossable eyes, click here.

IM-me wireless terminal

Thursday, 14th January 2010

A recent post on Hack a Day alerted me to the IM-me, a device designed to be used with a web-based IM service that communicates with the PC via a USB wireless adaptor.

Pink!

According to Hunter Davis, the bodies of the messages sent between the PC and the IM-me are in plain text. This sounded like a good start to me, so I picked one up from Amazon UK for £7.49 (they're now available for even less than that). You get a lot of electronics for that price; there's a CC1110F32 microcontroller inside (the chips inside the device and its wireless adaptor are clearly marked – no nameless blob of epoxy that you might have expected from the price) and Dave has poked around the insides of his and has mapped the contact pads exposed via the battery compartment to the debug port on the microcontroller. You could use this debug port to overwrite the stock firmware with your own if the fancy took you. However, I'm more interested in seeing what I can do with the device without writing my own firmware for it.

The wireless adaptor shows up in Windows as a simple USB HID, so I installed SnoopyPro and logged a chat session with myself. Fortunately, there is indeed no obfuscation or encryption of the messages. I have worked on a C# library that handles most of the different message types (no group chat yet, only direct contact-to-contact) and written up what I've found here. The C# code can be found here, though it is not especially robust yet.

IM-me chat log

I think that my main problem is a poor grasp of asynchronous I/O. I read data asynchronously but write synchronously, and don't currently do anything to protect against my code "speaking over" the incoming data. If you output data when the device is halfway through sending a packet, it seems to ignore the data you're sending it, so long messages – which are made up of multiple packets sent in rapid succession – never appear to reach the device at all. The USB device responds with a single 0 byte after a packet is written to it, which I don't currently wait for; I'm not sure how to do so when mixing asynchronous reads with synchronous writes, so if anyone has any suggestions or links to reading material I'd greatly appreciate it!

I have no intention of going near the existing IM-me web service – being able to use the IM-me as a general-purpose wireless terminal to talk to your own software opens up a wealth of possibilities. You could set it up to notify you of new emails, read RSS feeds, post updates to social networking sites, use it as a home automation console, remote control a media PC... You may wish to paint it black first, though!

Addendum: Whoops, after refactoring some code I broke the checksum generation. It appears that the IM-me ignores the checksum when receiving messages. I have stuck a brief pause between each byte written to the device and a slightly longer one between each packet sent to the device, and I can now send long messages to it.
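
For what it's worth, the pacing works out as something like the following – sketched in C rather than the library's C#, with guessed delay lengths and hypothetical hid_write_byte()/sleep_ms() routines standing in for whatever the HID layer and platform actually provide:

// Sketch only: the real library is C#. The delay lengths are guesses and the
// two helper routines are hypothetical stand-ins.
void hid_write_byte(unsigned char b);   // Writes one byte to the USB HID device.
void sleep_ms(unsigned int ms);         // Delays for the given number of milliseconds.

void send_packet_paced(const unsigned char *packet, unsigned int length) {
    for (unsigned int i = 0; i < length; ++i) {
        hid_write_byte(packet[i]);
        sleep_ms(2);                    // Brief pause between bytes.
    }
    sleep_ms(20);                       // Longer pause before the next packet.
}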

Ejecting discs from a damaged camcorder with a remote control

Tuesday, 29th December 2009

I hope that those of you who celebrate it had a good Christmas break and will have an excellent new year!

I recently attempted to repair a DVD camcorder that had been dropped; the eject button no longer worked, though the disc could be ejected by connecting the camera to a PC, right-clicking the DVD drive that subsequently appears in Explorer, then selecting Eject.

I started by removing all of the screws around the affected area, but the plastic casing remained strongly held together by some mysterious internal force. I removed more and more screws, but it soon became apparent that the only way to get into the camera would be to force it open – not being my camera, I didn't feel comfortable doing so, as the rest of the camera worked well and I didn't want to damage any fragile internal mechanisms. I couldn't find any dismantling guides online, so gave up on the idea of fixing the button.

Fortunately, I own the same model of camcorder – a Panasonic VDR-D250 – myself. With my interest in infrared remote controls I had previously found information about the Panasonic protocol it uses. The supplied remote control only has a few simple buttons on it (no eject button, sadly), but I reckoned that the camcorder may accept a number of other commands that the stock remote didn't include.

Remote control to eject discs from a Panasonic camcorder

I started by modifying a universal remote control program for the TI-83+ that I had previously written to allow me to send specific commands to the camcorder, then ran through all of the possible command IDs, noting down those that appeared to have some effect. Eventually I had a pretty decent list, albeit one with quite a few gaps in it. Fortunately, I had found the Eject button code, along with codes to switch mode (which is done on the camera by rotating a mode dial), one that powers the camcorder off, another that appears to restart the camera and another one that resets all settings (not so useful, that one).

Having found the eject code, I set about building a dedicated remote control. I picked the ATtiny13 as a base, as it's more than capable for the job with its 9.6MHz internal clock, 1KB of program memory, 64 bytes of SRAM and 3V operation.

Panasonic Eject remote control circuit diagram

I was a bit surprised to see that AVR-GCC supports the ATtiny13, and whilst C may seem overkill for such a project, I'll gladly take advantage of anything that makes my life easier.

// Requisite header files.
#include <avr/io.h>
#include <util/delay.h>

// Frequency of the IR carrier signal (Hertz).
#define F_IR_CARRIER (37000)

// Timing of the data bits (microseconds).
#define T_DX_MARK   (440)
#define T_D0_SPACE  (440)
#define T_D1_SPACE (1310)

// Timing of the lead-in and lead-out bits (microseconds).
#define T_LEAD_IN_MARK    (3500)
#define T_LEAD_IN_SPACE   (1750)
#define T_LEAD_OUT_MARK    (440)
#define T_LEAD_OUT_SPACE (74000)

// Commands definitions.
#define OEM_DEVICE_1_CODE         (2)
#define OEM_DEVICE_2_CODE        (32)
#define CAMCORDER_DEVICE_ID     (112)
#define CAMCORDER_SUB_DEVICE_ID  (40)
#define CAMCORDER_COMMAND_EJECT   (1)

// Transmits a single unformatted byte.
void panasonic_send_byte(uint8_t value) {
    // Send eight data bits.
    for (uint8_t bit = 0; bit < 8; ++bit, value >>= 1) {
        // Send the mark/burst.
        DDRB |= _BV(1);
        _delay_us(T_DX_MARK);
        // Send the space.
        DDRB &= (uint8_t)~_BV(1);
        _delay_us(T_D0_SPACE);
        // Extend the space if it's a "1" data bit.
        if (value & (uint8_t)1) {
            _delay_us(T_D1_SPACE - T_D0_SPACE);
        }
    }
}

// Transmits a formatted command packet to the IR device.
void panasonic_send_command(uint8_t oem_device_code_1, uint8_t oem_device_code_2, uint8_t device_code, uint8_t sub_device_code, uint8_t command) {
    // Send the lead in.
    DDRB |= _BV(1);
    _delay_us(T_LEAD_IN_MARK);
    DDRB &= (uint8_t)~_BV(1);
    _delay_us(T_LEAD_IN_SPACE);

    // Send the five command bytes.
    panasonic_send_byte(oem_device_code_1);
    panasonic_send_byte(oem_device_code_2);
    panasonic_send_byte(device_code);
    panasonic_send_byte(sub_device_code);
    panasonic_send_byte(command);

    // Send the checksum.
    panasonic_send_byte(device_code ^ sub_device_code ^ command);

    // Send the lead out.
    DDRB |= _BV(1);
    _delay_us(T_LEAD_OUT_MARK);
    DDRB &= (uint8_t)~_BV(1);
    _delay_us(T_LEAD_OUT_SPACE);
}

// Main program entry point.
int main(void) {

    TCCR0A |= _BV(COM0B0) | _BV(WGM01);     // Toggle OC0B on compare match; use CTC mode with OCR0A as TOP.
    TCCR0B |= _BV(CS00);                    // Set clock source to CPU clock/1.
    OCR0A = (F_CPU / F_IR_CARRIER / 2) - 1; // Set the CTC reload value to generate an IR signal at the correct carrier frequency.

    // Send the "eject" command ad infinitum.
    for(;;) {
        panasonic_send_command(OEM_DEVICE_1_CODE, OEM_DEVICE_2_CODE, CAMCORDER_DEVICE_ID, CAMCORDER_SUB_DEVICE_ID, CAMCORDER_COMMAND_EJECT);
    }
}

The code is about as simple as the circuit. IR signals are transmitted as carefully timed bursts of a particular carrier frequency (37kHz in this case). For example, to send a "0" bit, 440μs of this 37kHz signal is sent, followed by 440μs of silence. To send a "1" bit, 440μs of carrier signal is sent as before, but a 1310μs period of silence follows it.

The AVR's timer is used to generate a ~37kHz carrier signal. The timer is an eight-bit counter that counts up at a user-defined rate (in my case I've chosen to increment the counter by one every CPU clock cycle). I've configured it to toggle the output level of pin OC0B and reset every time it hits a particular value; with the 9.6MHz internal clock the reload value works out as 128, toggling the pin every 129 cycles for a carrier of roughly 37.2kHz. Switching this pin between being an output and an input then selects between a burst of 37kHz IR signal and silence. Simple delay loops, generated with the helper function _delay_us, are used to time the transmission of data bits.

Insides of the Panasonic ejecting remote control.

The final step was to assemble the circuit on stripboard and install it in a smallish project box. I've put the switch adjacent to the LED for two reasons: to conserve space, and so that the protruding LED bezel protects the switch a little from being pressed accidentally.

Building a single-button remote control is a relatively straightforward affair, so whilst the above code has a very specific purpose it should be easy enough to modify it to control other devices.

