Question about light guns

Hello,

I saw that a question concerning light guns had been posted on the SDL mailing list. I’m currently considering developing a device based on the light pen principle, but apart from the basic information I could gather about how a light gun works, I haven’t really been able to understand how it is synchronized with the CRT monitor to determine the position being aimed at.

I have never worked on this kind of problem before, but basically I’d like to regularly blank the screen (starting the blank at a given time) and detect when the photodiode responds to light. Knowing the frame rate, I could then work out what percentage of the screen had been drawn. But once this is done, what is the method to get the actual (x, y) position on the screen from this information?

Could the SDL library give me access to information that could help me solve this problem?

Thanks for your help

On Thursday 20 November 2003 15.26, Sandrine Voros wrote:

> Hello,
>
> I saw that a question concerning light guns had been posted on the
> SDL mailing list. I’m currently considering developing a device
> based on the light pen principle, but apart from the basic
> information I could gather about how a light gun works, I haven’t
> really been able to understand how it is synchronized with the CRT
> monitor to determine the position being aimed at.

Traditionally, this was handled by a counter that was latched into a
register when a pulse was received on a digital input connected to
the light pen. You’ll find this feature in most video chipsets from
the 8- and 16-bit eras (including consoles, for light guns, which use
the same principle), but I don’t think you’ll find it on your average
PCI or AGP video card.
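
Just to illustrate the idea (this is not any real chipset; the
register addresses, widths and status bit are made up), reading such
a latch from software might look something like this:

#include <stdint.h>

/* Hypothetical memory-mapped light pen latch; the addresses, widths
   and the status bit below are invented purely for illustration. */
#define LP_STATUS   (*(volatile uint8_t  *)0x1F00)  /* bit 0 = latch valid  */
#define LP_X_LATCH  (*(volatile uint16_t *)0x1F02)  /* latched dot counter  */
#define LP_Y_LATCH  (*(volatile uint16_t *)0x1F04)  /* latched line counter */

/* Poll once per frame; returns 1 and fills in x/y if the photodiode
   fired during the last scan, 0 otherwise. */
static int read_light_pen(unsigned *x, unsigned *y)
{
    if (!(LP_STATUS & 0x01))
        return 0;
    *x = LP_X_LATCH;    /* horizontal position, in dot clocks */
    *y = LP_Y_LATCH;    /* vertical position, in scanlines    */
    LP_STATUS = 0x01;   /* hypothetical: write 1 to clear the latch */
    return 1;
}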

> I have never worked on this kind of problem before, but basically
> I’d like to regularly blank the screen (starting the blank at a
> given time)

Well, the CRTC of the video chipset does that for you… (You can’t
mess with the blanking anyway, as that would just confuse the
monitor. Timings have to be stable.)

> and detect when the photodiode responds to light. Knowing the frame
> rate, I could then work out what percentage of the screen had been
> drawn. But once this is done, what is the method to get the actual
> (x, y) position on the screen from this information?

The transformation is the easy part. It’s basically just like
converting a linear frame buffer address into coordinates:

x = pos % pitch;
y = pos / pitch;

You’ll have to compensate for the vertical and horizontal blanking and
retrace timing, of course. You can probably derive the information
required from the video timings used to configure the display. (That
is, “modelines” or whatever your platform uses.)
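
As a rough sketch of that compensation, assuming the latched value is
a dot clock count starting at the leading edge of vertical sync, and
using modeline style field names (hdisplay, hsync_start, htotal and
so on) as placeholders for whatever timing data your platform
provides:

/* Modeline-style timings; horizontal values in dot clocks, vertical
   values in scanlines. */
struct mode_timing {
    unsigned hdisplay, hsync_start, hsync_end, htotal;
    unsigned vdisplay, vsync_start, vsync_end, vtotal;
};

/* Convert a dot clock count, measured from the leading edge of
   vertical sync, into visible screen coordinates.  Returns 0 if the
   pulse fell inside a blanking/retrace interval. */
static int count_to_xy(unsigned long count, const struct mode_timing *t,
                       unsigned *x, unsigned *y)
{
    unsigned line = count / t->htotal;   /* scanlines since vsync     */
    unsigned dot  = count % t->htotal;   /* dots since the hsync edge */

    /* From the sync edge, the sync pulse and back porch pass before
       the visible area starts. */
    unsigned hblank = t->htotal - t->hsync_start;
    unsigned vblank = t->vtotal - t->vsync_start;

    if (dot < hblank || line < vblank)
        return 0;
    *x = dot - hblank;
    *y = line - vblank;
    return *x < t->hdisplay && *y < t->vdisplay;
}

The field names mirror an X11 modeline here; on other platforms the
same numbers just come under different names.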

> Could the SDL library give me access to information that could help
> me solve this problem?

No. Nor can the underlying drivers on most platforms. Unless you have
an RTOS and the light pen generates an IRQ, you need hardware support
for this. Trying to extract video timing and match that with input
events in software on a normal OS is futile, I think. You’d be lucky
to get a vertical accuracy of +/- some dozen lines, and no horizontal
reading.
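
To put rough numbers on that: an ordinary 640x480 @ 60 Hz mode has
about 525 scanlines per frame in total, so one scanline takes roughly
16.7 ms / 525, or about 32 µs. Even half a millisecond of interrupt
and scheduling latency smears the vertical reading across some 15
lines, and the horizontal position (tens of nanoseconds per pixel) is
hopeless.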

As to hardware, you don’t have to have it integrated in the video
chipset, although that would be handier and more accurate. (You can
drive a counter from the pixel clock, and you know exactly where/when
video and blanking starts and ends.)

The alternative is to hook some logic up to the pen and the video
cable. Derive the video timing from the sync pulses, and output
position based on that and the timing of the pulses from the light
pen. If you can find light pens and other similar devices for use
with standard PC hardware (i.e. anything that doesn’t come with a
special video card with a light pen input), I’d guess this is how
they work.
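
As a sketch of the arithmetic such logic (or its host side
counterpart) would end up doing, assuming it reports the time from
the last vertical sync to the pen pulse, and with all the field names
invented for the example:

#include <math.h>

/* Timing derived from the sync signals plus calibration; the names
   and the choice of microseconds are just for this sketch. */
struct sync_timing {
    double line_period_us;    /* time between hsync pulses              */
    double h_offset_us;       /* hsync edge to first visible pixel      */
    double h_active_us;       /* duration of the visible part of a line */
    unsigned v_offset_lines;  /* vsync to the first visible scanline    */
    unsigned width, height;   /* visible resolution in pixels           */
};

/* t_us is the measured time from the last vertical sync to the light
   pen pulse.  The result may fall outside the visible area if the
   pulse arrived during blanking. */
static void pulse_to_xy(double t_us, const struct sync_timing *s,
                        int *x, int *y)
{
    double line   = floor(t_us / s->line_period_us);
    double within = t_us - line * s->line_period_us - s->h_offset_us;

    *y = (int)line - (int)s->v_offset_lines;
    *x = (int)(within / s->h_active_us * (double)s->width);
}

The h_offset_us and v_offset_lines terms are exactly what the sync
pulses alone can’t tell you, which is where the calibration below
comes in.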

Problem is that you can’t be totally sure where the actual image is by
just looking at the sync signals, so the logic would have to be
calibrated manually, adjusted based on video timing data from the
driver, or the logic has to be “smart” and look at the RGB signals,
assuming the border is black and the image isn’t. (That’s what
monitors with “quick auto adjust” do. Try it with a black screen. ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -'
http://olofson.net --- http://www.reologica.se