Glenn, I did read your responses RE: Audio consumed

Glenn, I did read your responses RE: Audio consumed.

I was guessing that picking a small enough audio buffer might allow me to keep within some small tolerance of where I want to be time-wise.

Like you, I am disappointed that this Cross-Platform API seems to fall short in such a KEY area of sound and sync.

Anyone else have the same consternation? Workarounds?

I have an idea (might not be too pretty though)… you could do something
like cut graphics and audio into 3-second chunks… then… when BOTH sound
and graphics have finished… it starts the next 3-second section. You could
make it shorter than 3 seconds so that it would resync more often.

Kinda an ugly hack, I know, but it might work.


Not only can’t SDL do it; PortAudio and OpenAL can’t do it. (PortAudio
has calls for sync, but they’re not precise. OpenAL has nothing
whatsoever.) Fmod, a proprietary cross-platform audio library, can’t do
it with streams, either. Yet it’s easy to do with DirectSound and it
seems fairly fundamental. Baffling.

I’ve fallen back on writing nonportable sound code for individual archs,
having given up on portable libraries in this area …

On Tue, Dec 10, 2002 at 03:22:11PM -0500, Mark Whittemore wrote:

Like you, I am disappointed that this Cross-Platform API seems to fall short in such a KEY area of sound and sync.


Glenn Maynard


I seem to have missed the problem statement. What is it you are trying
to sync, and to what? I spent part of a weekend last spring fiddling with the
low-level SDL sound code and found it was easy to get much less than a
tenth of a second for start and stop times; millisecond start and stop
latency is closer to what I saw. It's just a matter of setting the buffer
size so that it holds roughly 1 millisecond of samples. The hardware HAS
to be fed on a regular basis, so after startup the sound stream callbacks
happen very regularly. Plus, you need to keep the stream running
even when you aren't playing anything. Starting the sound pipeline has
some built-in overhead. But it is pretty easy to send 0s, or white/pink
noise, when there are no sound samples pending.
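
For illustration, a minimal sketch of the setup Bob describes, against SDL
1.2's callback API; the buffer size, format, and the fetch_pending_samples()
helper are illustrative, not from this thread:

#include <string.h>
#include "SDL.h"

/* Hypothetical helper: copies up to len bytes of pending sample data
   into stream and returns the number of bytes actually written. */
extern int fetch_pending_samples(Uint8 *stream, int len);

static void fill_audio(void *userdata, Uint8 *stream, int len)
{
	int written = fetch_pending_samples(stream, len);
	/* Keep the stream running even when nothing is playing: pad with silence. */
	if (written < len)
		memset(stream + written, 0, len - written);
}

int open_audio(void)
{
	SDL_AudioSpec spec;
	spec.freq = 44100;
	spec.format = AUDIO_S16SYS;
	spec.channels = 2;
	spec.samples = 1024;	/* buffer size in sample frames; sizing discussed below */
	spec.callback = fill_audio;
	spec.userdata = NULL;
	if (SDL_OpenAudio(&spec, NULL) < 0)
		return -1;
	SDL_PauseAudio(0);	/* start feeding the hardware immediately */
	return 0;
}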

The only thing I can think of is that SDL is filling the whole buffer on
the card so you might be seeing a delay based on the sound card buffer
size rather than on your buffer size.

So, what am I missing?

	Bob Pendleton

I seem to have missed the problem statement. What is it you are trying
to sync, and to what? I spent part of a weekend last spring fiddling with the

StepMania, a DDR-ish game. Arrows scroll timed with music; the
scrolling is entirely controlled by the current music position
(the background music becomes the game’s time base), so we need
to be able to query the current sound position accurately or we
get jerky scrolling and poor game timing (for scoring).

low-level SDL sound code and found it was easy to get much less than a
tenth of a second for start and stop times; millisecond start and stop
latency is closer to what I saw. It's just a matter of setting the buffer
size so that it holds roughly 1 millisecond of samples. The hardware HAS
to be fed on a regular basis, so after startup the sound stream

The buffer size must be at least 1024 samples at 44 kHz (23 ms) on my system
or I get choppy sound. I've heard that OS X can handle much smaller buffers
(which would be possible with more finely-grained scheduling), but Windows
generally can't. 1 ms is certainly completely impossible in Windows, since
the sound thread won't get CPU cycles every millisecond.
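
(That's 1024 / 44100 ≈ 23.2 ms, which is where the 23 ms figure used
throughout the rest of the thread comes from.)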

callbacks happen very regularly. Plus, you need to keep the stream running

They happen regularly, but not precisely. (see below)

even when you aren't playing anything. Starting the sound pipeline has
some built-in overhead. But it is pretty easy to send 0s, or white/pink
noise, when there are no sound samples pending.

I keep the sound playing at all times, and just feed in data when sounds
start. (Actually, with my DSound implementation that uses hardware
mixing, I do start streams right when a sample starts; this allows
sounds to have nearly zero startup latency. But that’s a different
topic.)

The only thing I can think of is that SDL is filling the whole buffer on
the card so you might be seeing a delay based on the sound card buffer
size rather than on your buffer size.

With the DSound implementation, the buffer on the card is equal to 2 *
the buffer size you specify. (Simple double buffering.)

The problem is that we can’t assume that sound will be played at a
constant time (the buffer size) after the callback is called. For
example, take a look at the DirectSound implementation:

while ( cursor == playing ) {
	/* FIXME: find out how much time is left and sleep that long */
	SDL_Delay(10);
}

This means the callback might be called up to 10ms late (meaning the
delay until the sound is played is 10ms less), plus any other delays.
(Even without this, using the blocking method of the notify code,
scheduling latencies introduce unpredictable delays–fixing this FIXME
won’t fix the problem).

That is, it's reasonable to say that, with a 23 ms buffer, "the
sound you return from your sound callback will be heard approximately 23
ms later", but it's wrong to say "… exactly 23 ms later"; it might be
heard as little as 13 ms later (or less, depending on the mood of the
system scheduler and other factors).

(It’s probably correct to say “at most 23 ms later”, at least with the
DSound implementation, but that doesn’t help.)

My code looked something like this. This is conceptual, of course, and
looks little like the actual code:

void callback(…)
{
	/* The data we're about to feed starts samples_of_buffer_latency
	   samples ahead of what's hitting the speakers right now. */
	current_play_sample = next_fed_sample - samples_of_buffer_latency;
	current_play_time = GetTime();
}

int get_current_time()
{
	float time_since_callback = GetTime() - current_play_time;
	int samples_since_callback = time_since_callback * samples_per_second;
	return current_play_sample + samples_since_callback;
}

It returned a time that was reasonable; it didn't drift at all, even
through buffer underruns, but the times were wrong because the callback
wasn't called at the exact moment at which "samples_of_buffer_latency"
would have been correct (due to, for example, that SDL_Delay call).

(And we can't time based on the time since we started playing, since
buffer underruns throw that off, and it's liable to drift since it has
nothing to tie it to the actual output.)

On Tue, Dec 10, 2002 at 08:56:51PM -0600, Bob Pendleton wrote:


Glenn Maynard

I didn't really get any responses out of this one, but I think it's a good
idea… are there any reasons why it's unfavorable?


I don't see how it could provide any better sync than just timing against the
callback.

On Tue, Dec 10, 2002 at 09:48:18PM -0800, Atrix Wolfe wrote:

I didn't really get any responses out of this one, but I think it's a good
idea… are there any reasons why it's unfavorable?


Glenn Maynard

OK, now I understand the problem. In an earlier posting I thought you
said you needed tenth-of-a-second (100 millisecond) accuracy; in this
last posting it seems like you are saying you need more like 10
millisecond accuracy. Is that correct? 'Cause otherwise, your technique
based on counting buffers that have been loaded gives you a clock that
is always accurate to within ~46 milliseconds, and seems like it would be
accurate to ±11.5 milliseconds. Even though the callbacks do not occur
at precise intervals, they are directly tied to the sound hardware, which
IS precisely timed. So an imprecise callback can tell you about a
precise event.

As you point out, in a simple double buffered system, when you load the
second buffer, you know the first buffer you loaded is playing so the
music is playing in the range of 0 to 23 milliseconds. When you load the
third buffer, the first buffer finished and the music is playing in the
range of 23 to 46 milliseconds, and so on.

It seems to me that if you generate video frames showing the markers at
the mid-point of the 23 millisecond buffer time, the markers are
accurate to ± ~11.5 milliseconds, which is smaller than the frame time at
60 frames/second. In a double-buffered setup you will have to update
the back buffer in advance of the music to make sure it shows up on the
screen in sync with the music.

How good does your synchronization between the sound and the screen have
to be? It can't be better than the frame time of the video display.


For actual display, it's not so much that it needs to be very close, as such;
it's that there has to be little jitter. At high scroll speeds, the
display can be scrolling at a rate of one screen height in half a second
(or more). This means that an error of ±15 ms can be visible as
scrolling that doesn't appear smooth:

t=1 X
t=2    X
t=3       X
t=4          X

vs

t=1 X
t=2   X
t=3       X
t=4            X

t=2 here might have an error of -10 ms; t=4 might have an error of
+10 ms. (Of course, the differences would be subtle–a few pixels up or
down–but the result is the same, though subtler.)

Scoring is also a problem. Scoring is based on the music; in some
modes, timing is very strict, and adding an extra ±10 ms of error to the
timing (on top of other factors) is bad.

Querying the play cursor seems to give an accuracy down to about 1.5ms
with DirectSound (comparing the results with the system clock and
ignoring drift.)
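
For reference, the DirectSound call in question is
IDirectSoundBuffer::GetCurrentPosition. A minimal sketch; the buffer and
frame-size variables here are assumed to exist elsewhere:

#include <dsound.h>

extern LPDIRECTSOUNDBUFFER buf;	/* the streaming secondary buffer */
extern DWORD bytes_per_frame;	/* e.g. 4 for 16-bit stereo */

/* Ask the hardware where the play cursor is, in sample frames. */
DWORD get_play_position(void)
{
	DWORD play_cursor, write_cursor;
	if (FAILED(IDirectSoundBuffer_GetCurrentPosition(buf, &play_cursor, &write_cursor)))
		return 0;
	return play_cursor / bytes_per_frame;
}

Note that the cursor is an offset into the circular buffer, so a streaming
implementation also has to count wraparounds.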

Also, there doesn't seem to be any guarantee that SDL will be
double-buffering, or about the actual timing of the callback call.
(For example, David Olofson's recent post; also, there's no telling how
many chunks of the given buffer size will exist; generally 2, but that's
not guaranteed anywhere.)



Glenn Maynard

It still seems like you are trying to use the wrong thing as your time
base. The sound hardware has very precise timing, even though you can't
see it. And the video timing is also fixed and precise, even though you
may not be able to see it. But the system clock is both precise and
visible. As long as you can get a good estimate of when the music
started playing, you can use the current time to generate frames and to
tell when something happens in the music. At least it seems that way to
me.

	Bob Pendleton

P.S.

Can people actually do anything so precisely that 1.5 milliseconds
matters?


It still seems like you are trying to use the wrong thing as your time
base. The sound hardware has very precise timing, even though you can't
see it. And the video timing is also fixed and precise, even though you
may not be able to see it. But the system clock is both precise and
visible. As long as you can get a good estimate of when the music

It can give a reasonable estimate, but it’s not nearly as precise as
querying the hardware.

Asking the hardware where the play cursor is is a very basic operation–any
hardware that does DMA streaming of sounds (everything) can do it, so I
don't see any reason it shouldn't be exposed by audio APIs. It seems
like you're saying "the portable libraries can't do it, so you're doing
something wrong". I say that this is the most robust and accurate method,
and that the portable libraries are at fault; the system libraries expose
it, and so should the libraries that layer on top of them. (I was fairly
disappointed in open source when I discovered that neither SDL, PortAudio
nor OpenAL did this.)

(It might be a bit harder to do this when outputting via /dev/dsp; but
on one hand, that’s completely obsolete; and on another, it could–and
might, I don’t know–expose the number of bytes played via an ioctl.)
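
For what it's worth, OSS does expose this through the SNDCTL_DSP_GETOPTR
ioctl; a minimal sketch (fd is assumed to be an already-open /dev/dsp
descriptor, error handling minimal):

#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* Returns the total number of bytes the device has played, or -1. */
long bytes_played(int fd)
{
	struct count_info info;
	if (ioctl(fd, SNDCTL_DSP_GETOPTR, &info) == -1)
		return -1;
	return info.bytes;
}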

started playing, you can use the current time to generate frames and to
tell when something happens in the music. At least it seems that way to
me.

We don’t need to know when something happens; we need to know the
current position, since we’re scrolling arrows with the music. We’re
not timing against specific events.

Can people actually do anything so precisely that 1.5 milliseconds
matters?

(Do you mean 10 ms? That's the difference, here.) No; it's just another
accumulation of error, between the sound being off, system scheduling,
not handling input until the next frame, and so on.

On Thu, Dec 12, 2002 at 01:01:33PM -0600, Bob Pendleton wrote:


Glenn Maynard

It sounds like you know a lot about this… why not add it to SDL?



It can give a reasonable estimate, but it’s not nearly as precise as
querying the hardware.

Asking the hardware where the play cursor is is a very basic operation–any
hardware that does DMA streaming of sounds (everything) can do it, so I
don’t see any reason it shouldn’t be exposed by audio APIs. It seems
like you’re saying “the portable libraries can’t do it, so you’re doing
something wrong”.

Sorry you got that impression. It is not what I was saying. What I am
saying is more like, "I don't understand why this is important, but I see
that it could be important." So I ask questions, including detailed
questions, to find out why it is important and why my preconceptions on
the subject are wrong. I often learn more by understanding why something
is wrong than by being told what is right. And there is always the
chance that the other guy can't justify what he is saying, and then I
also learn something. I'm very learning-oriented :)

I say that this is the most robust and accurate method,
and that the portable libraries are at fault; the system libraries expose
it, and so should the libraries that layer on top of them. (I was fairly
disappointed in open source when I discovered that neither SDL, PortAudio
nor OpenAL did this.)

There is no doubt that the hardware is the best source of information
for hardware based operations. All I can say is that I don’t think that
anyone working on those projects thought of the problem you are facing.
I’ve been working with games and multimedia since the Apple II and this
is the first time I’ve seen a need for this capability. Low latency
sound playing, yes. You want to hear that fire ball at the same time you
see it. But detailed information about where the stream is at any given
time? No, this is the first time I’ve seen anyone claim a need for that.

(It might be a bit harder to do this when outputting via /dev/dsp; but
on one hand, that’s completely obsolete; and on another, it could–and
might, I don’t know–expose the number of bytes played via an ioctl.)

started playing, you can use the current time to generate frames and to
tell when something happens in the music. At least it seems that way to
me.

We don’t need to know when something happens; we need to know the
current position, since we’re scrolling arrows with the music. We’re
not timing against specific events.

Can people actually do anything so precisely that 1.5 milliseconds
matters?

(Do you mean 10ms? That’s the difference, here.) No; it’s just another
accumulation of error, between the sound being off, system scheduling,
not handling input until the next frame, and so on.

It seems to me that you need to know when the music started and how long
it has been playing since then. The point I was trying to make is that
the sound hardware has the same timing accuracy as the system clock; you
just can't access it through SDL. So, what you really need is to know when
that first sample actually hits the DAC. If you have that, any accurate
clock will give you the time base you need. Is that correct?
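
A sketch of the time base Bob is proposing, assuming start_time can somehow
be set at the moment the first sample hits the DAC (GetTime() as in Glenn's
earlier pseudocode); Glenn's reply below explains where this falls short:

extern float GetTime(void);	/* seconds, as in the earlier sketch */

static float start_time;	/* when the first sample hit the DAC */
static const int samples_per_second = 44100;

/* Current music position derived purely from the system clock. */
int current_music_sample(void)
{
	return (int)((GetTime() - start_time) * samples_per_second);
}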

	Bob Pendleton

On Thu, 2002-12-12 at 14:43, Glenn Maynard wrote:

Sorry you got that impression. It is not what I was saying. What I am
saying is more like, "I don't understand why this is important, but I see
that it could be important." So I ask questions, including detailed
questions, to find out why it is important and why my preconceptions on
the subject are wrong. I often learn more by understanding why something
is wrong than by being told what is right. And there is always the
chance that the other guy can't justify what he is saying, and then I
also learn something. I'm very learning-oriented :)

Sure; I've just gotten that impression from a few people discussing this
problem. :)

see it. But detailed information about where the stream is at any given
time? No, this is the first time I’ve seen anyone claim a need for that.

This is a beat/rhythm game, so it does have stronger sync needs than
most other games.

It seems to me that you need to know when the music started and how long
it has been playing since then. The point I was trying to make is that
the sound hardware has the same timing accuracy as the system clock; you
just can't access it through SDL. So, what you really need is to know when
that first sample actually hits the DAC. If you have that, any accurate
clock will give you the time base you need. Is that correct?

Not quite. The sound thread might underrun (problem reading a CD, scheduler
in a bad mood, whatever). We need to make sure we don't lose sync with the
audio when that happens.

There are two ways to do that: the time base of the game can follow the
music, or the music can follow the time base. Some games do the latter;
for example, if the audio skips 35 ms, the audio thread jumps forward 35
ms. That's a bit tricky, and I haven't thought about implementing it
that way. (It would be a complete overhaul of the sound system, and the
game is already written to use the former method.) This is commonly
used on arcade and console versions of these games, though, I believe.

On Thu, Dec 12, 2002 at 06:48:01PM -0600, Bob Pendleton wrote:


Glenn Maynard
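
For completeness, a conceptual sketch of the second approach Glenn describes
(the music follows the time base): after an underrun, skip the audio stream
forward so it catches back up with the steady clock. All names here are
illustrative, not StepMania's actual code:

extern int current_music_sample(void);	/* the steady, clock-driven time base */
extern int stream_position;	/* samples of music handed to the device so far */
extern void stream_seek_forward(int samples);	/* drop samples from the decoder */
extern void stream_read(short *out, int samples);

/* Audio callback: if an underrun left us behind the clock, drop the
   skipped samples instead of playing them late, so sync is preserved. */
void fill_buffer(short *out, int samples)
{
	int expected = current_music_sample();
	if (stream_position < expected)
	{
		stream_seek_forward(expected - stream_position);
		stream_position = expected;
	}
	stream_read(out, samples);
	stream_position += samples;
}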