[…]
I have used SDL_GetTicks() and SDL_GetPerformanceCounter() to measure (for
example) 100 ms and then play the sound. But the problem is that playback is
not consistent; even a few ms of timing difference can be heard when sounds
are played rapidly.
[…]
I suspect few actually understand the problem, and most just
(incorrectly) assume it’s in the nature of things and work around it.
I don’t actually know of any games, other than my own, that aren’t
“hacking” around the problem by using looped samples for machine guns,
engines and the like.
Anyway, what happens in most sound engines is that the API calls either
queue up messages, or lock the engine and change its state directly.
The next time the audio callback runs, which is typically every 20 ms
or so, new sound effects are started, right at the start of the audio
buffer. So, all your commands are effectively quantized to whatever
buffer period the audio output is configured for!
A low latency musical application typically runs audio processing an
order of magnitude more frequently (less than 1 ms is common), which
reduces this issue to acceptable levels for most applications - but
unfortunately, you can’t rely on that “solution” (it’s still a
hack…) unless you’re on a machine that’s equipped and configured for
serious music production.
What you can do is timestamp the messages from the API, and delay them
as needed in the mixer to maintain constant latency. That takes the
buffer setting out of the equation; more buffering only increases
latency - not quantization.
However, that still leaves your timing dependent on the game's video
frame rate. If you just check the current time in the mixer API calls,
timing still gets quantized, only now it's to the rendering frame
rate. If you're running a fixed logic frame rate with
interpolation/"tweening," that's still not good enough!
The next step is to add explicit timestamping support in the API.
(That's what I'm doing in both generations of Audiality, used in Kobo
Deluxe and Kobo II respectively.) Instead of stamping commands with
actual (wall-clock) time, you derive audio command timestamps from
game logic time.
If you tune this carefully enough, you should theoretically get away
with triggering sound effects with millisecond accurate timing, or
even better - but realistically, it’s never going to be that accurate
on any normal operating system. So, I’m going to admit that I’m
cheating a little in Kobo II: That 100 RPS minigun is indeed all
realtime synthesis and scripted bullet by bullet - but that script
runs in the audio engine context, and just takes start/stop commands
from the game logic scripts. 
TL;DR: This issue can of course be fixed, but it's not trivial. You'd
have to hack SDL_mixer (not sure how deeply) and change its API
slightly.

On Thu, Dec 5, 2013 at 8:10 PM, JurisL85 wrote:
--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
‘---------------------------------------------------------------------’