Hi folks,
A while ago I posted a message here asking for help with resampling a
sound sample in order to raise or lower its pitch. I tried it back then
and just couldn’t get it working. Now I’ve tried again and, once again,
the result is all messed up. It’s totally driving me nuts. Here’s what
I have:
// Function expects a 16-bit stereo sample (4 bytes/sample frame)
Mix_Chunk* CSound::Change_Pitch(Mix_Chunk *Input_Sample, float Factor) {
    Mix_Chunk *Sample_Modified;
    if (Input_Sample->allocated != 1) {
        throw CGenericException(std::string("CSound::Change_Pitch"),
                                std::string("Input sample not allocated."));
    }
    Sample_Modified = (Mix_Chunk*)malloc(sizeof(Mix_Chunk));
    Sample_Modified->allocated = 1; // Not yet, but will soon be
    Sample_Modified->alen = (Uint32)((Input_Sample->alen / 4 * Factor) * 4);
    // alen must be divisible by 4
    Sample_Modified->abuf = (Uint8*)malloc(Sample_Modified->alen * sizeof(Uint8));
    Sample_Modified->volume = Input_Sample->volume;
    // Zero out destination sample just to be sure
    for (Uint32 i = 0; i < Sample_Modified->alen; i++) {
        Sample_Modified->abuf[i] = 0;
    }
    Uint32 Resample;
    Uint16 Links, Rechts; // left/right channel values
    for (Uint32 i = 0; i < Input_Sample->alen / 4; i += 4) {
        Links  = (Input_Sample->abuf[i+0] * 256) + Input_Sample->abuf[i+1];
        Rechts = (Input_Sample->abuf[i+2] * 256) + Input_Sample->abuf[i+3];
        // printf("L/R: %d %d [%d %d %d %d]\n", Links, Rechts,
        //        Input_Sample->abuf[i+0], Input_Sample->abuf[i+1],
        //        Input_Sample->abuf[i+2], Input_Sample->abuf[i+3]);
        Resample = (Uint32)((float)i * Factor);
        Resample /= 4; // To get the right offset
        Resample *= 4;
        Sample_Modified->abuf[Resample+0] = Input_Sample->abuf[i+0];
        Sample_Modified->abuf[Resample+1] = Input_Sample->abuf[i+1];
        Sample_Modified->abuf[Resample+2] = Input_Sample->abuf[i+2];
        Sample_Modified->abuf[Resample+3] = Input_Sample->abuf[i+3];
    }
    // Debug output of the modified sample
    for (Uint32 i = 0; i < Sample_Modified->alen / 4; i += 4) {
        Links  = (Sample_Modified->abuf[i+0] * 256) + Sample_Modified->abuf[i+1];
        Rechts = (Sample_Modified->abuf[i+2] * 256) + Sample_Modified->abuf[i+3];
        printf("L/R: %d %d [%d %d %d %d]\n", Links, Rechts,
               Sample_Modified->abuf[i+0], Sample_Modified->abuf[i+1],
               Sample_Modified->abuf[i+2], Sample_Modified->abuf[i+3]);
    }
    return Sample_Modified;
}
It’s working a little bit (currently it only supports Factor < 1 and
16-bit stereo samples). Here’s what happens: I created an input sample
which has the left channel completely silenced (all 0s). When I
activate any of the printfs, it correctly displays
L/R: 0 something [0 0 something something]
It seems as if everything was okay. But when I play the sample (say the
function was called with Factor 0.9), I hear, in sequence:
- the faster (upsampled) sound on the right channel
- white noise of the same length on the right channel
- the faster (upsampled) sound on the left channel
- white noise of the same length on the left channel
How can that be? Left/right channel data is interleaved, isn’t it? As I
mentioned, it’s totally driving me crazy; things like this make game
programming so little fun :-(((
I hope somebody can point out what I did wrong.
Greetings
Joe