I have a question concerning page 80 of John Hall’s “Programming Linux
Games”.
He says that we should use the “pitch” field, not the “w” field, to
calculate offsets into the pixel buffer.
The reason he gives is that if we request a video mode that our video
card doesn’t support, SDL tries to emulate what we want by using a
higher resolution. “w” is the horizontal width of our emulated surface,
while “pitch” is the actual width of the framebuffer.
But then he uses the following code to draw a gradient:
…
screen = SDL_SetVideoMode(256, 256, 16, 0);
…
for (int x = 0; x < 256; ++x)
{
    for (int y = 0; y < 256; ++y)
    {
        Uint16 pixel_color;
        int offset;

        pixel_color = CreateHiColorPixel(screen->format, y, 0, x);
        offset = screen->pitch / 2 * y + x;
        raw_pixels[offset] = pixel_color;
    }
}
What is that “/2” doing in there? If pitch is really the width of the
framebuffer, shouldn’t the offset be:
offset = screen->pitch * y + x;
The program seems to work, so I’m obviously not understanding
something…

Thanks!
pete–
“Nobody steals our chicks. And lives.” – Duke Nukem (played on Linux)
GPG Instructions: http://www.dirac.org/linux/gpg
GPG Fingerprint: B9F1 6CF3 47C4 7CD8 D33E 70A9 A3B9 1945 67EA 951D