Graphics Programming unclear to me, please help!

Hi,

I’m just getting started in graphics programming using the SDL
library and also currently reading the online book, Programming Linux
Games, found at:
http://www.overcode.net/~overcode/writing/plg/local/release/plg-second-printing-update.pdf

I’m currently up to chapter 4 now and I’m a bit unclear about drawing
stuff to the surface. More specifically, I’m not understanding the code
that’s used to draw individual pixels to the screen that’s found on page
79 of that book:

Uint16 CreateHicolorPixel(SDL_PixelFormat *fmt, Uint8 red,
                          Uint8 green, Uint8 blue)
{
    Uint16 value;

    /* This series of bit shifts uses the information from the
       SDL_PixelFormat structure to correctly compose a 16-bit pixel
       value from 8-bit red, green, and blue data. */
    value = ((red >> fmt->Rloss) << fmt->Rshift) +
            ((green >> fmt->Gloss) << fmt->Gshift) +
            ((blue >> fmt->Bloss) << fmt->Bshift);

    return value;
}

I understand the code but at the same time I don’t understand the code.
I’ve used C++ long enough to know syntactically what is happening line by
line but I don’t understand what is going on as far as the bigger picture
is concerned. I don’t see why it’s doing bitshift for the different red,
green, and blue values and then adding them together. I’m mainly having
trouble visualizing what it’s really doing and why it works.

Also the main() that follows after this code segment, specifically in the
nested for loops I’m not completely understanding either:

for (x = 0; x < 256; x++)
{
    for (y = 0; y < 256; y++)
    {
        Uint16 pixel_color;
        int offset;

        pixel_color = CreateHicolorPixel(screen->format, x, 0, y);
        offset = (screen->pitch / 2 * y + x);
        raw_pixels[offset] = pixel_color;
    }
}

I get the looping ranges: it goes line by line until it covers the entire
dimensions of the window. But I don’t quite get what’s happening inside
the loop itself. Like, what exactly is offset storing, and why do we need
it? I also don’t quite see why the author suggested using the screen
pitch for the calculation instead of the screen width itself.

One could probably get away with not knowing how to do this part (drawing
individual pixels to a screen) but I’d rather not take that route; I don’t
think I’m doing myself any favors with that kind of mentality. So if
anyone can help clarify these things for me or take a different approach
in explaining it that’ll help me understand I’d really appreciate it.

I also noticed he mentioned some terms in the earlier chapters of the
book that I’m not familiar with. More precisely, can anyone tell me what
this business with little indy and big indy is all about? There were some
other terms that I wasn’t familiar with but they escape me at the moment.

Thanks


— Vivi Orunitia wrote:

Hi,

Hi.
snips

… More specifically, I’m not understanding the code that’s used to draw
individual pixels to the screen that’s found on page 79 of that book:

Uint16 CreateHicolorPixel(SDL_PixelFormat *fmt, Uint8 red,
                          Uint8 green, Uint8 blue)
{
    Uint16 value;

    /* This series of bit shifts uses the information from the
       SDL_PixelFormat structure to correctly compose a 16-bit pixel
       value from 8-bit red, green, and blue data. */
    value = ((red >> fmt->Rloss) << fmt->Rshift) +
            ((green >> fmt->Gloss) << fmt->Gshift) +
            ((blue >> fmt->Bloss) << fmt->Bshift);

    return value;
}

The function creates a 16-bit pixel from the RGB components you input.
Below is an example 16-bit version using 5 bits per component (leaving 1
bit unused), which would be for one specific system (i.e., not platform
independent):

value = ((red >> 3) << 11)
+ ((green >> 3) << 6)
+ ((blue >> 3) << 1);

The first bitshift (>> 3) reduces the 8-bit component to only 5 bits.
The fmt->[RGB]loss fields tell how many bits of each component, in the
supplied format, are thrown away.

The next bitshift (<< 11, << 6, and << 1) shifts the component to where
it’s supposed to be in the pixel value. The fmt->[RGB]shift fields are
platform independent, whereas my example bitshifts are not.

For visualization, let’s step through the process using
binary numbers, doing all the related steps together.

We start with:

red   = 10011010
green = 01000100
blue  = 00101000

or:

red   = RRRRrrrr
green = GGGGgggg
blue  = BBBBbbbb

(big = more significant :))

We then apply the fmt->[RGB]loss shifts; let’s assume 3
for all of these, like my example did. Now we’re left with:

red   = 10011
green = 01000
blue  = 00101

or:

red   = RRRRr
green = GGGGg
blue  = BBBBb

Then we shift them over by fmt->[RGB]shift. Let’s
assume 11, 6, and 1, like my example did. Now we have:

red   = 10011 00000000000
green = 01000 000000
blue  = 00101 0

or:

red   = RRRRr00000000000
green = GGGGg000000
blue  = BBBBb0

Now let’s add 0’s in front up to 16 bits to help
even things out for display in text :) :

red   = 1001100000000000
green = 0000001000000000
blue  = 0000000000001010

or:

red   = RRRRr00000000000
green = 00000GGGGg000000
blue  = 0000000000BBBBb0

and we add everything to get:

ret = 1001101000001010

or:

ret = RRRRrGGGGgBBBBb0

for (x = 0; x < 256; x++)
{
    for (y = 0; y < 256; y++)
    {
        Uint16 pixel_color;
        int offset;

        pixel_color = CreateHicolorPixel(screen->format, x, 0, y);
        offset = (screen->pitch / 2 * y + x);
        raw_pixels[offset] = pixel_color;
    }
}

… what exactly is offset storing, and why do we need it?

screen->pitch is the number of bytes between lines of
the surface. He converts this to the number of pixels
per line by assuming the surface is 2 bytes per pixel,
and simply taking how many bytes there are per line
(screen->pitch) and dividing by the # of bytes per
pixel.

We need this because the image is really stored as a
linear array, not in a real 2D layout, because memory is
linear, not 2D :).

So given this 4x2 image (one character per pixel):

ABCD
EFGH

it’s stored in memory as the linear sequence:

ABCDEFGH

In this case, screen->pitch would be 4 (or, at 2 bytes per
pixel, 8). To find the pixel at (x,y), we look in the array
slot [y * pixels_per_line + x], where pixels_per_line is the
pitch divided by the bytes per pixel.

… can anyone tell me what this business with little indy and big indy
is all about?

It SOUNDS like he means little and big endian, one of
many reasons the first function looks so weird.

Simply put, different processor types (and graphics
cards?) store words (4-byte things like int and
pointers on 32-bit systems) in different byte orders.

Simple example: the number 0xFFBB8844 in physical
memory could look like:

byte[0] = 0xFF
byte[1] = 0xBB
byte[2] = 0x88
byte[3] = 0x44

whereas on another system/piece of hardware, the same
number would be stored as:

byte[0] = 0x44
byte[1] = 0x88
byte[2] = 0xBB
byte[3] = 0xFF

So, to make sure the code works right, he uses the
aforementioned shifts. Thus, starting with:
R=0xFF
G=0x88
B=0x44

and converting to 32 bits (not 16), the result will get stored as:

byte[0]=0xFF
byte[1]=0x88
byte[2]=0x44
byte[3]=0x00

on some systems, and:

byte[0]=0x00
byte[1]=0x44
byte[2]=0x88
byte[3]=0xFF

on others.

Time for me to shut up :p.

#value = ((red >> 3) << 11)
#      + ((green >> 3) << 6)
#      + ((blue >> 3) << 1);

#the first bitshift (>> 3) reduces the 8-bit component
#to only 5 bits. The fmt->[RGB]loss is how many bits of
#each component, in the supplied format, are ignored.

Just a comment: a >> 3 mathematically means a/8 (integer
division). Therefore, a color with intensity 255 would have value
255/8 = 31 after this conversion. This converts from a palette of
256 distinct color intensities to one with 32 different intensities.
To store 32 different intensities (0 through 31), we need 5 bits:

16 + 8 + 4 + 2 + 1 = 31.
/Olof

He is using the pitch instead of the width
because in some resolutions the pitch might
be less than the image due to alignment
optimizations.

Anders Folkesson wrote:

— Vivi Orunitia wrote:
Hi,

Hi to you :)


I’m currently up to chapter 4 now and I’m a bit unclear about drawing
stuff to the surface. More specifically, I’m not understanding the code
that’s used to draw individual pixels to the screen that’s found on page
79 of that book:

I’m gonna give this a shot here, BUT I won’t give details, partly because I’m too lazy right now to look the stuff up in the SDL docs, and partly because you should do that yourself to learn about the SDL inner workings, so to say :)

And please, if I am making mistakes here, scream at me!!

If I get this correctly, CreateHicolorPixel() creates a 16-bit color from a 24-bit color (3*8 bits). And of course we’re going to lose some color in the process…

Uint16 CreateHicolorPixel(SDL_PixelFormat *fmt, Uint8 red,
                          Uint8 green, Uint8 blue)
{
    Uint16 value;

    /* This series of bit shifts uses the information from the
       SDL_PixelFormat structure to correctly compose a 16-bit pixel
       value from 8-bit red, green, and blue data. */
    value = ((red >> fmt->Rloss) << fmt->Rshift) +
            ((green >> fmt->Gloss) << fmt->Gshift) +
            ((blue >> fmt->Bloss) << fmt->Bshift);

    return value;
}

So my guess here (and it is left as an exercise for you, and me too actually, to look this up) is that Rloss, Gloss and Bloss are the numbers indicating how much color we will lose when translating the colors. We then shift the color components to their respective places, creating a bit pattern which has zeros in all the other positions. The + operations here just add the components together to create the final color… hope this makes sense to you…

snip

I also noticed he mentioned some terms in the earlier chapters of the
book that I’m not familiar with. More precisely, can anyone tell me what
this business with little indy and big indy is all about?
I think you mean big endian and little endian. It tells you how numbers are stored by your processor, least significant byte first or most significant byte first… look it up on Google and you should get enough hits to keep you busy for a week :) but there is not much to it really

okay… have a nice day :)
–Anders Folkesson



SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl



He is using the pitch instead of the width
because in some resolutions the pitch might
be less than the image due to alignment
optimizations.

In all cases that I can think of (except for using 8-bit palettized images),
the pitch would be greater than the image width, because multi-byte values
are used to store each pixel. A 16-bit image that’s 256 pixels wide will use
512 bytes per line, a 24-bit image 256 pixels wide would use 768 bytes per
line, and a 32-bit image that’s 256 pixels wide would use 1024 bytes per
line.

snip


I also noticed he mentioned some terms in the earlier chapters of the
book that I’m not familiar with. More precisely, can anyone tell me what
this business with little indy and big indy is all about?
I think you mean big endian and little endian. It tells you how numbers
are stored by your processor, least significant byte first or most
significant byte first… look it up on Google and you should get enough
hits to keep you busy for a week :) but there is not much to it really

Basically, Intel (and compatibles) and Alpha processors use what is called
little-endian format. In this format, multi-byte numbers are stored so that
the LSB (least significant byte) is stored first, followed by the rest of the
bytes in ascending order on up to the MSB (most significant byte). For
example, the number 0xFFEE55AA would be stored as:
byte 0 = 0xAA <-- LSB
byte 1 = 0x55
byte 2 = 0xEE
byte 3 = 0xFF <-- MSB

The other way of storing things is what is called big-endian format. As far as
I know, this format is used by pretty much every CPU maker except Intel (and
Intel compatibles), so you WILL have to deal with endian issues if you want
your games to run on both PCs and Macs while still accessing the pixel values
directly. In this format, multi-byte numbers are stored so that the MSB comes
first, followed by the rest of the bytes in descending order down to the LSB.
So, the number 0xFFEE55AA would be stored in memory like this:
byte 0 = 0xFF <-- MSB
byte 1 = 0xEE
byte 2 = 0x55
byte 3 = 0xAA <-- LSB

Now, there is one more format, what some people call PDP-endian, but it’s
weird, beyond the scope of this discussion, and no hardware uses it anymore
AFAIK.

-Sean Ridenour

Now, there is one more format, what some people call PDP-endian, but
it’s weird, beyond the scope of this discussion, and no hardware uses
it anymore AFAIK.

-Sean Ridenour

Is that the one where if you have, say, 0x11223344, it
gets stored as:
byte 0 = 0x11
byte 1 = 0x33
byte 2 = 0x22
byte 3 = 0x44

or the like? I believe that had to do mainly with the
16-bit nature of that old hardware…

Oh yeah… it’s called the “nuxi” problem (“unix” spelled in
middle-endian.) Go to http://nuxi.cs.ucdavis.edu/ for a brief history
lesson on it… unfortunately it seems to be down at the moment.

-Mark

On Fri, 19 Sep 2003, Michael Rickert wrote:

snip


Mark K. Kim
http://www.cbreak.org/
PGP key available on the website
PGP key fingerprint: 7324 BACA 53AD E504 A76E 5167 6822 94F0 F298 5DCE

Thanks for the responses everyone. That definitely cleared a few things up
for me.

I do have another question, though a bit off-topic maybe. I’m running into a
strange problem and am not sure what the cause could be. I notice on projects
where I use SDL, I can’t seem to use Turbo Debugger for source-level
debugging. When I try, all I get is a CPU assembly window when the
debugger starts up. Console applications seem to debug fine, though.

Any ideas what could be wrong here that’s preventing me from doing this?
Any solutions, suggestions, and comments on this problem are welcomed.

Thanks