Yesterday I was peacefully blitting parts from a nice big 50000x512 image into
my video card’s memory, when I noticed it would stop blitting when I got to
coords outside the range [-32767,32766].
Now what would cause that? Who describes memory offsets with 16-bit integers
these days??? In fact with memory sizes as big as they are these days, 64-bit
integers are becoming a necessity, what are 16-bit integers still doing
around?? Unbelievable.
I don’t want to rant, but this is ridiculous. Can someone
rebuild my confidence and tell me why this particular
design decision was made?
No comment on why SDL in particular made this design decision. However, in
my own proprietary library I decided to limit bitmaps to 16384x16384. This
is because calculating the size in bytes of a 32-bit 32768x32768 bitmap can
cause integer overflow, and because any number that large is more likely the
result of a bug than an actual dimension for a bitmap. Better safe than
sorry.
Actually you are sacrificing lots of (cutting-edge) functionality simply
because you don’t want to add error checking to your size calculation.
Remember, you are forgetting cases such as:
memory-mapped bitmaps (limited only by size of HDD)
oblong bitmaps 1x100000
Bitmaps are often not meant to be blitted to the screen in their entirety.
So I believe it is faulty thinking to assume that if a bitmap is much larger
than what current displays can show then it is a bug.
And this kind of erroneous thinking seems to be widespread. For example, GameDev.net states:
“In the case of SDL_Rect, x and y are Sint16s, and so they range from -32768
to +32767, which is more than enough to deal with rectangular areas of the
screen.”
> Actually you are sacrificing lots of (cutting-edge)
> functionality simply because you don’t want to add error
> checking to your size calculation.
If I wanted larger bitmaps (which I don’t - the largest bitmap I ever used
is the size of the screen, and the vast majority are 32x32), I could write a
wrapper class that constructs the large bitmap out of many small bitmaps.
Doing so would actually increase performance, since it would allow the small
bitmaps that are actually used to be loaded into video memory without
burdening the video memory with piles of unused data.
If you look at the TODO list, you will find
"In the jump from 1.2 to 1.3, we should change the SDL_Rect members to
int and evaluate all the rest of the datatypes."
But you should also keep in mind that this TODO list is more than 3 years
old …
> If you look at the TODO list, you will find
> "In the jump from 1.2 to 1.3, we should change the SDL_Rect members to
> int and evaluate all the rest of the datatypes."
Yep. Actually, if anybody wants to make that change, I would appreciate it.
The tricky part is going through the blitters and making sure everything still
works. There’s a bunch of “let’s keep data types small” code that I wrote way
back when I didn’t realize the performance difference between various data
types.
Remember that ‘int’ may not be the same size on all architectures, so we
should either use a specific (larger) size, or note that the size of the
rect structure and the sizes of areas they can represent may vary based
on the size of an int on that platform.
> But you should also keep in mind that this TODO list is more than 3 years
> old …
Shh!
-Sam Lantinga, Software Engineer, Blizzard Entertainment
Hey, Sam – I just got World of Warcraft, and for fun I watched the
credits. I saw your name and was like “Holy COW!!! I get emails from
that guy!!” Anyways, awesome job (so far hehe ^^), and thank whoever
put in the special thanks to Linkin Park for me! ^^
> Remember that ‘int’ may not be the same size on all architectures, so we
> should either use a specific (larger) size, or note that the size of the
> rect structure and the sizes of areas they can represent may vary based
> on the size of an int on that platform.
Changing this breaks binary compatibility with existing apps (including
Loki’s games), so we really shouldn’t change this in 1.2.x.
> Remember that ‘int’ may not be the same size on all architectures, so we
> should either use a specific (larger) size, or note that the size of the
> rect structure and the sizes of areas they can represent may vary based
> on the size of an int on that platform.
I spent several hours today going through the SDL .h files. The usage of
int versus specific types is not consistent. It looks to me like it
would be a good idea to go through all the .h files and make them
consistent. Personally, I do not like to ever use int.
There are also a few cases, such as the value returned by a joystick
axis, where the value is currently returned as an integer, when it makes
more sense (at least to me) to return a floating point value. SDL is
showing the signs of having grown, a lot, and needs a little clean up
and polishing.
> Changing this breaks binary compatibility with existing apps (including
> Loki’s games), so we really shouldn’t change this in 1.2.x.
I’m pretty sure any change like this would show up in 2.0. It might show
up in 1.3 version, but I doubt it.
-Bob Pendleton