Simple Questions

Hi All!

Just started using SDL and have a number of simple questions to ask.

  1. Got the demos to compile, etc. When running fire, I got a blank screen.
    Further investigation led me to believe that all the video modes I want
    SDL to use have to be specified in XF86Config. Right?

  2. Some demos would only work if I was in 16BPP instead of 8BPP. If I
    wanted to write something that used whatever BPP the XServer was using,
    what would I need to do?

  3. The examples in the Video API demonstrate how to fade in/out using a
    palettized mode. How would you achieve a similar effect in 16/24/32-bit
    modes?

  4. Is there any way of determining the video card’s 16-bit RGB mask?
    Presumably you’d need this to do palette operations in 16BPP mode.

Any help on these topics would be VERY much appreciated. Sorry for
bothering you guys with such simple trivia.

Regards,

-Lea.

Lea Anthony wrote:

Hi All!

Just started using SDL and have a number of simple questions to ask.

  1. Got the demos to compile, etc. When running fire, I got a blank screen.
    Further investigation led me to believe that all the video modes I want
    SDL to use have to be specified in XF86Config. Right?

I don’t know what the problem is there… But then again, I haven’t
really used SDL/DGA. Windowed performance seems okay as far as I’m
concerned.

  2. Some demos would only work if I was in 16BPP instead of 8BPP. If I
    wanted to write something that used whatever BPP the XServer was using,
    what would I need to do?

Well, if you want to support any BPP the XServer can possibly use, then
DON’T USE COLOR! You could have an X server running in bilevel mode!
That’s the only foolproof way of doing it. But if you mean sane video
modes (i.e. BPP >= 8), then your best bet is not to use any of the
special properties these video modes have. Use the surface purely as a
frame buffer. Don’t use palette tricks, such as palette-cycling
animations or fades, unless you want to write nontrivial extra code to
simulate such effects (see my answer to your next question for an idea
of how nontrivial the extra code can be). Don’t use alpha channels if
you want your program to work on an 8-bit palette, and so on.
Simulating general alpha/transparency effects is technically impossible
on an 8-bit palette…
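
For what it’s worth, here is a minimal sketch of the “take whatever the
server gives you” approach, assuming SDL’s convention that passing 0 as
the bpp to SDL_SetVideoMode means “use the current display depth”:

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* bpp == 0 asks for the current display depth, whatever the
       X server happens to be running at */
    screen = SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);
    if (screen == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    printf("Got a %d bpp surface\n", screen->format->BitsPerPixel);

    SDL_Quit();
    return 0;
}
```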

Converting graphic art to and from video modes can get complicated.
Conversions between true-color modes (i.e. >= 15 bpp, without loss of
generality) are relatively trivial: all you need to do is repack the
pixels from the source pixel format to the destination’s. 8-bit
palettized art is also easy to convert: you just undo the indexing of
the colors. That is, again, provided you don’t perform any palette
tricks. But going from true color to 8-bit, ahh, that’s much more
difficult, and is not practical to do in real time. You’ll need a
palette quantization algorithm, and if the number of distinct colors in
your image is much greater than 256, results may look ugly. I have a
few old issues of DDJ and Game Developer magazine that describe some of
these algorithms, but none of these seems to be suitable for real-time
conversion.
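
For conversions between true-color formats, the repacking amounts to
something like this untested sketch, using SDL’s own helpers rather
than shifting bits by hand:

```c
#include "SDL.h"

/* Repack one pixel from the source pixel format to the destination
   format: unpack to 8-bit R, G, B using the source's masks and shifts,
   then pack the components again using the destination's. */
Uint32 repack_pixel(Uint32 pixel, SDL_PixelFormat *src, SDL_PixelFormat *dst)
{
    Uint8 r, g, b;

    SDL_GetRGB(pixel, src, &r, &g, &b);
    return SDL_MapRGB(dst, r, g, b);
}
```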

  3. The examples in the Video API demonstrate how to fade in/out using a
    palettized mode. How would you achieve a similar effect in 16/24/32-bit
    modes?

This is not very easy to do, but it is possible. You first need a way
of converting RGB tuples to HSV (hue-saturation-value); then convert
every pixel onscreen to HSV and lower or raise the V value until each
pixel either drops to zero or reaches its final value in the image.
It’s best to do this with double buffering, but it’s not easy to make a
good-looking fast fade. If you need code to convert RGB to HSV, I have
some tucked away somewhere (it’s in the new version of the small
drawing library I’m writing), based on an algorithm given by Foley and
van Dam in their book “Computer Graphics: Principles and Practice”
(ISBN 0-201-12110-7). No serious graphics programmer should be without
this book! Mail me if you want a copy of my LGPL code.
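
If it helps, here is an untested sketch of the usual hue-sector
RGB<->HSV conversion (the same construction Foley and van Dam give); a
fade is then just scaling v toward zero and converting back:

```c
#include <math.h>

/* r, g, b and v are in [0,1]; h is in [0,360); s is in [0,1] */
static void rgb_to_hsv(double r, double g, double b,
                       double *h, double *s, double *v)
{
    double max = fmax(r, fmax(g, b));
    double min = fmin(r, fmin(g, b));
    double delta = max - min;

    *v = max;
    *s = (max > 0.0) ? delta / max : 0.0;
    if (delta == 0.0)
        *h = 0.0;                 /* achromatic; hue is arbitrary */
    else if (max == r)
        *h = 60.0 * fmod((g - b) / delta, 6.0);
    else if (max == g)
        *h = 60.0 * ((b - r) / delta + 2.0);
    else
        *h = 60.0 * ((r - g) / delta + 4.0);
    if (*h < 0.0)
        *h += 360.0;
}

static void hsv_to_rgb(double h, double s, double v,
                       double *r, double *g, double *b)
{
    double c = v * s;             /* chroma */
    double x = c * (1.0 - fabs(fmod(h / 60.0, 2.0) - 1.0));
    double m = v - c;

    if      (h <  60.0) { *r = c;   *g = x;   *b = 0.0; }
    else if (h < 120.0) { *r = x;   *g = c;   *b = 0.0; }
    else if (h < 180.0) { *r = 0.0; *g = c;   *b = x;   }
    else if (h < 240.0) { *r = 0.0; *g = x;   *b = c;   }
    else if (h < 300.0) { *r = x;   *g = 0.0; *b = c;   }
    else                { *r = c;   *g = 0.0; *b = x;   }
    *r += m; *g += m; *b += m;
}
```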

  4. Is there any way of determining the video card’s 16-bit RGB mask?
    Presumably you’d need this to do palette operations in 16BPP mode.

Look at the format member of the SDL_Surface. I explained how these
masks work in another posting I sent in today (in my answer to ‘16bpp
formats question’ by Frank J. Ramsay).
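
Concretely, something like this sketch prints the masks of whatever
16-bit mode you actually get:

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);
    if (screen != NULL) {
        /* the masks of the mode you actually got live in the
           surface's pixel format */
        printf("Rmask=0x%08x Gmask=0x%08x Bmask=0x%08x\n",
               (unsigned)screen->format->Rmask,
               (unsigned)screen->format->Gmask,
               (unsigned)screen->format->Bmask);
    }

    SDL_Quit();
    return 0;
}
```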

I wonder why a lot of people new to the library seem to worry so much
about the details of low-level graphics programming under SDL (like
pixel formats). The main purpose of the library is to isolate the
programmer from such details…

Rafael R. Sevilla
Instrumentation, Robotics, and Control Laboratory
College of Engineering, University of the Philippines, Diliman

“Rafael R. Sevilla” wrote:

Well, if you want to support any BPP the XServer can possibly use, then
DON’T USE COLOR! You could have an X server running in bilevel mode!
That’s the only foolproof way of doing it. But if you mean sane video
modes (i.e. BPP >= 8), then your best bet is not to use any of the
special properties these video modes have. Use the surface purely as a
frame buffer. Don’t use palette tricks, such as palette-cycling
animations or fades, unless you want to write nontrivial extra code to
simulate such effects (see my answer to your next question for an idea
of how nontrivial the extra code can be). Don’t use alpha channels if
you want your program to work on an 8-bit palette, and so on.
Simulating general alpha/transparency effects is technically impossible
on an 8-bit palette…

Don’t use colour? Smart idea! :-) Seriously though, I get what you’re
saying, and I already know this stuff. Thanks anyway.

Converting graphic art to and from video modes can get complicated.
Conversions between true-color modes (i.e. >= 15 bpp, without loss of
generality) are relatively trivial: all you need to do is repack the
pixels from the source pixel format to the destination’s. 8-bit
palettized art is also easy to convert: you just undo the indexing of
the colors. That is, again, provided you don’t perform any palette
tricks. But going from true color to 8-bit, ahh, that’s much more
difficult, and is not practical to do in real time. You’ll need a
palette quantization algorithm, and if the number of distinct colors in
your image is much greater than 256, results may look ugly. I have a
few old issues of DDJ and Game Developer magazine that describe some of
these algorithms, but none of these seems to be suitable for real-time
conversion.

I thought SDL did this for you? Am I wrong? What does SDL_DisplayFormat do?

This is not very easy to do, but it is possible. You first need a way
of converting RGB tuples to HSV (hue-saturation-value); then convert
every pixel onscreen to HSV and lower or raise the V value until each
pixel either drops to zero or reaches its final value in the image.
It’s best to do this with double buffering, but it’s not easy to make a
good-looking fast fade. If you need code to convert RGB to HSV, I have
some tucked away somewhere (it’s in the new version of the small
drawing library I’m writing), based on an algorithm given by Foley and
van Dam in their book “Computer Graphics: Principles and Practice”
(ISBN 0-201-12110-7). No serious graphics programmer should be without
this book! Mail me if you want a copy of my LGPL code.

I have this book. I could be wrong, but isn’t fading a bitmap in as
simple as multiplying each pixel’s RGB values by an alpha level between
0 and 1? That’s got to work! OK, it’d be slow, but…

I wonder why a lot of people new to the library seem to worry so much
about the details of low-level graphics programming under SDL (like
pixel formats). The main purpose of the library is to isolate the
programmer from such details…

OK, but there will be visual effects that you cannot hope to achieve
without low-level pixel manipulation.

Thanks for your time.

-Lea.

Lea Anthony wrote:

“Rafael R. Sevilla” wrote:

your image is much greater than 256, results may look ugly. I have a
few old issues of DDJ and Game Developer magazine that describe some of
these algorithms, but none of these seems to be suitable for real-time
conversion.

I thought SDL did this for you? Am I wrong? What does SDL_DisplayFormat do?

SDL will not perform color quantization for you, if that’s what you’re
asking. From a cursory look at the code in ‘SDL_surface.c’ and
‘SDL_video.c’, I don’t think SDL_DisplayFormat can convert a true-color
surface into indexed color while keeping the same information; there
appears to be no built-in palette quantization code. But I think it
should be able to do the other types of conversions I’ve described.
Using it to convert a true-color surface to indexed color will probably
trash the original image.
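
The usual pattern looks something like this (a sketch; the BMP loading
is just for illustration):

```c
#include "SDL.h"

/* Convert a loaded image to the screen's pixel format once, up front,
   so that later blits don't have to convert on every frame. */
SDL_Surface *load_converted(const char *path)
{
    SDL_Surface *bmp = SDL_LoadBMP(path);
    SDL_Surface *converted;

    if (bmp == NULL)
        return NULL;
    converted = SDL_DisplayFormat(bmp);  /* needs a video mode set */
    if (converted == NULL)
        return bmp;                      /* fall back to the original */
    SDL_FreeSurface(bmp);
    return converted;
}
```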

This is not very easy to do, but it is possible. You first need a way
of converting RGB tuples to HSV (hue-saturation-value); then convert
every pixel onscreen to HSV and lower or raise the V value until each
pixel either drops to zero or reaches its final value in the image.
It’s best to do this with double buffering, but it’s not easy to make a
good-looking fast fade. If you need code to convert RGB to HSV, I have
some tucked away somewhere (it’s in the new version of the small
drawing library I’m writing), based on an algorithm given by Foley and
van Dam in their book “Computer Graphics: Principles and Practice”
(ISBN 0-201-12110-7). No serious graphics programmer should be without
this book! Mail me if you want a copy of my LGPL code.

I have this book. I could be wrong, but isn’t fading a bitmap in as
simple as multiplying each pixel’s RGB values by an alpha level between
0 and 1? That’s got to work! OK, it’d be slow, but…

Good for you, you have that book. Multiplying each pixel’s RGB by an
alpha level could work, but it would cause color artifacts as well.
Converting to HSV first and then back to RGB keeps the hue and
saturation constant. If your image is very bright and your fade is on
the slow side, the color changes from a multiplicative alpha will be
noticeable. In short, it’s not a fade like you would see in a movie,
for instance. It may be good for doing fast fades, though; the
artifacts would not be as noticeable. As Foley and van Dam say: “If it
looks good, do it!”
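
For the record, the multiplicative fade being discussed looks roughly
like this untested sketch (assumes a 16-bit surface; the factor is
fixed-point in 0..256):

```c
#include "SDL.h"

void fade_surface(SDL_Surface *s, int factor)   /* factor: 0..256 */
{
    int x, y;
    Uint8 r, g, b;

    if (SDL_MUSTLOCK(s))
        SDL_LockSurface(s);
    for (y = 0; y < s->h; y++) {
        /* pitch is in bytes, hence the Uint8 cast before indexing */
        Uint16 *row = (Uint16 *)((Uint8 *)s->pixels + y * s->pitch);
        for (x = 0; x < s->w; x++) {
            SDL_GetRGB(row[x], s->format, &r, &g, &b);
            row[x] = (Uint16)SDL_MapRGB(s->format,
                                        (Uint8)((r * factor) >> 8),
                                        (Uint8)((g * factor) >> 8),
                                        (Uint8)((b * factor) >> 8));
        }
    }
    if (SDL_MUSTLOCK(s))
        SDL_UnlockSurface(s);
}
```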

Rafael R. Sevilla
Instrumentation, Robotics, and Control Laboratory
College of Engineering, University of the Philippines, Diliman