Lea Anthony wrote:
Just started using SDL and have a number of simple questions to ask.
- Got the demos to compile, etc. When running fire, I got a blank screen.
Further investigation led me to believe that all video modes I want SDL
to use have to be specified in XF86Config. Right?
I don’t know what the problem with this is… But then again I haven’t
really used SDL/DGA. Windowed performance seems to be okay as far as I
can tell.
- Some demos would only work if I was in 16BPP instead of 8BPP. If I
wanted to write something that used whatever BPP the XServer was using,
what would I need to do?
Well, if you want to support any BPP the XServer can possibly use, then
DON’T USE COLOR! You could have an X server running in bilevel mode!
That’s the only fool-proof way of doing it. But if you mean sane
video modes (i.e. BPP >= 8), then your best bet is to not use any of
the special properties these video modes have. Use the surface purely
as a frame buffer. Don’t use palette tricks, such as palette cycling
animations or fades, not unless you want to write nontrivial extra code
to simulate such effects (see my answer to your next question for an
idea of how nontrivial the extra code can be). Don’t use alpha channels
if you want your program to work on an 8-bit palette, and so on.
Simulating general alpha/transparency effects is technically impossible
on an 8-bit palette…
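To make the "use the surface purely as a frame buffer" idea concrete, here is a minimal sketch of a depth-independent pixel write. With SDL you would take the buffer pointer, pitch, and bytes-per-pixel from the SDL_Surface (surface->pixels, surface->pitch, surface->format->BytesPerPixel); the routine below takes them as plain parameters so the sketch stands alone:

```c
#include <stdint.h>

/* Write one already-mapped pixel value into a raw frame buffer,
 * whatever its depth.  `value` must already be packed for the
 * destination format (e.g. via the format's masks and shifts). */
static void put_pixel(uint8_t *pixels, int pitch, int bytes_pp,
                      int x, int y, uint32_t value)
{
    uint8_t *p = pixels + y * pitch + x * bytes_pp;
    switch (bytes_pp) {
    case 1:
        *p = (uint8_t)value;
        break;
    case 2:
        *(uint16_t *)p = (uint16_t)value;
        break;
    case 3:  /* 24 bpp; byte order here assumes little-endian layout */
        p[0] =  value        & 0xff;
        p[1] = (value >> 8)  & 0xff;
        p[2] = (value >> 16) & 0xff;
        break;
    case 4:
        *(uint32_t *)p = value;
        break;
    }
}
```

If your SDL version supports it, you can also pass 0 as the bpp argument to SDL_SetVideoMode to take whatever depth the display is already using, rather than forcing a particular one.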
Converting graphic art to and from video modes can get complicated. For
conversions between true-color modes (i.e. >= 15 bpp, without loss of
generality), it’s relatively trivial to do. All you need to do is
repack the pixels from the source to the destination pixel formats.
8-bit palettized art is also easy to convert: all you need to do is
undo the indexing of colors. That is, again, if you don’t perform any
palette tricks. But going from true color to 8-bit, ahh, that’s much
more difficult, and is not practical to do in real-time. You’ll need a
palette quantization algorithm, and if the number of distinct colors in
your image is much greater than 256, results may look ugly. I have a
few old issues of DDJ and Game Developer magazine that describe some of
these algorithms, but none of them seems suitable for real-time use.
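For the easy true-color-to-true-color case, the repacking really is just shifts. A minimal sketch, hard-coding a 5-6-5 destination layout (one common 16bpp arrangement; a real converter would read the masks, shifts, and losses from the destination pixel format instead):

```c
#include <stdint.h>

/* Repack one 24-bit RGB pixel (8 bits per channel) into 16-bit 5-6-5.
 * Each channel is shifted down to the destination's precision, then
 * shifted up into position and OR'd together.  That is all a
 * true-color -> true-color conversion amounts to, done per pixel. */
static uint16_t rgb888_to_565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```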
- The examples in the Video API demonstrate how to fade in/out using a
palettized mode. How would you achieve a similar effect in 16/24/32-bit
modes?
This is not very easy to do, but it is possible. You should find a way
of converting RGB tuples to HSV first, convert every pixel onscreen to
HSV (hue-saturation-value), lower or raise the V value until each pixel
either drops to zero or becomes the final value of the image. Best to
do this with double buffering, but it’s not easy to make a good-looking
fast fade. If you need code to convert RGB to HSV I have some tucked
away somewhere (it is in the new version of the small drawing library
I’m writing), based on an algorithm given by Foley and van Dam in their
book “Computer Graphics: Principles and Practice” (ISBN 0-201-12110-7).
No serious graphics programmer should be without this book! Mail me if
you want a copy of my LGPL code.
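In the meantime, here is a self-contained sketch of that RGB-to-HSV conversion, following the Foley & van Dam formulation (the textbook algorithm, not my library code):

```c
#include <stdint.h>

/* Convert an RGB triple (0..255 per channel) to HSV, per Foley & van
 * Dam.  h is in degrees [0, 360); s and v are in [0, 1].  Hue is
 * undefined for greys (s == 0), signalled here by h = -1. */
static void rgb_to_hsv(uint8_t r8, uint8_t g8, uint8_t b8,
                       double *h, double *s, double *v)
{
    double r = r8 / 255.0, g = g8 / 255.0, b = b8 / 255.0;
    double max = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double min = r < g ? (r < b ? r : b) : (g < b ? g : b);
    double delta = max - min;

    *v = max;                               /* value = brightest channel */
    *s = (max > 0.0) ? delta / max : 0.0;   /* saturation */
    if (delta == 0.0) { *h = -1.0; return; }/* achromatic: hue undefined */

    if (max == r)      *h = (g - b) / delta;        /* yellow..magenta */
    else if (max == g) *h = 2.0 + (b - r) / delta;  /* cyan..yellow */
    else               *h = 4.0 + (r - g) / delta;  /* magenta..cyan */
    *h *= 60.0;                             /* sextant -> degrees */
    if (*h < 0.0) *h += 360.0;
}
```

One shortcut worth noting: for a plain fade to black, lowering V by some factor works out to scaling each of R, G, and B by that same factor, so for that particular effect you can skip the HSV round trip entirely.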
- Is there any way of determining the video card’s 16-bit RGB mask?
You’d need this to do palette operations in 16BPP mode.
The format member of the SDL_Surface. I explained how these work in
another posting I just sent in today (in my answer to ‘16bpp formats
question’ by Frank J. Ramsay).
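To make that concrete: the mask, shift, and loss fields in the surface’s format member are exactly what you need to pack a pixel yourself, which is what SDL_MapRGB() does for you. A sketch with a stand-in struct (the real fields live in SDL_PixelFormat; the 5-6-5 numbers below are just one plausible set a 16bpp server might report):

```c
#include <stdint.h>

/* Stand-in for the relevant fields of SDL_PixelFormat:
 * Rmask/Gmask/Bmask, Rshift/Gshift/Bshift, Rloss/Gloss/Bloss. */
typedef struct {
    uint32_t rmask, gmask, bmask;
    uint8_t  rshift, gshift, bshift;
    uint8_t  rloss,  gloss,  bloss;
} pixel_format;

/* Pack an 8-bit-per-channel RGB triple into the format's layout:
 * drop each channel to the format's precision (loss), then shift it
 * into position.  This mirrors what SDL_MapRGB() does. */
static uint32_t map_rgb(const pixel_format *f,
                        uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)(r >> f->rloss) << f->rshift)
         | ((uint32_t)(g >> f->gloss) << f->gshift)
         | ((uint32_t)(b >> f->bloss) << f->bshift);
}
```

With a real surface you would read those nine values out of surface->format rather than hard-coding them, so the same code works whatever 16bpp layout the card uses.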
I wonder why a lot of people new to the library seem to worry so much
about the details of low-level graphics programming under SDL (like
pixel formats). The main purpose of the library is to isolate the
programmer from such details…
--
| Rafael R. Sevilla @Rafael_R_Sevilla                        |
| Instrumentation, Robotics, and Control Laboratory          |
| College of Engineering, University of the Philippines, Diliman |