Blitting algos

Hi there,
first I’d like to apologize for my lack of understanding of the
blitting process in SDL. I’ve been using SDL for a long time, but
only as a springboard into OpenGL: SDL handles my window management,
user input and networking (via SDL_net). Now I’ve got my hands on
some data I need to draw on screen. It consists of points of color
(pixels, if you like), but they may not be contiguous; rather, they
are randomly placed over the whole surface. The result might remind
you of falling snow, except that this snow of pixels comes in waves
and at a very high rate. At this rate OpenGL isn’t making it into
the competition; I need faster access, and I know SDL can do much
better in some 2D circumstances.

At every iteration some points appear and others disappear, so I’m
wondering what the best way is to implement a screen-refresh
algorithm. The problem is that some of these algorithms will work
better than others depending on how many pixels change.

The first algorithm, which will probably be the baseline I compare
the other algorithms against, will use a putpixel function, one
pixel at a time, then probably refresh the whole screen for
simplicity.

For my next algorithm, I was thinking of putting all of that on a
surface and making a mask for it before blitting the surface onto
the screen surface. Now, I’ve never done any of this, and I’ll
probably have a hard time dealing with my data. Maybe I can have my
function emit the pixels into the surface in the correct format
already. Right now it is a one-dimensional array, unsigned
char[WIDTH * HEIGHT * 3], with the colors packed as {RGBRGBRGB…}:
one byte per color, 3 bytes per pixel.

Best would be to find a way to get this layout for the pixels of
the screen surface; then I could forget about blitting and simply
write the output of my calculation at the right place in the
screen’s pixel buffer.

I’ll probably need a lot of help, so if you’ve got any experience of
this kind of process, please share what you think.

Thanks,
Simon

You’re probably best off writing directly to a screen surface in
hardware mode with double-buffering, if you can get it, or else
blitting a software surface to the screen surface. Use the
SDL_MapRGB function to convert to the desired pixel value, and keep
a separate pointer to the start of the screen surface’s pixel
buffer, e.g.:

SDL_Surface *screen;
Uint32 *pixel, *initial_pixel;

screen = SDL_SetVideoMode(etc etc
initial_pixel = (Uint32 *)screen->pixels;
pixel = initial_pixel;

Then increment the pixel pointer to get to the next pixel, within a
couple of for loops:

for (y = 0; y < y_max; y++)
    for (x = 0; x < x_max; x++)
    {
        *pixel = SDL_MapRGB(screen->format, r, g, b);
        pixel++;
    }

Then you can reset pixel back to initial_pixel and blit or flip the
surface to the screen.
Probably a few things I’ve forgotten here, but oh well.
The x and y are unneeded unless you’re plotting some function of x
and y; you could use for (a = 0; a < x_max * y_max; a++) instead.
Others will come up with some good examples, I’m sure. The
downloadable documentation is worth looking at, but the online
stuff is, well, old.
Oh, and for individual pixels you can use the putpixel example
function from the SDL documentation.
Cheers,
M

SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

Arr, a few things I got wrong (had to look at the code on an unplugged machine):

SDL_Surface *screen;
screen = SDL_SetVideoMode(etc etc etc

Uint32 *screen_buffer_pointer = (Uint32 *)screen->pixels;
Uint32 *initial_screen_buffer_pointer = screen_buffer_pointer;
SDL_PixelFormat *screen_format = screen->format;

Then proceed with the for loop, substituting the above; you don’t
need the color variable I mentioned earlier.
Cheers,
Matt


Thanks a lot Matt,
written like this it looks so much simpler than the 30+ lines of
code I was trying to correct! :wink:

I’m thinking about another detail. Is it possible to format the
pixels in a surface the same way as OpenGL likes them?

Let’s say I have a surface of 512x512 @ 32 bpp (RGBA). For OpenGL,
I would need this data packed as [RGBARGBA…]; another interesting
packing might be [RGBRGB…] (without alpha).

This way, if I can set my surface to the correct format, I’ll be
able to experiment with blitting via SDL or using GL for the
refresh, and compare both techniques…

It’s just that I never really understood the formatting with SDL…

But thanks a lot, I’ll try that bit of code you gave me and see how it
works.

Thanks,
Simon

Simon <simon.xhz at gmail.com> wrote:

I’m thinking about another detail. Is it possible to format the
pixels in a surface the same way as OpenGL likes them?

Let’s say I have a surface of 512x512 @ 32 bpp (RGBA). For OpenGL,
I would need this data packed as [RGBARGBA…]; another interesting
packing might be [RGBRGB…] (without alpha).

that’s why SDL_CreateRGBSurface takes mask parameters at the end of
its argument list.

for RGBA

SDL_CreateRGBSurface(SDL_SWSURFACE, 512, 512, 32,
                     0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff);

and RGB

SDL_CreateRGBSurface(SDL_SWSURFACE, 512, 512, 32,
                     0xff000000, 0x00ff0000, 0x0000ff00, 0);

IIRC, OpenGL wants the bytes in R,G,B,A memory order, so on a
little-endian machine the masks may need to be twisted the other
way around. But you get the idea.

best regards …
clemens

Thanks a lot for the very visual examples; I’ll be able to
experiment quickly.

IIRC, OpenGL wants the bytes in R,G,B,A memory order, so on a
little-endian machine the masks may need to be twisted the other
way around. But you get the idea.

I’ve figured that out already, and I’m heading exactly that way.
My goal is to create a couple of bridge functions that convert
image data from one specific format to another, making it easy to
transfer an image manipulated by one library to another library for
further manipulation. E.g.: render an object with OpenGL, transfer
the pixels into SDL and post-process the image in plain SDL, then
send it to the libPNG library to save the data as a
16-bit-per-color RGBA PNG! (Can SDL support 16 bits per color,
instead of the usual 8?)

Thanks,
Simon

Simon <simon.xhz at gmail.com> wrote:

Then send it to the libPNG library to save the data as a
16-bit-per-color RGBA PNG! (Can SDL support 16 bits per color,
instead of the usual 8?)

no. SDL 1.2 definitely not, as a pixel is at most 32 bits deep.
Whether a 64-bit pixel format is planned for 1.3, I dunno.

best regards …
clemens

FYI, 32-bit color already exceeds the capabilities of the human eye.

I’ve heard 64-bit color calculations are sometimes used on video
cards to prevent round-off and such, but 32-bit color is more
colors than the eye can see, so I don’t think there is ever going
to be a 64-bit display setting :P


The human eye cutoff is at about 10 bits per component without the
need for lighting corrections, and at 12 bits per component for
lighting correction or colorspace conversion. So the 10,10,10,2
RGBA format is quite adequate. SDL 1.3 has support for the pixel
format above. I’m not sure whether 1.2 supports formats bigger than
8 bits per component, though.

Quality digital cameras use a 36-bit-per-pixel compressed RAW
format, for example.

FYI, 32-bit color already exceeds the capabilities of the human eye.

I believe the idea was “your eye can distinguish X colors…but we don’t
know exactly which X they are.” :slight_smile:

–ryan.

FYI, 32-bit color already exceeds the capabilities of the human eye.

I’ve heard 64-bit color calculations are sometimes used on video
cards to prevent round-off and such, but 32-bit color is more
colors than the eye can see, so I don’t think there is ever going
to be a 64-bit display setting.

The human eye cutoff is at about 10 bits per component without the
need for lighting corrections, and at 12 bits per component for
lighting correction or colorspace conversion. So the 10,10,10,2
RGBA format is quite adequate. SDL 1.3 has support for the pixel
format above. I’m not sure whether 1.2 supports formats bigger than
8 bits per component, though.

Quality digital cameras use a 36-bit-per-pixel compressed RAW
format, for example.

Two different topics. To create an image that looks “perfect” you
need about 10 bits per channel. OTOH, when doing arithmetic on
10-bit fractions (which is what the channels are) you need, as a
rule of thumb, another 10 bits to prevent round-off errors from
creeping in and messing up your image. That is why the accumulation
buffer is 16,16,16,16 when display buffers are 8,8,8,8. Of course,
the more complex the computation, the more extra bits you need to
avoid round-off problems, so even when the display is 8,8,8,8 you
might need 32,32,32,32 for intermediate buffers.

I expect to see 64-bit (16,16,16,16) display buffers simply because
they will give you great pictures while greatly simplifying all the
graphics algorithms. We might even see 96-bit display buffers with
the channels stored as either 32-bit fixed point (as is done now)
or 32-bit floating point. I also expect that 4-to-1 or 16-to-1
buffer-pixel-to-display-pixel ratios will become the norm, so that
we get full anti-aliasing through the output channel of the display
card by default.

What else will you do with a multi tera-op/flop GPU with multi-gigabytes
of display memory? Well, yeah, real time ray tracing will get added in
too.

Bob Pendleton

What else will you do with a multi tera-op/flop GPU with multi-gigabytes
of display memory? Well, yeah, real time ray tracing will get added in
too.

Hey you stole my idea! lol

For this (extremely large) project, I intend to overuse OpenGL (and
SDL of course), but later (actually, I might have time for this
today) I will incorporate some sort of binding to POV-Ray (see
povray.org, a famous free raytracer). The binding is to make sure
not to break the license in any way, so you will need POV-Ray
pre-installed and configured, and you’ll tell my software where it
is.

If I remember correctly, POV-Ray supports rendering to 64-bit PNG
(16 bits per channel); it is a custom extension of its PNG output.

This integration will make it possible, for example, to raytrace
changing landscapes as heightfields using the tools in POV-Ray (you
could make a blob, an isosurface, etc.), or to raytrace textures
instead of loading them from disk.

Finally, here’s where realtime raytracing comes in: I expect to be
able to use POV-Ray for some basic realtime raytraced effects that
could overlay an OpenGL scene (think of a lens flare inside a
church, where the flare is deformed by the stained-glass windows).
We’ll need more hardware to achieve true realtime raytracing, but
we can start walking toward it now! :wink:

Thanks for all your help. Unfortunately, I was having lots of
problems with the generation of the image I need to blit (to the
point where I believe my blit is good but the generation is not, so
I was possibly blitting a black surface or something…).

I’m in the process of starting a small tutorial website, a
potpourri of different flavors of programming, mostly web-related
(PHP/ASP), but I will also include interesting code in C/C++. I’ll
keep you posted if I finish any of these “little things”!

Thanks again,
Simon

The human eye cutoff is at about 10 bits per component without the
need for lighting corrections, and at 12 bits per component for
lighting correction or colorspace conversion. So the 10,10,10,2
RGBA format is quite adequate. SDL 1.3 has support for the pixel
format above. I’m not sure whether 1.2 supports formats bigger than
8 bits per component, though.

The reason I need more than 8 bits per color is for rare instances;
the one I see clearly right now is heightmaps, where one component
(grayscale) needs more than 256 degrees of elevation. In this case
16 bits is very good and gives a smoother landscape.

But I’ve started using everything an image can provide, so a given
mapfile (a PNG, for example) can contain the height map in the red
channel, vegetation information in the green (types and quantities
in this pixel area), water information in the blue, atmospheric
effects in the alpha, etc. In this case it might be interesting to
have a different number of bits per component, as some require more
precision than others.

Of course, IMO, when going higher than 8-10 bits per component, the
image is no longer for the human eye; as many of you pointed out,
it is rather for the computer and its calculations, giving more
precision to their results.

Thanks,
Simon