SDL 1.3 GL renderer data corruption bug

I’ve been tracking this down for days, and I finally think I’ve found the root
of the problem.

In my editor, I’ve got two SDL 1.3 windows on the same form. One displays the
map; the other displays the tile palette. When I switch from one map to
another, it loads the new map, then loads the palette with the tileset for
that map. At certain times, for no easily identifiable reason (apparently
it’s something going on inside the OpenGL driver), when I switch maps, the
internal tile data in the map window gets corrupted and I end up drawing
blocks of pure white to the screen for some tiles, instead of drawing the map.

At first I thought it was because the map window was being resized in some
cases, but I was able to rule that out. Then I thought it might be related to
the multiple windows, so I commented out the code that reloaded the palette, and
the white tiles bug stopped. After a bit more digging and selective commenting,
I found out that it was coming from the code that deletes the textures attached
to the palette window. Apparently if I call SDL_DestroyTexture on a texture
that belongs to the palette window without calling SDL_SelectRenderer on the
palette window first, it will corrupt data in the map window.

It would seem that it’s only safe to delete textures belonging to the currently
active window. But SDL_DestroyTexture has no return value, so there’s no way
for it to report an error. Can someone look at this?
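For reference, the workaround I’ve settled on looks roughly like this. It’s a sketch, not a definitive fix: the function name is mine, and the exact types (SDL_Window*/SDL_Texture* versus the older ID typedefs) have been shifting in the 1.3 branch, so adjust to your snapshot.

```c
#include "SDL.h"

/* Sketch of the workaround: make the palette window's renderer current
 * before destroying textures that belong to it, then restore the map
 * window's renderer. Without the first SDL_SelectRenderer call, the
 * destroy appears to corrupt texture data in the other window. */
static void destroy_palette_textures(SDL_Window *palette, SDL_Window *map,
                                     SDL_Texture **textures, int count)
{
    int i;
    SDL_SelectRenderer(palette);         /* bind the owning window's renderer */
    for (i = 0; i < count; ++i) {
        SDL_DestroyTexture(textures[i]); /* now freed in the right context */
        textures[i] = NULL;
    }
    SDL_SelectRenderer(map);             /* switch back for map drawing */
}
```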

Hello good people!

Could anyone please point me to a previous thread or wiki article, or tutorial etc, that talks about direct pixel manipulation with SDL?

I mean, how easy is it, and is SDL even the best approach for, say, demo-scene style effects? Instead of “simply” copying and pasting bitmaps on top of each other, like we all do =), how would you proceed with a lower level of control over your images and animations, and would that be efficient?

For example, another problem is the bitmap itself. I don’t think we would be able to use any other image format for this, since they’re mostly compressed, making it harder to access individual pixels… hence we would probably have to intercept SDL after it extracts the image to memory anyway. That might not be so hard but, on the other hand, might it break portability?

Sorry if this question is somehow inappropriate; I find it interesting though =) and it seems seldom discussed, I couldn’t find much on the subject… =( So, any pointers in the right direction?

Thanks =)
MM

Hi Marcos, you may find the source code of SDL_gfx a good start for
learning some “pixel pushing” approaches, as the lib implements a bunch
of graphics primitives and a rotozoomer by directly manipulating the
memory buffer of a surface (svn-browse directly here:
http://sdlgfx.svn.sourceforge.net/viewvc/sdlgfx/). Regarding the
efficiency of this approach, nothing beats dedicated graphics hardware
for obvious reasons, but the C-only pixel-pushing approach has the
benefit of being very portable across architectures - even low-end ones.
–Andreas


Hello good people!

Could anyone please point me to a previous thread or wiki article, or
tutorial etc, that talks about direct pixel manipulation with SDL?

All you should need to get started is the documentation for SDL_Surface and
SDL_PixelFormat. The rest is just good old low level computer graphics! ;)

The easiest way is to just ask for 24 or 32 bpp software surfaces. SDL will
usually be able to push that directly to the screen, and if not, it’ll convert
on the fly, possibly quick enough that your time is better spent optimizing
your 24/32 bpp code than supporting multiple pixel formats.
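To make that concrete, the pixel access itself is just address arithmetic. Here is a standalone sketch on a plain 32 bpp buffer; Buffer32, pack_argb and put_pixel are made-up names, not SDL API, but the pitch/pixels fields mirror the corresponding SDL_Surface fields.

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-in for a 32 bpp software surface. */
typedef struct {
    int w, h;
    int pitch;        /* bytes per scanline (may exceed w * 4) */
    uint8_t *pixels;  /* raw pixel memory */
} Buffer32;

/* Pack 8-bit channels into one 0xAARRGGBB pixel value. */
static uint32_t pack_argb(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) |
           ((uint32_t)g << 8) | (uint32_t)b;
}

/* Write one pixel: index the row by bytes (pitch), then by pixels. */
static void put_pixel(Buffer32 *buf, int x, int y, uint32_t color)
{
    uint32_t *row = (uint32_t *)(buf->pixels + (size_t)y * buf->pitch);
    row[x] = color;
}
```

The same two lines of arithmetic work on a locked SDL surface, with the surface’s own pitch and pixel format taken into account.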

I mean, how easy or is SDL even the best approach to, say, the Demo
Scene, for example… instead of “simply” copying and pasting a bitmap on
top of each other, like we all do =) how would you proceed on a lower
level of control over your images and animations, and would that be
efficient?

Short version: PC hardware (since the 486 or so) is not designed for software
rendering.

Slightly longer version:

Since the PC architecture and video cards are designed for hardware
accelerated rendering, CPU access to VRAM is about as far from optimal as it
gets. The CPU will slow down to a crawl when doing it, no matter how fast it
is, or what video card you have.

That said, with the CPU power available these days, software rendering is
actually quite interesting, though the only realistic chance of getting the
output to the screen at a usable frame rate is through OpenGL or Direct3D.

And, unless you need scalability to ancient hardware as well, maybe it’s a
better idea to just implement your software rendering as pixel shaders under
OpenGL or Direct3D? That way, the work is done in the right place, by
cost-effective hardware specifically designed for that kind of work. (You’d
need an insanely fast CPU to come anywhere near the performance of an old
budget 3D accelerator.)

That said, ZeeSpace is all software, and it’ll probably stay that way for now.
Then again, I’m not planning on using it for any massive real time rendering;
only incremental background rendering and the occasional procedural texture.

For example, another problem is the bitmap itself, I dont think we would
be able to use any other image format to do this, since they’re mostly
compressed, making it harder to access individual pixels… hence we would
probably have to intercept SDL after it extracts it to memory anyway, thus
it might not be so hard but, on the other hand, it perhaps might break
portability?

There is no need to intercept! As long as you’re not using RLE
encoding/acceleration, the SDL surface is in precisely the format indicated by
its pixel format field. That’s all there is to it, pretty much. :)

Sorry if this question is somehow inappropriate, I find it interesting
though =) and seems seldom discussed, couldn’t find much on the subject…
=( so, any pointers in the right direction?

It’s certainly interesting, and it’s absolutely something you should try if
you’re serious about learning about graphics programming.

However, keep in mind that many of the methods and algorithms used for
"traditional" software rendering, and the related optimization tricks, aren’t
really up to date with how modern CPUs and computers work. And, hardware
accelerated rendering via high level APIs is a different beast entirely -
except for the pixel shader bit.


//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://olofson.net http://olofsonarcade.com http://kobodeluxe.com |
’---------------------------------------------------------------------’

Hi guys, thanks.

I guess the general rule is then “IF low level pixel manipulation by software is insufficient THEN go for OpenGL shaders EVEN if it’s 2D art”… right? =)

Hi Marcos, you may find the source code of SDL_gfx a good start

Cool, I’ll check it out! By the way, I take it a rotozoomer is a piece of code that is capable of rotating and enlarging (& shrinking, of course, but that’s easier for maintaining quality =)) images?

benefit of being very portable across architectures - even low-end ones. –Andreas

But… OpenGL is pretty portable, right? Is my guideline flawed?! =) lol

The rest is just good old low level computer graphics! ;)

Love good ol’ low level computer graphics hehehe

possibly quick enough that your time is better spent optimizing your 24/32 bpp code than supporting multiple pixel formats.

Sure, but I meant that, even if I supported more formats, say for saving space, it wouldn’t save space in the actual (V)RAM, since the image would have to be extracted anyway before you can mess with the pixels… thus it’s of not much use anyway, as you say =) Only perhaps for saving loading time? (In case decompression is quicker than the HD, which it is in most cases, right?)

maybe it’s a better idea to just implement your software rendering as pixel shaders under OpenGL or Direct3D?

Hmm… then the rule is, IF it becomes an issue, go for OpenGL (yes, I’m prejudiced against DirectX lol). But, as you mentioned shaders… how precise are they? I mean, can I go to the level of the individual pixel, or will it invariably mess with my logic, such as in interpolation, etc.? Less importantly (just curious =), in case you know), can 2D-only art bypass the 3D-only steps of the pipeline in modern graphics cards?

indicated by its pixel format field. That’s all there is to it, pretty much. :)

Thus I could create a line, efficiently, the ol’ way, by editing pixels inside a surface and only THEN pasting that surface to the screen? (I don’t remember if we have direct access to the screen surface in SDL.) These efficiency questions are probably what I’m going to learn by following Andreas’s advice to check out the library code, right? =)

It’s certainly interesting, and it’s absolutely something you should try if you’re serious about learning about graphics programming.

Sure am =) thanks!

However, keep in mind that many of the methods and algorithms used for “traditional” software rendering, and the related optimization tricks, aren’t really up to date with how modern CPUs and computers work. And, hardware accelerated rendering via high level APIs is a different beast entirely - except for the pixel shader bit.

I thought we were comparing shaders with high level APIs (:S)

Once again, thanks guys.

Cheers,
MM

Hi,

Low level 2D graphics access is long gone. On modern graphics cards 2D means
3D with z = 0, so you always have to steer the complete 3D pipeline.

Cheers,
Paulo


Hello Paulo,

Low level 2D graphics access is long gone. On modern graphics cards 2D means
3D with z = 0, so you always have to steer the complete 3D pipeline.

I was studying a little tutorial here; at least exact pixelation is still possible even in OpenGL, as long as you draw one pixel unambiguously far away from an edge (e.g. at coordinate 1.5, the pixel center). Furthermore, it’s possible to disable the Z-buffer too… (glDisable(GL_DEPTH_TEST))

… and it seems I can disable automatic anti-aliasing, so OpenGL is even more OUT of my way =) At least some of it is still feasible =)


Cheers,
MM

Well, you can implement an entire desktop (windows with compositing and all
that) over OpenGL. An application doing pixel level rendering is only a small
subset of that.

In most cases, it’s as simple as doing the software rendering into textures,
and then rendering those at 1:1 scale with filtering disabled. Simple and
effective, and it works great for overlays over “real” OpenGL rendering.
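For reference, that pattern sketched in plain fixed-function OpenGL. The function names are mine, error handling and context/projection setup are omitted, and it assumes an orthographic projection that maps one unit to one pixel.

```c
#include <GL/gl.h>

/* One-time setup: a texture the driver will never filter, so the
 * software framebuffer shows up pixel for pixel. */
static GLuint make_framebuffer_texture(int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    return tex;
}

/* Per frame: upload the software buffer and draw one quad at 1:1. */
static void present_software_buffer(GLuint tex, int w, int h,
                                    const void *softbuf)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, softbuf);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2i(0, 0);
    glTexCoord2f(1.0f, 0.0f); glVertex2i(w, 0);
    glTexCoord2f(1.0f, 1.0f); glVertex2i(w, h);
    glTexCoord2f(0.0f, 1.0f); glVertex2i(0, h);
    glEnd();
}
```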


Hi guys, thanks.
I guess the general rule is then “IF low level pixel manipulation by
software is insufficient THEN go for openGL shaders EVEN if it’s 2D art”…
right? =)

Yeah, that’s basically it.

Actually, with anything but ancient hardware or the occasional oddball shared
memory solution, you’re probably better off using OpenGL (or Direct3D, if you
really need maximum compatibility on that other OS) for presenting your
software “frame buffer”, even if you’re doing 100% software rendering. Even
uploading and blitting the whole screen every frame tends to be faster than
any tricks you can play with a 2D API these days. Well, unless you only need
to update a small fraction of the screen area every frame, of course.

It’s the busmaster DMA transfers from system memory to VRAM you need, and old
drivers for obsolete APIs usually don’t provide those. Some do (DGA and
DirectDraw, I think), but they still seem to be a lot slower than the 3D APIs
for some reason.

[…]

possibly quick enough that your time is better spent optimizing your 24/32
bpp code than supporting multiple pixel formats.

sure, but I meant that, even if I supported more formats, say for saving
space, it wouldn’t save space in the actual (v)ram since it would have to
be extracted anyway before you can mess with the pixels… thus its of no
much use anyway, as you say =) only perhaps for saving loading time? (in
case decompression is quicker than HD, which it is in most cases, right?)

That depends… You can reduce the bandwidth significantly by using a 16 bpp
framebuffer instead of 24/32 bpp, and that makes a big difference if you’re
using the CPU to push pixels into VRAM.
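To illustrate the difference: packing 8-bit channels down to RGB565 halves the per-pixel bandwidth at the cost of precision. A standalone sketch (not SDL’s own converter; the names are made up):

```c
#include <stdint.h>

/* Pack 8-bit R, G, B into a 16 bpp RGB565 pixel: 5 bits red,
 * 6 bits green, 5 bits blue - half the size of a 32 bpp pixel. */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Expand back to 8 bits per channel; replicating the high bits into
 * the low bits maps 0x1F -> 0xFF and 0x00 -> 0x00 exactly. */
static void unpack_rgb565(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (p >> 11) & 0x1F;
    uint8_t g6 = (p >> 5) & 0x3F;
    uint8_t b5 = p & 0x1F;
    *r = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));
}
```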

Obviously, it still makes a difference when using texture uploading with
OpenGL or Direct3D, though if the driver implements proper DMA transfers,
you’ll usually have all the bandwidth you need and then some anyway.

[…]

maybe it’s a better idea to just implement your software rendering as
pixel shaders under OpenGL or Direct3D?

Hmm… then the rule is, IF it becomes an issue, go for openGL (yes, im
prejudiced against directX lol),

Right; as far as I’m concerned, Direct3D is just a few hundred euros’ worth of
extra work for nothing, theoretically. It adds nothing but more code to
maintain.

However, massive forces are still pushing Direct3D, and over at the indiegamer
forums, successful developers are still warning about OpenGL on Windows, at
least for casual games. I’m still not sure what the current situation actually
is like, but considering the rather non-casual nature of the game I’m working
on right now, I suspect my time is better spent on other things. If you’re
into puzzle games and the like, you might come to a different conclusion.

[…]

but, as you mentioned shaders… how
precise are they? I mean, can I go to the level of the individual pixel,
or will it invariably mess with my logic, such as in interpolation, etc…

Pixel shaders operate at the framebuffer pixel level (anything else would be a
terrible waste of bandwidth), so you should have full control.

less importantly (just curious =) in case you know), can 2D only art
bypass the 3D only steps of the pipeline in modern graphic cards?

Yes and no. You can “blit” directly to the screen, but that’s an absolute last
resort. If it’s not properly optimized in the driver (DMA), it’ll be horribly
slow, and perhaps more importantly, it requires “hard sync” of the GPU and
CPU, which is a total waste of good cycles on both sides.

Basically: Don’t do that! :D

indicated by its pixel format field. That’s all there is to it, pretty
much. :)

Thus I could create a line, efficiently, the ol’way by editing pixels
inside a surface and only THEN pasting that surface to the screen?

Theoretically, yes, but in the general case, that doesn’t exactly seem like
the most efficient way of doing it. Indeed, if you’re running massive particle
effects, they might be better off rendered as Wu-pixels into an RGBA OpenGL
texture - or why not an RGB texture with additive blending?

If you’re using the SDL 2D API, though, you’re probably better off doing all
rendering in a shadow surface. SDL 1.2 alpha blending is all software, and as
such, is a very bad idea to use directly in VRAM in the general case. (Reads
from VRAM tend to be many times slower than writes - and writes are pretty
slow already.)
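For reference, the per-channel software blend is essentially this read-modify-write (a generic src-over sketch, not SDL’s actual blitter), and the destination read is exactly what hurts when the destination lives in VRAM:

```c
#include <stdint.h>

/* Classic src-over blend of one 8-bit channel:
 *   result = dst + (src - dst) * alpha / 255
 * Reading `dst` is the expensive part when it sits in VRAM. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
{
    return (uint8_t)(dst + ((src - dst) * alpha) / 255);
}
```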

(I dont
remember if we have direct access to the screen surface in SDL)

You do, but see above, about reading VRAM in particular.

[…]

However, keep in mind that many of the methods and algorithms used for
"traditional" software rendering, and the related optimization tricks,
aren’t really up to date with how modern CPUs and computers work. And,
hardware accelerated rendering via high level APIs is a different beast
entirely - except for the pixel shader bit.>

I thought we were comparing shader with high level APIs (:S)

Uhm, well… GPUs are another beast entirely yet again - massive arrays of
small cores running in parallel. :)

Anyway, I was rather thinking about old “standard” solutions like using
look-up tables to avoid expensive operations and that sort of stuff. The thing
is, in the past, multiplication, division, floating point math and various
other things were anything from “expensive” (10x slower than an addition or
so) to “uselessly expensive” (hundreds or thousands of cycles per
calculation), whereas these days, most of those operations are single cycle,
while running out of cache memory can cost tens or hundreds of cycles per
access, making LUTs viable only for really expensive operations.

All that said, the usual rules apply; optimize on the highest levels first,
reducing the work you actually have to do, and then benchmark and tune as
needed for your actual target platforms.



if you are rotating, shrinking, expanding etc., always have a larger than
required, blurred-up image; then you will find it retains its quality at
lower scales

just my two cents, i am gonna do some code now, honest.

if you are rotating shrinking expanding etc, always have a larger than required blurred up image, then you will find it retains its quality at lower scales

Hi, yes, this makes sense, though there is probably a limit… but it would probably work for most practical purposes, thanks for the idea =)

just my two cents, i am gonna do some code now, honest.

I believe you =)

Cheers,
MM


if you are rotating shrinking expanding etc, always have a larger than
required blurred up
image, then you will find it retains its quality at lower scales

Hi, yes, this makes sense, though there is probably a limit… but it would
probably work for most practical purposes, thanks for the idea =)

5% larger than you intend the largest use of the image, I think; then the
image is always scaled, if the image will be changing scale all the time.
It would help with speed issues as well.