SDL Digest, Vol 49, Issue 38

SDL Digest, Vol 49, Issue 35

Message: 5

To Mason, Kenneth and Nathaniel:
You need to go read the last remark in
http://wiki.libsdl.org/moin.cgi/SDL_CreateRGBSurfaceFrom
SDL_CreateRGBSurfaceFrom DOES NOT create a copy of the
pixel data. Thus, the most generic algorithm that
follows this formula is the one I described. Also, note
that I was very generic when talking about memory
management; in cases where the application supplies the
memory to the decoder, I would allocate space for the
frame once and just keep reusing it, but since the
original poster didn’t specify a movie format I couldn’t
really make any assumptions about how the memory would
be handled.
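
To make the no-copy part concrete, here's a rough sketch (untested; the decoder_* calls are made-up placeholders for whatever decoder is actually in use). SDL_FreeSurface() only releases the surface header in this case; the pixel buffer stays under the caller's control:

#include "SDL.h"

/* Hypothetical decoder API, purely for illustration. */
extern void *decoder_get_frame(void *dec, int *w, int *h, int *pitch);
extern void  decoder_release_frame(void *dec, void *frame);

void show_frame(void *dec)
{
    int w, h, pitch;
    void *pixels = decoder_get_frame(dec, &w, &h, &pitch);

    /* Wraps the decoder's buffer; SDL does NOT copy the pixel data. */
    SDL_Surface *frame = SDL_CreateRGBSurfaceFrom(pixels, w, h, 24, pitch,
                                                  0x0000FF, 0x00FF00,
                                                  0xFF0000, 0);

    /* ... convert or blit the surface here ... */

    SDL_FreeSurface(frame);              /* frees only the surface header */
    decoder_release_frame(dec, pixels);  /* the pixels were never SDL's to free */
}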

You still need to get the data onto the screen, which
means blitting from your surface to the screen. You
cannot use SDL_CreateRGBSurfaceFrom to turn a buffer
in memory into a hardware surface. What I suggested
is that you render directly to the screen to save
yourself a blit.
VGA-style stuff huh? Nice, but not quite right. Let’s examine some
issues with what you’re suggesting.

  1. What in the world gave you the idea that you have direct access to
    the actual hardware buffers? That’s DOS, and maybe Amiga style. It’s
    possible to implement windowing systems with direct hardware access,
    but it will almost always be rather ornate (because there can be other
    applications using the screen, so you have to avoid overwriting their
    areas), and for safety reasons it’s actually better to do precisely
    what you’re saying not to: add a blit! On the other hand, what if your
    video stream is actually a memory-mapped camera? You just told the
    library to perform a blit! On this alone, I call your argument bunk.
    Besides which, that sort of muckery belongs in the actual SDL library
    itself so that more eyes will look over it, since it can be extremely
    sensitive to minor errors if you’re dealing with shared hardware.
  2. In 1.3, SDL_Surface structures no longer represent hardware
    surfaces; textures fill that role instead. If you thought otherwise
    then you need to go read the documentation again. SDL 1.3 is designed
    with a focus on using GPU optimizations whenever it’s realistic. In
    cases where there isn’t a GPU, so everything has to be implemented in
    software, the algorithm I described does have a likely extraneous blit
    operation (the conversion to a texture), but on all ‘modern’ systems
    (by which I mean: systems that prevent you from writing directly to
    the screen so that you can’t screw around with other programs’
    windows) that will be the ONLY unneeded blit, and on ‘modern’ systems
    with a GPU it will actually be REQUIRED (see the sketch below).
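
For reference, one frame of the algorithm I described looks roughly like this, written against the renderer API as it settled in the 2.0 headers (the 1.3 signatures were still moving around, so take the exact calls with a grain of salt). The SDL_CreateTextureFromSurface() call is the one 'extraneous' blit in the no-GPU case, and the required upload in the GPU case:

#include "SDL.h"

/* One frame through the wrap -> texture -> screen path.  Everything except
 * the SDL calls is a placeholder. */
static void present_frame(SDL_Renderer *renderer,
                          void *pixels, int w, int h, int pitch)
{
    SDL_Surface *frame = SDL_CreateRGBSurfaceFrom(pixels, w, h, 32, pitch,
                                                  0x00FF0000, 0x0000FF00,
                                                  0x000000FF, 0);
    SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, frame); /* the blit */
    SDL_FreeSurface(frame);   /* header only; the decoder still owns the pixels */

    SDL_RenderCopy(renderer, tex, NULL, NULL);
    SDL_RenderPresent(renderer);
    SDL_DestroyTexture(tex);  /* or cache it if the size and format never change */
}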

Message: 3

Yeah, you could do that, but that’s a huge amount of
memory churn, and would probably consume tons of CPU.
I wouldn’t be surprised if it noticeably lagged the
system for high resolution videos. Is there any
native SDL-level support for video in 1.3?
Not much of the memory churn in the algorithm that I
posted is actually avoidable.

When I talk about releasing the frame, I’m specifically
talking about informing decoder libraries that do their
own memory management that a particular frame is no
longer needed (if allowed to do it myself, I would only
allocate the memory once per movie play-through, but
some libraries may not work that way).

Then you do not use those libraries.
I may be prone to reinventing the wheel, but even I’m not bad enough
to replace a decoder library just because I don’t like the way you
interface with it. Write wrappers? Sure, and if I know of an
alternative with a better API then I may try to use that instead, but
rejecting a likely complex library on the basis of its API is too
much.

Besides which, what do you do when the library knows that certain
memory regions can be copied to memory faster than others (e.g. a
decoder library that’s designed to decode movies inside of video
memory, so that the transfer only occurs across the relatively fast
video-card bus), but your application is designed to be cross-platform
(this might not even be standard hardware for the platform you
compiled for)? You might not see it now, but wait a few years and
you’ll probably see most of the decoders using GPU hardware to do the
actual decoding on the video card itself (whether you can actually
treat those examples as ordinary memory will admittedly vary).

In some cases you won’t need to allocate a new surface
for each frame, but unless you’re willing to alter
read-only members of the surface there will be some
formats that you’ll need to allocate a new surface for
(in most frames, those formats only encode the pixels
that have changed).
With those formats, you would use a back buffer and a
front buffer. On each frame you read from the back
buffer and write to the front buffer, then swap when
you’re done. You do not allocate a new buffer.

Odds are you won’t even need to do that. If only the
modified pixels are recorded, then you just write the new
pixels over the old ones. That can be done directly to
the screen. The only problem is when something
overwrites your window, but that’s fixed whenever the
encoder hits an I-frame.
I’d just create a new texture with the new pixels, render that over
the old texture (multiple times, in some cases), and (assuming that
the API isn’t one of the pared-down ones) copy the result into the
old texture to prepare for the next frame. Remember: GPU textures.
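
Something like this, using render targets as they exist in the 2.0 headers (render-target support was still optional and in flux in the 1.3 snapshots, so treat it as the general idea rather than gospel). With a render target you don't even need the explicit copy-back step, since the accumulated texture is updated in place:

#include "SDL.h"

/* 'accum' is a persistent SDL_TEXTUREACCESS_TARGET texture holding the last
 * full frame; 'delta_pixels' holds only the changed region for this frame.
 * Everything except the SDL calls is a placeholder. */
static void apply_delta(SDL_Renderer *r, SDL_Texture *accum,
                        const void *delta_pixels, SDL_Rect where, int pitch)
{
    SDL_Texture *delta = SDL_CreateTexture(r, SDL_PIXELFORMAT_ARGB8888,
                                           SDL_TEXTUREACCESS_STATIC,
                                           where.w, where.h);
    SDL_UpdateTexture(delta, NULL, delta_pixels, pitch);

    SDL_SetRenderTarget(r, accum);        /* draw onto the accumulated frame */
    SDL_RenderCopy(r, delta, NULL, &where);
    SDL_SetRenderTarget(r, NULL);         /* back to the window */

    SDL_RenderCopy(r, accum, NULL, NULL); /* show the updated frame */
    SDL_RenderPresent(r);
    SDL_DestroyTexture(delta);
}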

If the hardware you’re targeting doesn’t support a GPU
then you’ll be better off avoiding the
surface-to-texture copy by rendering the surface
directly to the window, but I don’t think SDL 1.3
actually supports this.

SDL 1.3 still supports the old surface API, which does
support this.

SDL_RenderWritePixels can be used to copy from an
SDL_Surface or some other memory buffer to the screen
if you’re really determined to do it your way.
I’ll have to go look that up.
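
For reference, the old-surface-API route would be something like this (using the window-surface calls as they ended up in the 2.0 headers; untested, and the 1.3 names may differ slightly):

#include "SDL.h"

/* Blit a decoded frame (already wrapped in an SDL_Surface) straight onto the
 * window's surface; no renderer, no texture. */
static void blit_frame(SDL_Window *window, SDL_Surface *frame)
{
    SDL_Surface *screen = SDL_GetWindowSurface(window);
    SDL_BlitSurface(frame, NULL, screen, NULL);
    SDL_UpdateWindowSurface(window);   /* pushes the surface to the display */
}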

Message: 4

For video, you’re probably better off passing
screen->pixels and screen->pitch to the decoder to
render each frame, then draw your stuff on top if
necessary. No blits that way, just rendering the
video.
Outside of the surface-to-texture transition, there
aren’t any INHERENT blits with the algorithm I
described, because that particular SDL function uses the
pixel data that you provide it instead of allocating
some itself. That’s why I said to release the frame
AFTER you generate the texture (particularly since you
have no idea what the decoder will do with that memory
if the decoder originally allocated it). I suspect that
the function was originally created for precisely this
sort of scenario (either that, or images embedded in
your executable).
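
For the record, the screen->pixels route being suggested would look something like this in 1.2 terms (untested; decode_frame_into() is a made-up placeholder, and it assumes the decoder can output the screen's pixel format and honour its pitch):

#include "SDL.h"

/* Hypothetical decoder entry point: writes one frame directly into 'dst'. */
extern void decode_frame_into(void *dec, void *dst, int pitch, int w, int h);

void render_video_frame(void *dec, SDL_Surface *screen)
{
    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);

    /* Decode straight into the screen surface: no intermediate buffer, no blit. */
    decode_frame_into(dec, screen->pixels, screen->pitch,
                      screen->w, screen->h);

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);

    SDL_Flip(screen);   /* 1.2-style update of the display */
}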

Again, you still have to get your data onto the screen,
and any decoder that doesn’t let you use your own buffer
is junk.
Again, GPUs that require you to upload your images to them before you
can get them onto the screen make this a moot point, since you have to
perform a blit either way.

Date: Wed, 12 Jan 2011 07:31:41 -0500
From: Kenneth Bull
To: SDL Development List
Subject: Re: [SDL] About SMPEG library

On 12 January 2011 00:32, Jared Maddox <@Jared_Maddox> wrote:

Date: Tue, 11 Jan 2011 14:45:57 -0800 (PST)
From: Mason Wheeler

Date: Tue, 11 Jan 2011 18:49:35 -0500
From: Kenneth Bull

SDL Digest, Vol 49, Issue 38

Message: 7
Date: Wed, 12 Jan 2011 19:56:27 -0800 (PST)
From: Mason Wheeler
To: SDL Development List
Subject: Re: [SDL] About SMPEG library

From: Rainer Deyke
Subject: Re: [SDL] About SMPEG library

On 1/12/2011 05:31, Kenneth Bull wrote:

You still need to get the data onto the screen, which
means blitting from your surface to the screen. You
cannot use SDL_CreateRGBSurfaceFrom to turn a buffer in
memory into a hardware surface.

Hardware surfaces no longer exist in SDL 1.3.

Yeah they do, they’re just called “textures” now. ;-)

…which makes me think. Is there any way to accomplish
this, or at least to simplify it, using
SDL_UpdateTexture?
I wouldn’t describe it as a simplification per se, but that would be a
simple optimization if it winds up being faster. It looks like you
wouldn’t even need a surface (I had thought that function required
one). Does anyone know how its speed compares to
SDL_CreateTextureFromSurface() or SDL_RenderWritePixels()?
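
For what it's worth, the SDL_UpdateTexture() route would look roughly like this: one streaming texture created up front and updated in place every frame, so the SDL_Surface wrapper drops out entirely (sketch only; signatures as in the 2.0 headers):

#include "SDL.h"

/* video_tex is created once up front to match the movie, e.g.:
 *   SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
 *                     SDL_TEXTUREACCESS_STREAMING, w, h);
 * After that, each decoded frame just gets copied into it. */
void upload_and_present(SDL_Renderer *renderer, SDL_Texture *video_tex,
                        const void *frame_pixels, int pitch)
{
    SDL_UpdateTexture(video_tex, NULL, frame_pixels, pitch); /* no SDL_Surface at all */
    SDL_RenderCopy(renderer, video_tex, NULL, NULL);
    SDL_RenderPresent(renderer);
}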