SDL 1.3 and video libraries like VLC

Hi all,
I'm following the tutorial that renders video into an SDL surface, as shown here: http://wiki.videolan.org/LibVLC_SampleCode_SDL
I wanted to update that tutorial to use SDL 1.3 accelerated rendering, but since SDL_Texture doesn't expose a ->pixels field, I resorted to calling SDL_CreateTextureFromSurface() and then rendering the result.
Sample code (used somewhere in place of the SDL_BlitSurface call):

    /* Upload the surface to a texture, draw it, then free the texture;
       creating a new texture every frame is itself expensive. */
    SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, ctx.surf);
    SDL_RenderCopy(renderer, tex, NULL, NULL);
    SDL_RenderPresent(renderer);
    SDL_DestroyTexture(tex); /* avoid leaking a texture per frame */

I was wondering: is this efficient? Since decoding is done on the GPU these days, wouldn't it be possible to keep both the video surface and the SDL texture in video memory?
Or have I got this wrong, and there is no loss during the memory transfer?
Vittorio

I was actually playing with this last night. Check out the updated
test/testoverlay2.c, which does what you want.

Basically you create a streaming YUV texture, and then either lock and
write to it each frame, or update it each frame with your video
output. Locking is preferable if your video output supports it, but
if not then texture update is fine.
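
Roughly, it looks like this (a sketch, assuming a 1920x1080 YV12 source and the current 1.3 texture API; frame_pixels and frame_pitch stand in for wherever your decoder puts a finished frame):

    /* Create once, at startup: a texture you can lock or update each frame. */
    SDL_Texture *tex = SDL_CreateTexture(renderer,
                                         SDL_PIXELFORMAT_YV12,
                                         SDL_TEXTUREACCESS_STREAMING,
                                         1920, 1080);

    /* Per frame, option 1: lock the texture and write the planes directly. */
    void *pixels;
    int pitch;
    if (SDL_LockTexture(tex, NULL, &pixels, &pitch) == 0) {
        /* write the decoded Y, V and U planes into pixels here */
        SDL_UnlockTexture(tex);
    }

    /* Per frame, option 2: upload a complete frame in one call. */
    SDL_UpdateTexture(tex, NULL, frame_pixels, frame_pitch);

    /* Either way, then draw it. */
    SDL_RenderCopy(renderer, tex, NULL, NULL);
    SDL_RenderPresent(renderer);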

See ya!

    -Sam Lantinga, Founder and CEO, Galaxy Gameworks

Thanks for the tip! I was able to reduce CPU usage from 60% to 20% by using SDL_UpdateTexture.
I'm now trying to remove the remaining 20%, which comes from the video functions moving decoded data from the GPU (since VLC uses video acceleration) back into main memory.
Here is some code:

static void *lock(void *data, void **p_pixels)
{
    struct ctx *ctx = data;

    SDL_LockMutex(ctx->mutex);
    SDL_LockSurface(ctx->surf);
    *p_pixels = ctx->surf->pixels; /* VLC decodes into this buffer */

    return NULL; /* picture identifier, not needed here */
}

static void unlock(void *data, void *id, void *const *p_pixels)
{
    struct ctx *ctx = data;

    SDL_UnlockSurface(ctx->surf);
    SDL_UnlockMutex(ctx->mutex);

    assert(id == NULL); /* picture identifier, not needed here */
}

/* VLC wants to display the video */
static void display(void *data, void *id)
{
    struct ctx *ctx = data;

    /* copy the finished frame from the surface into the texture,
       using the surface's real pitch rather than a hard-coded 1920*4 */
    SDL_UpdateTexture(ctx->tex, NULL, ctx->surf->pixels, ctx->surf->pitch);

    assert(id == NULL);
}

As you can see, the decoded data still has to land in system memory through the buffer handed out by *p_pixels = ctx->surf->pixels;, and copying it around is what's eating the CPU.
How can I grab the decoded video without that performance hit? Is there an SDL function that might help me? If not, any suggestions?
Or is my reasoning wrong, and is it normal that calling these SDL renderer functions costs CPU time?
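
For what it's worth, one direction I'm considering (just an untested sketch: it assumes libvlc has been told to output a 32-bit RGB chroma with libvlc_video_set_format, that ctx->tex is a streaming texture of matching size, and that the main loop owns the drawing) is to lock the texture itself in the lock callback, so VLC decodes straight into the texture's buffer and the surface copy disappears:

static void *lock(void *data, void **p_pixels)
{
    struct ctx *ctx = data;
    int pitch;

    SDL_LockMutex(ctx->mutex);
    /* hand VLC the texture's own buffer instead of a surface */
    SDL_LockTexture(ctx->tex, NULL, p_pixels, &pitch);

    return NULL; /* picture identifier, not needed here */
}

static void unlock(void *data, void *id, void *const *p_pixels)
{
    struct ctx *ctx = data;

    SDL_UnlockTexture(ctx->tex);
    SDL_UnlockMutex(ctx->mutex);
}

static void display(void *data, void *id)
{
    /* nothing left to copy here; the main thread's loop does
       SDL_RenderCopy/SDL_RenderPresent under ctx->mutex, since SDL
       rendering isn't safe to call from VLC's decoding thread */
}

That would still leave the decode-output readback if VLC really decodes on the GPU, but it should at least drop the extra surface-to-texture copy.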

thanks
Vittorio

SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org