Integrating and using ffmpeg or something else to play movies

Hi all,

I’m wanting to incorporate a movie player kind of thing to play some transition
movies within the game I’m writing.

Basically, all it needs to do is just keep playing the frames until the end or
certain events (e.g. keypress or mouse button) occur, at which point the
playback ends and the game moves on to the next stage.

I’ve had a look at ffmpeg, which says that it is compatible with SDL, but I’m
having difficulty patching the two together to get this to work. The framework
is a class called SplashScreen (which will be used for other things too, so the
name is not so good), and a particular event loop which calls a RenderFrame
method of the SplashScreen object. What I’d like to do is decode a frame
within the RenderFrame method and blit it to the screen buffer passed in.

If anyone has had any success doing this kind of thing, can you give me some
pointers on what to call from within RenderFrame? Is there a better library than
ffmpeg for this?

TIA,
-J
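For reference, the decode-and-blit flow being asked about generally looks like
this (an outline only; mFormatCtx and friends are placeholder names, and
avcodec_decode_video / sws_scale are the 2007-era ffmpeg calls discussed later
in this thread):

```
RenderFrame(screen):
    while av_read_frame(mFormatCtx, &pkt) succeeds:
        if pkt is not from the video stream: free pkt and continue
        avcodec_decode_video(mCodecCtx, mFrameYUV, &frameFinished,
                             pkt.data, pkt.size)
        if frameFinished:
            sws_scale the YUV frame into an RGB frame sized to the surface
            copy the RGB rows into screen->pixels (lock the surface first,
                and respect both the AVFrame linesize and screen->pitch)
            SDL_Flip(screen) or SDL_UpdateRect(...)
            free pkt and return        // one frame per RenderFrame call
        free pkt
    signal end-of-movie so the event loop can advance the game
```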

I hope this will answer your question: it’s a link to a really great
tutorial (the section on decoding the video and drawing it to an
SDL_Surface), and is basically what I used when getting started with
ffmpeg.

Unfortunately I can’t help you more directly, because I use OpenGL
for the video in all my projects, and SDL for, well, everything
except the video. :slight_smile: So my program decodes the frame and uploads it
to a texture; and I have never used SDL for drawing, so I don’t even
know how it works! >.>

Hopefully, the tutorial can help you out, or at least point you in
the right direction.

– Scott

On 2007/08/19, at 22:51, Jason wrote:

…snip…


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

Scott,

Do you have a link to the example using OpenGL and textures?

thanks,
john

On 8/20/07, Scott Harper wrote:

…snip…

Oy… I only have my source files, and I’m not sure how to make them
understandable without putting too much text on the mailing list.
There really isn’t a tutorial that I know of for using OpenGL this way.

Actually, here’s what you can do once you’ve gotten the image data
decoded (as in the tutorial I linked to before):

avcodec_decode_video( mVideoCodecCtx, mVideoFrameYUV, &frameFinished,
pkt.data, pkt.size );

I convert each frame to RGB (they’re decoded to YUV for every video
format I’ve used) like so:

sws_scale( mSwsCtx, mVideoFrameYUV->data, mVideoFrameYUV->linesize,
           0, mVideoCodecCtx->height, mVideoFrameRGB->data,
           mVideoFrameRGB->linesize );

(This code is copied from my project, so I apologize if the variable
names aren’t helpful. I try my best, and if you check the ffmpeg
Doxygen for the functions you can probably figure them out. If not
let me know off-list and I’ll see if I can go into greater depth then.)

Then I use the mVideoFrameRGB->data value to upload as a sub-texture
to OpenGL using this function:

void VE_gcnMoviePane::setData( GLubyte *d )  // raw RGB pixel bytes
{
    glBindTexture( GL_TEXTURE_2D, mTexNumber );
    glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0,
                     mSubTexWidth, mSubTexHeight,
                     GL_RGB, GL_UNSIGNED_BYTE, d );
}

Does that help? Anyway, we’ve lost the topic (SDL), so if you don’t
get it from this, let me know off-list. :slight_smile:

It seems like there should already be some kind of library that uses
ffmpeg to decode video to SDL… But maybe it’s simple enough that
wrapper libraries would just be excess overhead?

– Scott

On 2007/08/20, at 14:39, John M. wrote:

Scott,

Do you have a link to the example using OpenGL and textures?

thanks,
john

…snip…

Scott Harper <orcein gmail.com> writes:

G’day Scott,

Do you have that link?

Ta,
-J

On Tuesday, 21 August 2007, Scott Harper wrote:

Oy… I only have my source files, and I’m not sure how to make them
understandable without putting too much text on the mailing list.
There really isn’t a tutorial that I know of for using OpenGL this way.

Scott, you forgot to put the link! :slight_smile:

Oy… I only have my source files, and I’m not sure how to make them
understandable without putting too much text on the mailing list.
There really isn’t a tutorial that I know of for using OpenGL this way.

Actually, here’s what you can do once you’ve gotten the image data
decoded (as in the tutorial I linked to before):

As others have stated, you didn’t add the link. :slight_smile:

avcodec_decode_video( mVideoCodecCtx, mVideoFrameYUV, &frameFinished,
pkt.data, pkt.size );

I convert each frame to RGB (they’re decoded to YUV for every video
format I’ve used) like so:

It would be a good addition to use a GLSL version if the underlying hardware
supports it. After all, a fragment shader can do the YUV-to-RGB conversion
for you, which saves some CPU cycles for the decoding etc.

If you want to support multiple video formats, you can use a YUV-to-YUV
conversion on the CPU. This is much lighter than the full YUV-to-RGB
conversion, and you can then use a single YUV format in the fragment
shader.

sws_scale( mSwsCtx, mVideoFrameYUV->data, mVideoFrameYUV->linesize,
           0, mVideoCodecCtx->height, mVideoFrameRGB->data,
           mVideoFrameRGB->linesize );

…snip…

It seems like there should already be some kind of library that uses
ffmpeg to decode video to SDL… But maybe it’s simple enough that
wrapper libraries would just be excess overhead?

Well, ffmpeg really isn’t the easiest library to learn, so maybe that’s the
reason… Most likely it is, because a universal SDL_Video is quite hard to
design. After all, there are endless ways to use the decoded frames, and one
simply can’t support all of them with ease. Then there are the A/V-sync
problems, whether to support subtitles, etc. I designed such a library about
a year ago, but I was too busy to even really start experimenting with my
ideas.

On Tuesday 21 August 2007, Scott Harper wrote:

On Tuesday, 21 August 2007, Scott Harper wrote:

Oy… I only have my source files, and I’m not sure how well to make
it understandable without putting too much text on the mailing list.
There really isn’t a tutorial that I know of for using the OpenGL.

Scott, you forgot to put the link! :slight_smile:

Wow… that’s very embarrassing… I’m so sorry!

Here it is, for real this time:

http://www.dranger.com/ffmpeg/tutorial02.html

I’m used to forgetting to attach a document, but forgetting to paste
a link, that’s new for me!

Sorry, folks.
– Scott

On 2007/08/21, at 4:50, Alberto Luaces wrote:

It would be a good addition to use a GLSL version if the underlying hardware
supports it. After all, a fragment shader can do the YUV-to-RGB conversion
for you, which saves some CPU cycles for the decoding etc.

If you want to support multiple video formats, you can use a YUV-to-YUV
conversion on the CPU. This is much lighter than the full YUV-to-RGB
conversion, and you can then use a single YUV format in the fragment
shader.

Hey, that’s a darn fine idea there! The project is actually a video
editor, so I was planning on implementing GLSL at some point for
filter effects (like you said, why do it on the processor when you
can offload it to the video card?), but the YUV-RGB conversion,
that’s a great suggestion! I’m a little behind in development to get
that going QUITE yet (and I’ve not done anything with GLSL yet, so
there’s that learning project as well), but it’s going on my list of
things to try!

…snip…

It seems like there should already be some kind of library that uses
ffmpeg to decode video to SDL… But maybe it’s simple enough that
wrapper libraries would just be excess overhead?

Well, ffmpeg really isn’t the easiest library to learn, so maybe that’s the
reason… Most likely it is, because a universal SDL_Video is quite hard to
design. After all, there are endless ways to use the decoded frames, and one
simply can’t support all of them with ease. Then there are the A/V-sync
problems, whether to support subtitles, etc. I designed such a library about
a year ago, but I was too busy to even really start experimenting with my
ideas.

Indeed, the syncing issue is one that’s got me boggled on my project
just now, actually. There are just so many possibilities. Right now
I’m keeping track of the audio track’s play-through ratio (how far
through the video it is) and only updating the video to the next
frame when the audio equals/surpasses the ratio of that video frame.
I’m not entirely sure that’s an okay way to handle it, however,
since it’s so much simpler than how the SDL demo program (ffplay, I
think) handles the A/V-sync issue.

Also, I’ve no experience doing I/O in C/C++ (I learned to program
with Java, you see, and have taught myself C/C++ more or less on my
own, and this project is the first time I’ve needed to read and write
data), which is another bottleneck I’ve hit. When I get some free
time I’ll need to find some tutorials and follow some suggestions I
was given a while back.

– Scott

On 2007/08/21, at 7:23, Sami Näätänen wrote:

Don’t worry, your great tutorial was worth the wait! Thank you very much!

Have you looked at OpenML ( http://www.khronos.org/openml/ )? It’s not
something I’d really looked at until recently but Linux Magazine Issue
82 has an article on it. Not only does it make handling I/O and
synchronisation a lot easier, it’s also a superset of OpenGL with OpenGL
extensions specifically designed for media handling.

Not sure if it really integrates with SDL, mind, but it might be worth a
look.

On Tue, 2007-08-21 at 14:48 -0600, Scott Harper wrote:

…snip…



Scott Harper wrote:

On 2007/08/21, at 7:23, Sami Näätänen wrote:

…snip…

There’s a functioning video decoder based on ffmpeg, OpenGL and SDL in
libavg. It uses shaders to do exactly what you’re proposing, so if
you’re willing to put your code under LGPL, feel free to rip out the
shader. Have a look here:

https://www.libavg.de/websvn/filedetails.php?repname=libavg&path=%2Ftrunk%2Flibavg%2Fsrc%2Fplayer%2FSDLDisplayEngine.cpp&rev=0&sc=0

In particular, the checkYCbCrSupport() function sets up the shader. It’s
actually a very simple program that just takes three textures for y, u
and v and does a matrix multiplication to generate rgb colors from the
textures.

There are other nice things too, like support for Mesa and Apple
extensions for YCbCr (aka YUV) textures in the same source file.

Cheers,

Uli

Ulrich von Zadow | +49-172-7872715 | cocacoder at jabber.berlin.ccc.de

On Mon, 20 Aug 2007 04:51:33 +0000 (UTC), Jason
<jesus_freak at picknowl.com.au> wrote:
[…]

If anyone has had any success doing this kind of thing, can you give
me some pointers on what to call from within RenderFrame? Is there a
better library than ffmpeg for this?

Depends on your needs. ffmpeg is great and supports a lot of different
formats, but if all you want to do is play video within your game, and
the format of the video is up to you, you might be better off with
something like Ogg Theora instead (http://theora.org/). It’s not a
big additional dependency, especially not if you’re already using Ogg
Vorbis for audio; it’s got a BSD-like license, and there are no patent
issues to worry about. The Theora reference implementation is still in
the alpha stage, but according to their FAQ the format is frozen, and
the library itself seems to work fine. The game DROD: The City Beneath
uses Ogg Theora video without any issues. (DROD also uses SDL, and the
source code is available from the CaravelNet forum if you’re interested,
MPL-1.1 license. See http://www.caravelgames.com/.)

  • Gerry