Hi folks,
I would like to have your advice/comments/help on the following topic.
I am developing a "simple" video-viewing application on a Linux box.
The requirements of the application software are as follows:
- display a live PAL video (provided by a video camera hooked up to the
frame grabber via a composite signal) on the desktop at 25 Hz
- overlay animated 3D graphics (HUD-style, a tunnel) on top of the video image
The problem is:
- lousy performance
##################################################################
The hardware consists of:
- a (customer-furnished) PC
- graphics card: ???
- CPU: ???
- a PCMCIA frame grabber from HASOTEC (FG33)
The OS/driver software is as follows:
- SuSE 10.0
  uname -r:
  2.6.13-15-default
- graphics capability:
  glxinfo:
  direct rendering: Yes
  ...
  OpenGL vendor string: Tungsten Graphics, Inc
  OpenGL renderer string: Mesa DRI Intel(R) 852GM/855GM 20050225 x86/MMX/SSE2
  OpenGL version string: 1.3 Mesa 6.2.1
  ...
- a video4linux kernel module for the frame grabber, provided by HASOTEC
  lsmod | grep fg:
  fg3xv4l2 58624 0
  videodev 9088 1 fg3xv4l2
So I wrote an application with the following features:
- a Qt framework with only a GL widget inside
- continuously grabbing an image using a C++ class wrapping the typical
v4l access to the video device
- displaying the image in a QGLWidget
- drawing the 3D overlay
##################################################################
A very condensed overview of the software is as follows:
// initialize() opens the video input and sets up a texture to hold the grabbed
// image. The texture size is 1024*1024 to comply with the power-of-two (2^n)
// size requirement.
#define TEXTURE_WIDTH 1024
#define TEXTURE_HEIGHT 1024
void SystDisp_Video::initialize()
{
    video = new videoInput("/dev/video0");

    // Allocate a zeroed buffer so the unused texture border is defined.
    void *texdata = calloc(1, TEXTURE_WIDTH * TEXTURE_HEIGHT * 3);
    glGenTextures(1, &texture[0]);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_WIDTH, TEXTURE_HEIGHT,
                 0, GL_BGR, GL_UNSIGNED_BYTE, texdata);
    free(texdata); // GL has copied the data, so the buffer is no longer needed

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    // Scale texture coordinates so that (1,1) maps to the corner of the
    // actual video image inside the padded power-of-two texture.
    glMatrixMode(GL_TEXTURE);
    glScalef((float)video->width() / TEXTURE_WIDTH,
             (float)video->height() / TEXTURE_HEIGHT, 1);
    glMatrixMode(GL_MODELVIEW);
}
// paint() is called by a Qt timer set up with a 20 ms interval.
// grab_video() blocks until a new image is received.
// If 'texture' is set, the video image is drawn as a texture; in that case the
// texture content is updated via glTexSubImage2D().
// If 'texture' is not set, the image is drawn using glDrawPixels().
void SystDisp_Video::paint()
{
    video->grab_video();
    glLineWidth(1.0);
    glColor4f(1.0, 1.0, 1.0, 1.0);
    glPushMatrix();
    {
        if (texture)
        {
            //// TEXTURE
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                            video->width(), video->height(),
                            GL_BGR, GL_UNSIGNED_BYTE, video->currentFrame());
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(10.f, 10.f);             // bottom left
            glTexCoord2f(1.0f, 1.0f); glVertex2f(win_w - 10, 10.f);       // bottom right
            glTexCoord2f(1.0f, 0.0f); glVertex2f(win_w - 10, win_h - 10); // top right
            glTexCoord2f(0.0f, 0.0f); glVertex2f(10.f, win_h - 10);       // top left
            glEnd();
        }
        else
        {
            //// PIXEL
            glPixelZoom(1.0, -1.0); // flip vertically while drawing
            glRasterPos2i(0, win_h - 1);
            glDrawPixels(video->width(), video->height(), GL_BGR,
                         GL_UNSIGNED_BYTE, video->currentFrame());
        }
    }
    glPopMatrix();
}
##################################################################
The problems are as follows:
In both variants the application runs at only roughly 10 Hz. Commenting out
glTexSubImage2D() results in a nice, stable 25 Hz (matching the update rate
of the frame grabber).
The frame grabber driver comes with a demo application using v4l and SDL,
which runs at the desired 25 Hz:
init()
{
    ...
    g_screenSurface = SDL_SetVideoMode( 768, 576, 16,
                                        SDL_HWSURFACE | SDL_DOUBLEBUF );
    g_videoSurface = SDL_CreateRGBSurfaceFrom( currentframe(), 768, 576, 16,
                                               768*2, 0x000000, 0x000000,
                                               0x000000, 0 );
    ...
}
render()
{
    ...
    SDL_BlitSurface( g_videoSurface, &r, g_screenSurface, &r );
    ...
}
So my questions are:
- Is there anything I can do to speed up my Qt/OpenGL application? What about
the texture approach? All the time is eaten up in glTexSubImage2D(). Is there
an issue with the parameters?
- I do not necessarily need fancy texture mapping. Is there a more direct way
to bring the image to the frame buffer while still being able to combine it
with a 3D overlay?
- Is there a more performant Qt-only way (no OpenGL) to have the video plus
the 3D overlay?
- Why does SDL perform so much better than Qt/OpenGL? What is the appropriate
OpenGL way to get the same BlitSurface functionality?
- I would have no objections to writing the application in SDL, but I have no
clue how to do the 3D overlay in SDL.
Any help is welcome.
Peter