Hardware Surfaces on X11 / nVidia TNT2

Hi. I’m a newbie and I’m trying to develop a biomedical application
(it acquires a video image, processes it, and then displays it) using SDL.

I can’t get hardware surfaces. My code so far is very simple:

/* Initialize the SDL library */
if ( SDL_Init(SDL_INIT_VIDEO) < 0 ) {
    fprintf(stderr, "Couldn't initialize SDL: %s\n", SDL_GetError());
    exit(1);
}

/* Clean up on exit */
atexit(SDL_Quit);

screen = SDL_SetVideoMode(1024, 768, 8,
            SDL_HWSURFACE|SDL_FULLSCREEN|SDL_HWPALETTE|SDL_DOUBLEBUF);
if ( screen == NULL ) {
    fprintf(stderr, "Couldn't set 1024x768x8 video mode: %s\n", SDL_GetError());
    exit(1);
}

After this, the only flags that I’m getting are:

SDL_HWPALETTE
SDL_FULLSCREEN

Then, the SDL_Flip() function takes 13 msec, which is way too slow for
my application.

My configuration:

Pentium 4 1.7 GHz - nVidia TNT2 Model 64 (using the latest driver)
Red Hat 8.0 - kernel 2.4.18-24.8.0 - using KDE 3.1.1

I used to do the same thing in Windows, using DirectDraw, and the flip
operation would take ~4msec, so I know it’s possible.

What’s going on?

Thanks!!!

Carlos


Which video drivers are you using? If you are using the default RedHat
drivers you can’t access hardware buffers. If you use the nVidia
drivers, you can use hardware buffers.

The new NVidia linux drivers are exceptional. The installer does all the
work for you. All you have to do is run it.

Bob Pendleton

+-----------------------------------+
+ Bob Pendleton: independent writer +
+ and programmer.                   +
+ email: Bob at Pendleton.com       +
+-----------------------------------+

Hi Bob. I just did that, downloaded the new drivers and installed them
(and yes, they are VERY easy to install). I modified the XF86Config
file as suggested, and everything seems to be alright.

Anyways, how can I make sure I’m using the new nvidia driver?

Carlos


Carlos Vrancken

You can check the kernel driver version with:
cat /proc/driver/nvidia/version
and make sure X is using it with:
xdpyinfo | grep NV -
should return NV-CONTROL, NVIDIA-GLX, and maybe some others.


Max Watson

Hi Bob. I just did that, downloaded the new drivers and installed them
(and yes, they are VERY easy to install). I modified the XF86Config
file as suggested, and everything seems to be alright.

Anyways, how can I make sure I’m using the new nvidia driver?

Make sure you have restarted your X server. You will not get any effect
from the new drivers or the modified XF86Config until you do.

Now, I’m betting you already did that, so there are a few ways to test for
the new drivers. Run glxgears; if you get a frame rate in the several-hundred
range, you are using the new drivers. Run glxinfo and look at the vendor
string; if it says NVIDIA, you have the right drivers. Or grep
/var/log/XFree86.* for “nvidia”. One of those will work for you.

Bob Pendleton


Below is the information I get. It looks like I AM using the new driver.
So, what can be happening?

Thanks a lot!

Carlos

$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA Linux x86 nvidia.o Kernel Module 1.0-4349 Thu Mar
27 19:00:02 PST 2003
GCC version: gcc version 3.2

$ xdpyinfo |grep NV -
NV-CONTROL
NV-GLX
NV-GLX
NVIDIA-GLX

$ glxgears
884 frames in 5.0 seconds = 176.800 FPS

$ glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.3
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer
client glx vendor string: NVIDIA Corporation
client glx version string: 1.3
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync,
GLX_SGIX_swap_group, GLX_SGIX_swap_barrier, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_NV_float_buffer
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_ARB_get_proc_address
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: RIVA TNT2/AGP/SSE2
OpenGL version string: 1.4.0 NVIDIA 43.49
OpenGL extensions:
GL_ARB_imaging, GL_ARB_multitexture, GL_ARB_point_parameters,
GL_ARB_texture_env_add, GL_ARB_texture_mirrored_repeat,
GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object,
GL_ARB_window_pos,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_compiled_vertex_array,
GL_EXT_draw_range_elements, GL_EXT_fog_coord,
GL_EXT_multi_draw_arrays,
GL_EXT_packed_pixels, GL_EXT_point_parameters,
GL_EXT_rescale_normal,
GL_EXT_secondary_color, GL_EXT_separate_specular_color,
GL_EXT_stencil_wrap, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_lod_bias,
GL_EXT_texture_object, GL_EXT_vertex_array,
GL_IBM_texture_mirrored_repeat, GL_KTX_buffer_region,
GL_NV_blend_square,
GL_NV_fog_distance, GL_NV_packed_depth_stencil,
GL_NV_texgen_reflection,
GL_NV_texture_env_combine4, GL_SGIS_multitexture
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat
----------------------------------------------------------------------
0x21 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x22 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x23 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x24 24 tc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x25 24 tc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x27 24 tc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x28 24 tc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x29 24 tc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2a 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2b 24 dc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x2c 24 dc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2d 24 dc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2e 24 dc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2f 24 dc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x30 24 dc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl


Carlos Vrancken

Below is the information I get. It looks like I AM using the new
driver. So, what can be happening?

You know, that is a good question. I use the NVidia drivers and Redhat
and I am having trouble getting some of my old hwbuffer code working.
What version of X are you using?

AFAIK you have to have the dga driver active in your server. You have to
have the environment variable SDL_VIDEODRIVER set to dga, and you have
to run the code as root.

All in all, it is easier to use OpenGL.

Bob Pendleton



I just found another problem: when I su to root and try to init_everything,
I get a message telling me that the user doesn’t own /tmp/mcop-bob (which
has something to do with sound). I just “upgraded” to RH 9.0 and I have
never seen this one before. Someone want to give me a pointer to a clue?

BTW, the hardware buffers work as long as I don’t try to init sound.

Bob Pendleton


Hi Bob and everybody.
You’re starting to prove that I am a newbie, both with Linux and SDL.
From my (KDE) Panel Center, under X-Server, I found this info:

Vendor Release Number: 40200000
Version Number: 11

Is that what you asked for?

Regarding the dga driver, would you give me some more details on how to
set that up?

Thanks!

Carlos
PS: is OpenGL really easier? I’m kinda scared of it. My application is
very simple, and I don’t want to spend tons of hours learning how to
set up OpenGL for a simple double-buffer operation mode.



Carlos Vrancken

Hi Bob and everybody.
You’re starting to prove that I am a newbie, both with linux and sdl.

Vendor Release Number: 40200000
Version Number: 11

Is that what you asked for?

Regarding the dga driver, would you give me some more details on how
to set that up?

Just look in /var/log/XFree86.* for DGA; it should already be there.
This link points to the FAQ on the subject, read it.
http://www.libsdl.org/faq.php?action=listentries&category=3#31

Also, be sure to lock your surfaces before you access them and unlock
when you are done.

Thanks!

Carlos
PS: is OpenGL really easier? I’m kinda scared of it. My application is
very simple, and I don’t want to spend tons of hours learning how to
set up OpenGL for a simple double-buffer operation mode.

If you don’t know anything about OpenGL, then it is probably not easier.
But, the SDL test programs have nice examples of how to set it up and
how to map an OpenGL window to act like a 2D SDL window. It is well
worth your while to learn enough about OpenGL to use it with SDL. But,
right now doesn’t sound like the time for you to learn OpenGL.
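For whenever that time comes, the setup really is only a handful of calls. A sketch of the SDL 1.2 + OpenGL path, with an orthographic projection so 2D drawing works in pixel coordinates (testgl.c in the SDL source tree is the fuller example; init_gl_screen is an illustrative name):

```c
#include "SDL.h"
#include <GL/gl.h>

/* Minimal double-buffered OpenGL window via SDL 1.2.  The GL driver
   performs the buffer swap itself, so SDL_GL_SwapBuffers() can be
   hardware-accelerated without DGA or root. */
int init_gl_screen(int w, int h)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return -1;
    atexit(SDL_Quit);

    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);   /* request double buffering */
    if (SDL_SetVideoMode(w, h, 0, SDL_OPENGL | SDL_FULLSCREEN) == NULL)
        return -1;

    /* Map GL coordinates 1:1 onto window pixels, with y growing
       downward like SDL's own coordinate system. */
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, (GLdouble)w, (GLdouble)h, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    return 0;
}

/* Per frame: draw the processed video image (glDrawPixels() or a
   textured quad), then call SDL_GL_SwapBuffers(). */
```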

Bob Pendleton


OK, that was it! I set the SDL_VIDEODRIVER env. var. to ‘dga’ and now
I’m able to use hardware surfaces… which of course are much faster
(around 7.5 msec for a fullscreen XGA flip).

Is there any way around being root to have access to that capability?

Thanks!

'happy' Carlos.

On Wed, 2003-04-23 at 16:44, Bob Pendleton wrote:

On Wed, 2003-04-23 at 15:09, Carlos Vrancken wrote:

Hi Bob and everybody.
You’re starting to prove that I am a newbie, both with linux and sdl.
From my (KDE) Panel Center, under X-Server, I found this info:

Vendor Release Number: 40200000
Version Number: 11

Is that what you asked for?

Regarding the dga driver, would you give me some more details on how
to set that up?

Just look in /var/log/XFree86.0.log for DGA; it should already be there.
This link points to the FAQ on the subject, read it.
http://www.libsdl.org/faq.php?action=listentries&category=3#31

Also, be sure to lock your surfaces before you access them and unlock
when you are done.

Thanks!

Carlos
PS: is OpenGL really easier? I'm kinda scared of it. My application is
very simple, and I don't want to spend tons of hours learning how to
set up OpenGL for a simple double-buffer operation mode.

If you don’t know anything about OpenGL, then it is probably not easier.
But, the SDL test programs have nice examples of how to set it up and
how to map an OpenGL window to act like a 2D SDL window. It is well
worth your while to learn enough about OpenGL to use it with SDL. But,
right now doesn’t sound like the time for you to learn OpenGL.

  Bob Pendleton

On Wed, 2003-04-23 at 14:18, Bob Pendleton wrote:

On Wed, 2003-04-23 at 13:29, Carlos Vrancken wrote:

Below is the information I get. It looks like I AM using the new
driver. So, what can be happening?

You know, that is a good question. I use the NVidia drivers and Redhat
and I am having trouble getting some of my old hwbuffer code working.
What version of X are you using?

AFAIK you have to have the dga driver active in your server. You have to
have the environment variable SDL_VIDEODRIVER set to dga, and you have
to run the code as root.

All in all, it is easier to use OpenGL.

  Bob Pendleton

Thanks a lot!

Carlos

$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA Linux x86 nvidia.o Kernel Module 1.0-4349 Thu
Mar 27 19:00:02 PST 2003
GCC version: gcc version 3.2

$ xdpyinfo |grep NV -
NV-CONTROL
NV-GLX
NV-GLX
NVIDIA-GLX

$ glxgears
884 frames in 5.0 seconds = 176.800 FPS

$ glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.3
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer
client glx vendor string: NVIDIA Corporation
client glx version string: 1.3
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample,
GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync,
GLX_SGIX_swap_group, GLX_SGIX_swap_barrier, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_NV_float_buffer
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_ARB_get_proc_address
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: RIVA TNT2/AGP/SSE2
OpenGL version string: 1.4.0 NVIDIA 43.49
OpenGL extensions:
GL_ARB_imaging, GL_ARB_multitexture, GL_ARB_point_parameters,
GL_ARB_texture_env_add, GL_ARB_texture_mirrored_repeat,
GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object,
GL_ARB_window_pos,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_compiled_vertex_array,
GL_EXT_draw_range_elements, GL_EXT_fog_coord,
GL_EXT_multi_draw_arrays,
GL_EXT_packed_pixels, GL_EXT_point_parameters,
GL_EXT_rescale_normal,
GL_EXT_secondary_color, GL_EXT_separate_specular_color,
GL_EXT_stencil_wrap, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_lod_bias,
GL_EXT_texture_object, GL_EXT_vertex_array,
GL_IBM_texture_mirrored_repeat, GL_KTX_buffer_region,
GL_NV_blend_square,
GL_NV_fog_distance, GL_NV_packed_depth_stencil,
GL_NV_texgen_reflection,
GL_NV_texture_env_combine4, GL_SGIS_multitexture
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x21 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x22 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x23 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x24 24 tc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x25 24 tc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x27 24 tc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x28 24 tc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x29 24 tc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2a 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2b 24 dc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x2c 24 dc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2d 24 dc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2e 24 dc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2f 24 dc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x30 24 dc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None


OK, that was it! I set the SDL_VIDEODRIVER env. var. to ‘dga’ and now
I’m able to use Hardware surfaces… which of course are much faster
(around 7.5 msec for a fullscreen XGA flip).

Is there any way around being root to have access to that capability?

As far as I have been able to find out, the answer is no. Either the
user running the program must be root, or the program must be setuid to
root. The requirement to be root to use hardware buffers on Linux
seriously reduces the usefulness of hardware buffers on Linux. Yet
another reason to use OpenGL. :-)

If anyone knows another way to get hardware buffers under Linux without
having to be root, please let me know. I’m writing an article on the
subject and I want to be as accurate as possible.

Bob Pendleton

I wrote a patch for the DGA extension which uses the framebuffer console
to get a pointer to the video memory. It works fine, but only if you have
the framebuffer console set up properly, which often isn’t the case.

The patch is attached for your perusal, but I haven’t tested it in a while.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment
diff -u -r1.14 XF86DGA2.c
--- XF86DGA2.c	2000/05/23 04:47:35	1.14
+++ XF86DGA2.c	2000/08/29 15:53:10
@@ -23,6 +23,9 @@
 #include "extutil.h"
 #include <stdio.h>
 
+#if defined(linux) /* Needed for framebuffer console support */
+#include <linux/fb.h>
+#endif
 
 /* If you change this, change the Bases[] array below as well */
 #define MAX_HEADS 16
@@ -909,7 +912,31 @@
     if (!name)
 	name = DEV_MEM;
     if ((pMap->fd = open(name, O_RDWR)) < 0)
+#if defined(linux)
+    { /* /dev/fb0 fallback added by Sam Lantinga */
+	/* Try to fall back to /dev/fb on Linux - FIXME: verify the device */
+	struct fb_fix_screeninfo finfo;
+
+	if ((pMap->fd = open("/dev/fb0", O_RDWR)) < 0) {
+	    return False;
+	}
+	/* The useable framebuffer console memory may not be the whole
+	   framebuffer that X has access to. :-(
+	 */
+	if ( ioctl(pMap->fd, FBIOGET_FSCREENINFO, &finfo) < 0 ) {
+	    close(pMap->fd);
+	    return False;
+	}
+	/* Warning: On PPC, the size and virtual need to be offset by:
+	   (((long)finfo.smem_start) -
+	   (((long)finfo.smem_start)&~(PAGE_SIZE-1)))
+	 */
+	base = 0;
+	size = finfo.smem_len;
+    }
+#else
 	return False;
+#endif
     pMap->virtual = mmap(NULL, size, PROT_READ | PROT_WRITE,
 			 MAP_FILE | MAP_SHARED, pMap->fd, (off_t)base);
     if (pMap->virtual == (void *)-1)

As far as I have been able to find out, the answer is no. Either the
user running the program must be root, or the program must be setuid to
root. The requirement to be root to use hardware buffers on Linux
seriously reduces the usefulness of hardware buffers on Linux. Yet
another reason to use OpenGL. :-)

Pardon my ignorance, but how does OpenGL do it? Could not SDL be
configured (or rewritten?) to do it the same way?

Chris



Pardon my ignorance, but how does OpenGL do it? Could not SDL be
configured (or rewritten?) to do it the same way?

As I understand it this is a case of different drivers with different
access controls. OpenGL is accessed directly from the application, it
works with X, not through it, so access is a lot like accessing the file
system. OTOH, DGA is built into X and has to have root privileges
because X has to have root permissions. I know this is not an in depth
answer, in fact it could be wrong, but frankly I never cared enough to
look up the details.

Yes, SDL could be written to do all of its work through OpenGL, but then
the core of SDL would not be portable to machines and OSes that don’t
have OpenGL. Look at the list of systems that SDL runs on and you can
see that requiring OpenGL would be a bad idea. OTOH, David Olofson has
written a version of SDL that does its graphics through OpenGL.

Bob Pendleton

On 2003.04.25 07:52, Bob Pendleton wrote:

Yes, SDL could be written to do all of its work through OpenGL, but then
the core of SDL would not be portable to machines and OSes that don't
have OpenGL. Look at the list of systems that SDL runs on and you can
see that requiring OpenGL would be a bad idea. OTOH, David Olofson has
written a version of SDL that does its graphics through OpenGL.

Hi

OpenGL isn't an ideal solution, because it only works correctly if you
have hardware access to the card and the card's driver supports HW
acceleration of OpenGL. As far as I know, under Linux only two families
of cards give you this (NVIDIA and ATI), via binary drivers.

SDL is only a wrapper over the video subsystems that already exist on
your system (and if your system has more than one video subsystem, SDL
may support all of them). For example, Windows output can work via:

  • DirectX - HW acceleration available
  • WinAPI - no HW acceleration available (I never tried it fully)

Linux has:

  • X - no HW acceleration available (you don't need root)
  • DGA - HW acceleration available (depends on the card, but you always
    must be root)
  • GGI (not tested by me, but some cards can have HW acceleration
    depending on GGI support)
  • AAlib (not tested by me)
  • DirectFB (not tested by me, but some cards can have HW acceleration
    depending on DirectFB support - Matrox cards)
  • FB console (not tested by me)
  • more…

As far as I know, BeOS has only one output.

HW acceleration in SDL depends on local system resources: X can give it
only via DGA, and Windows gives it via DirectX. Using OpenGL means using
another graphics subsystem, but then the system must have HW OpenGL
support (binary drivers), because SW OpenGL (Mesa) is much slower than
SDL's SW video functions (e.g. via the X output). The main advantage of
OpenGL is that, like SDL, it is available on more than one system, so
code can be reused across systems (where acceleration exists). OpenGL is
accelerated under Windows (SDL too), but under Linux you have a better
chance of finding acceleration for OpenGL than for SDL. And for OpenGL
you don't need to be root.

Trying to run an OpenGL program on a SW-OpenGL system (one without
binary drivers) is pure nonsense; you must state in the requirements
that the program needs HW OpenGL support.

Rafal




The difference is not just between SDL_GL_SwapBuffers() and SDL_Flip().

If you use OpenGL hardware drivers, the video card's processor is doing
all the drawing. Whereas if you use SDL or a software OpenGL driver, the
scene is drawn in main memory and then copied (or DMA'ed if you're
lucky) into the video card memory, and this is SLOW.

Nowadays I am starting to think that even for 2D one should go the
OpenGL/DirectX route because that is how the video cards now work.

Even MacOS X is now using OpenGL for the GUI.

Someone please correct me if I am wrong, but it seems that video cards
no longer support framebuffer modes like GDI and VESA in hardware, only
3D and basic 2D operations.

On Friday 25 April 2003 01:27 pm, Anders Folkesson wrote:

So, basically what you are saying is that it should be a lot faster to
do a SDL_GL_SwapBuffers() than a plain SDL_Flip() on a machine running X
(assuming no DGA here)?

I have been wrestling with this problem for a while, since I found out
that I only get ~40 fps in a very small window on my machine.

Anyway, I tried the OGL thing and it was the same… maybe I don't
understand you all clearly, but it is an interesting discussion…

–Anders


Paulo Pinto

So, basically what you are saying is that it should be a lot faster to do a
SDL_GL_SwapBuffers() than a plain SDL_Flip() on a machine running X (assuming no DGA here)?

No DGA, so you have software buffers on X. If you have hardware
accelerated OpenGL, then the SDL_GL_SwapBuffers() should be faster
than X because the SwapBuffers just needs to change a pointer and maybe
wait for vertical retrace while the SDL_Flip is copying the entire image
to the X server and then the server is writing it to the display. BUT,
if the buffer is small enough, it might appear to take the same amount
of time under both systems.

If you do NOT have hardware accelerated OpenGL then the OGL operations
are done in software and have to go the same process that SDL does to
get an image on the screen. The result is the OGL can actually be slower
than SDL.

There are many variables and so for some tests one will be faster while
for other tests the other technique will win.

Bob Pendleton


As far as I have been able to find out, the answer is no. Either the
user running the program must be root, or the program must be setuid to
root. The requirement to be root to use hardware buffers on Linux
seriously reduces the usefulness of hardware buffers on Linux. Yet
another reason to use OpenGL. :-)

A little off-topic, but just for completeness (and due to ignorance),
how do I setuid my program to root?

Thanks

Carlos

Carlos Vrancken

On Thu, 2003-04-24 at 12:59, Bob Pendleton wrote:
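No one in the thread answers the setuid question directly, so here is the usual recipe (a sketch; 'myapp' is a placeholder name, and note that a setuid-root binary runs with full privileges, so audit it carefully):

```shell
# Run these as root. 'myapp' is a placeholder for your SDL binary.
chown root:root myapp   # setuid grants the *owner's* identity, so the owner must be root
chmod u+s myapp         # set the setuid bit; 'ls -l' now shows "rws" in the user field
```

Inside the program it is prudent to call `setuid(getuid())` as soon as SDL_SetVideoMode() has succeeded, so the rest of the code runs unprivileged.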