FSAA and NVidia

I’m using an NVidia GeForce 2 card under Linux with a recent NVidia
driver. I was playing around with the new support for fsaa and quickly
noticed that while NVidia supports fsaa it does not support the
multi-sample extension. You have to set an environment variable to get
fsaa. __GL_FSAA_MODE=4 works on most NVidia cards.
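
(A note on scope, since it matters later in this thread: the driver picks the variable up from the environment of the GL program itself, so it can be set per run rather than session-wide. The program name below is a placeholder.)

```shell
# Per-run form of the NVIDIA FSAA knob; "./myglapp" is a placeholder name:
#   __GL_FSAA_MODE=4 ./myglapp
# The VAR=value prefix exports the variable to that one child process only,
# which this self-contained check demonstrates:
__GL_FSAA_MODE=4 sh -c 'echo "child sees mode $__GL_FSAA_MODE"'
echo "parent sees mode ${__GL_FSAA_MODE:-unset}"
```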

Can someone tell me, or point me to, information about which video
card/OS combinations support multi-sampling? And, can someone tell me if
other video cards require setting an environment variable or some other
tweaking to turn on fsaa?

I would like to build a solution that works across the majority of
OSes and video cards.

	Thanks

		Bob Pendleton

+----------------------------------+

I’m using an NVidia GeForce 2 card under Linux with a recent NVidia
driver. I was playing around with the new support for fsaa and quickly
noticed that while NVidia supports fsaa it does not support the
multi-sample extension. You have to set an environment variable to get
fsaa. __GL_FSAA_MODE=4 works on most NVidia cards.

It works for me on the latest NVidia drivers for Linux, with both a
GeForce 2 MX and a GeForce 3.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Bob Pendleton wrote:

I’m using an NVidia GeForce 2 card under Linux with a recent NVidia
driver. I was playing around with the new support for fsaa and quickly
noticed that while NVidia supports fsaa it does not support the
multi-sample extension. You have to set an environment variable to get
fsaa. __GL_FSAA_MODE=4 works on most NVidia cards.

I use a geforce ti 4200 with 44.96 (aka 1.0.4496) drivers and I didn’t
have to change anything for FSAA to work (and I checked __GL_FSAA_MODE
and it’s not defined). As I understand it, this flag forces FSAA under
every opengl application (after a quick test, this is the case for me).

Can someone tell me, or point me to, information about which video
card/OS combinations support multi-sampling?

FSAA also depends on other values like RGB and depth buffer bits. I
already had a similar problem with the stencil buffer: with some video
modes (on my TNT2 at least) it is supported and accelerated, but with
some others it’s emulated in software (!). I ended up avoiding the
stencil buffer as much as I could for this reason.
The glxinfo program can list all available visuals and their
capabilities. There is an “ms” column which tells which visuals support
multisampling.
Maybe we can compare glxinfo outputs from different cards and try to
extrapolate from this? Or look at the glxinfo source code to understand
how it detects which modes support it?
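
That comparison can be partly automated. In the glxinfo visuals table the last three columns are the multisample sample count (“ns”), buffer count (“b”), and caveat, so a small awk filter can pull out the multisample-capable visuals. A sketch, run here against two pasted sample rows so it is self-contained; on a real system, pipe `glxinfo` output into the same awk:

```shell
# Keep only visual rows whose sample count (third field from the right) is
# nonzero. The two printf lines stand in for real `glxinfo` output.
printf '%s\n' \
  '0x27 16 tc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None' \
  '0x28 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon' |
awk '$(NF-2) > 0 { print $1, "samples=" $(NF-2) }'
```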

And, can someone tell me if
other video cards require setting an environment variable or some other
tweaking to turn on fsaa?

I would like to build a solution that works across the majority of
OSes and video cards.

I’d be very interested if you find one !

Stephane

I’m using an NVidia GeForce 2 card under Linux with a recent NVidia
driver. I was playing around with the new support for fsaa and quickly
noticed that while NVidia supports fsaa it does not support the
multi-sample extension. You have to set an environment variable to get
fsaa. __GL_FSAA_MODE=4 works on most NVidia cards.

It works for me on the latest NVidia drivers for Linux, with both a
GeForce 2 MX and a GeForce 3.

When you run glxinfo does it list the multisample extension? Could you
send me a copy of the output of glxinfo on the GeForce 2 MX?

	Bob Pendleton

On Tue, 2003-09-02 at 13:31, Sam Lantinga wrote:

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

+----------------------------------+

Yeah, that is the thing. When I set __GL_FSAA_MODE=4 I get fsaa. But,
glxinfo shows no video modes that support ms. Which is why I asked the
question in the first place. Here is the output from glxinfo on my
system.

name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.3
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer
client glx vendor string: NVIDIA Corporation
client glx version string: 1.3
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync,
GLX_SGIX_swap_group, GLX_SGIX_swap_barrier, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_NV_float_buffer
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_ARB_get_proc_address
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce2 MX/AGP/3DNOW!
OpenGL version string: 1.4.0 NVIDIA 44.96
OpenGL extensions:
GL_ARB_imaging, GL_ARB_multitexture, GL_ARB_point_parameters,
GL_ARB_texture_compression, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat,
GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object,
GL_ARB_vertex_program, GL_ARB_window_pos, GL_S3_s3tc, GL_EXT_abgr,
GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
GL_EXT_compiled_vertex_array, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels,
GL_EXT_paletted_texture, GL_EXT_point_parameters,
GL_EXT_rescale_normal,
GL_EXT_secondary_color, GL_EXT_separate_specular_color,
GL_EXT_shared_texture_palette, GL_EXT_stencil_wrap,
GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod,
GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_vertex_array,
GL_IBM_texture_mirrored_repeat, GL_KTX_buffer_region,
GL_NV_blend_square,
GL_NV_fence, GL_NV_fog_distance, GL_NV_light_max_exponent,
GL_NV_packed_depth_stencil, GL_NV_pixel_data_range,
GL_NV_point_sprite,
GL_NV_register_combiners, GL_NV_texgen_reflection,
GL_NV_texture_env_combine4, GL_NV_texture_rectangle,
GL_NV_vertex_array_range, GL_NV_vertex_array_range2,
GL_NV_vertex_program,
GL_NV_vertex_program1_1, GL_NVX_ycrcb, GL_SGIS_generate_mipmap,
GL_SGIS_multitexture, GL_SGIS_texture_lod
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

On Tue, 2003-09-02 at 14:03, Stephane Marchesin wrote:

Bob Pendleton wrote:

I’m using an NVidia GeForce 2 card under Linux with a recent NVidia
driver. I was playing around with the new support for fsaa and quickly
noticed that while NVidia supports fsaa it does not support the
multi-sample extension. You have to set an environment variable to get
fsaa. __GL_FSAA_MODE=4 works on most NVidia cards.

I use a geforce ti 4200 with 44.96 (aka 1.0.4496) drivers and I didn’t
have to change anything for FSAA to work (and I checked __GL_FSAA_MODE
and it’s not defined). As I understand it, this flag forces FSAA under
every opengl application (after a quick test, this is the case for me).

Can someone tell me, or point me to, information about which video
card/OS combinations support multi-sampling?

FSAA also depends on other values like RGB and depth buffer bits. I
already had a similar problem with the stencil buffer: with some video
modes (on my TNT2 at least) it is supported and accelerated, but with
some others it’s emulated in software (!). I ended up avoiding the
stencil buffer as much as I could for this reason.
The glxinfo program can list all available visuals and their
capabilities. There is an “ms” column which tells which visuals support
multisampling.
Maybe we can compare glxinfo outputs from different cards and try to
extrapolate from this? Or look at the glxinfo source code to understand
how it detects which modes support it?


0x21 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x22 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x23 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x24 24 tc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x25 24 tc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x27 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x28 24 tc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x29 24 tc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x2a 24 tc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2b 24 tc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2c 24 tc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2d 24 tc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2e 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2f 24 dc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x30 24 dc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x31 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x32 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x33 24 dc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x34 24 dc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x35 24 dc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x36 24 dc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x37 24 dc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x38 24 dc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None

And, can someone tell me if
other video cards require setting an environment variable or some other
tweaking to turn on fsaa?

I would like to build a solution that works across the majority of
OSes and video cards.

I’d be very interested if you find one !

Stephane

	Bob Pendleton


+----------------------------------+

Bob Pendleton wrote:

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x21 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x22 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x23 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x24 24 tc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x25 24 tc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x27 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x28 24 tc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x29 24 tc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x2a 24 tc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2b 24 tc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2c 24 tc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2d 24 tc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2e 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2f 24 dc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x30 24 dc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x31 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x32 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x33 24 dc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x34 24 dc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x35 24 dc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x36 24 dc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x37 24 dc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x38 24 dc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None

Here is what I get :

[snip]

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce4 Ti 4200 with AGP8X/AGP/SSE/3DNOW!
OpenGL version string: 1.4.0 NVIDIA 44.96

[snip]

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat
----------------------------------------------------------------------
0x21 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x22 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x23 16 tc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x24 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x25 16 tc 0 16 0 r . . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x26 16 tc 0 16 0 r y . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x27 16 tc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x28 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon
0x29 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 2 1 Ncon
0x2a 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 4 1 Ncon
0x2b 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 4 1 Ncon
0x2c 16 dc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x2d 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x2e 16 dc 0 16 0 r . . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x2f 16 dc 0 16 0 r y . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x30 16 dc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x31 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon
0x32 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 2 1 Ncon
0x33 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 4 1 Ncon
0x34 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 4 1 Ncon

Quite strange, I have the same drivers (44.96) and the multisample
buffers/number of samples are listed fine.

Stephane

I’m starting to smell a memory size problem. My default depth is 24 bits
and all my visual depths are 24 bits. Yours are all 16 bits. That means
you are using half the memory that I am. There is a good chance that if
I changed my default depth to 16 it would work.

	Bob Pendleton

On Tue, 2003-09-02 at 16:09, Stephane Marchesin wrote:

Bob Pendleton wrote:

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x21 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x22 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x23 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x24 24 tc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x25 24 tc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x27 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x28 24 tc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x29 24 tc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x2a 24 tc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2b 24 tc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2c 24 tc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2d 24 tc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2e 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2f 24 dc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x30 24 dc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x31 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x32 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x33 24 dc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x34 24 dc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x35 24 dc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x36 24 dc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x37 24 dc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x38 24 dc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None

Here is what I get :

[snip]

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce4 Ti 4200 with AGP8X/AGP/SSE/3DNOW!
OpenGL version string: 1.4.0 NVIDIA 44.96

[snip]

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x21 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x22 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x23 16 tc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x24 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x25 16 tc 0 16 0 r . . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x26 16 tc 0 16 0 r y . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x27 16 tc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x28 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon
0x29 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 2 1 Ncon
0x2a 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 4 1 Ncon
0x2b 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 4 1 Ncon
0x2c 16 dc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x2d 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x2e 16 dc 0 16 0 r . . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x2f 16 dc 0 16 0 r y . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x30 16 dc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x31 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon
0x32 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 2 1 Ncon
0x33 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 4 1 Ncon
0x34 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 4 1 Ncon

Quite strange, I have the same drivers (44.96) and the multisample
buffers/number of samples are listed fine.

Stephane



+----------------------------------+

I’m starting to smell a memory size problem. My default depth is 24 bits
and all my visual depths are 24 bits. Yours are all 16 bits. That means
you are using half the memory that I am. There is a good chance that if
I changed my default depth to 16 it would work.

Nope, I tried setting my default depth to 16 and even reduced my
resolution. Still no sample buffers. I still get fsaa when I set
__GL_FSAA_MODE=4.

Are you folks sure you don’t have that flag set in the shell that starts
X?

On Tue, 2003-09-02 at 17:40, Bob Pendleton wrote:

On Tue, 2003-09-02 at 16:09, Stephane Marchesin wrote:

  Bob Pendleton

Bob Pendleton wrote:

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x21 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x22 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x23 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x24 24 tc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x25 24 tc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x26 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x27 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x28 24 tc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x29 24 tc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x2a 24 tc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2b 24 tc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2c 24 tc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x2d 24 tc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x2e 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x2f 24 dc 0 32 0 r . . 8 8 8 0 0 24 8 16 16 16 16 0 0 None
0x30 24 dc 0 32 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 None
0x31 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x32 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x33 24 dc 0 32 0 r . . 8 8 8 0 0 16 0 16 16 16 16 0 0 None
0x34 24 dc 0 32 0 r . . 8 8 8 8 0 16 0 16 16 16 16 0 0 None
0x35 24 dc 0 32 0 r y . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x36 24 dc 0 32 0 r y . 8 8 8 8 0 0 0 16 16 16 16 0 0 None
0x37 24 dc 0 32 0 r . . 8 8 8 0 0 0 0 16 16 16 16 0 0 None
0x38 24 dc 0 32 0 r . . 8 8 8 8 0 0 0 16 16 16 16 0 0 None

Here is what I get :

[snip]

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce4 Ti 4200 with AGP8X/AGP/SSE/3DNOW!
OpenGL version string: 1.4.0 NVIDIA 44.96

[snip]

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x21 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x22 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x23 16 tc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x24 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x25 16 tc 0 16 0 r . . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x26 16 tc 0 16 0 r y . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x27 16 tc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x28 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon
0x29 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 2 1 Ncon
0x2a 16 tc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 4 1 Ncon
0x2b 16 tc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 4 1 Ncon
0x2c 16 dc 0 16 0 r . . 5 6 5 0 0 16 0 16 16 16 16 0 0 None
0x2d 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x2e 16 dc 0 16 0 r . . 5 6 5 0 0 24 8 16 16 16 16 0 0 None
0x2f 16 dc 0 16 0 r y . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x30 16 dc 0 16 0 r . . 5 6 5 0 0 0 0 16 16 16 16 0 0 None
0x31 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 2 1 Ncon
0x32 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 2 1 Ncon
0x33 16 dc 0 16 0 r y . 5 6 5 0 0 16 0 16 16 16 16 4 1 Ncon
0x34 16 dc 0 16 0 r y . 5 6 5 0 0 24 8 16 16 16 16 4 1 Ncon

Quite strange, I have the same drivers (44.96) and the multisample
buffers/number of samples are listed fine.

Stephane



+----------------------------------+

Well I don’t have it set.

sami at high-voltage ~ $ set | grep FSAA
export __GL_FSAA_MODE=$1

which is a line of my fsaa setting function.
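
One caveat with that check: in bash, `set` also dumps shell functions and unexported variables, which is exactly why the grep hit a line of the helper function. `env` prints only exported variables, i.e. what a child process such as the X server would actually inherit, so it is the less ambiguous test:

```shell
# `set` can match a function body; `env` shows only the inherited environment.
# A helper of the same shape as the one quoted above (hypothetical name):
fsaa() { export __GL_FSAA_MODE=$1; }
# Merely defining the function exports nothing, so env finds no match:
env | grep __GL_FSAA_MODE || echo "__GL_FSAA_MODE is not exported"
```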

My glxinfo gives tons of modes from which of the following ones are ms
modes.

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

On Wednesday 03 September 2003 02:05, Bob Pendleton wrote:

On Tue, 2003-09-02 at 17:40, Bob Pendleton wrote:

On Tue, 2003-09-02 at 16:09, Stephane Marchesin wrote:

I’m starting to smell a memory size problem. My default depth is 24
bits and all my visual depths are 24 bits. Yours are all 16 bits.
That means you are using half the memory that I am. There is a good
chance that if I changed my default depth to 16 it would work.

Nope, I tried setting my default depth to 16 and even reduced my
resolution. Still no sample buffers. I still get fsaa when I set
__GL_FSAA_MODE=4.

Are you folks sure you don’t have that flag set in the shell that
starts X?


0x2e 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 2 1 Ncon
0x2f 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 2 1 Ncon
0x30 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 2 1 Ncon
0x31 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 2 1 Ncon
0x32 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 4 1 Ncon
0x33 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 4 1 Ncon
0x34 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 4 1 Ncon
0x35 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 4 1 Ncon
0x41 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 2 1 Ncon
0x42 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 2 1 Ncon
0x43 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 2 1 Ncon
0x44 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 2 1 Ncon
0x45 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 4 1 Ncon
0x46 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 4 1 Ncon
0x47 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 4 1 Ncon
0x48 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 4 1 Ncon

But my GFX card is a GF4 ti4600, so maybe that’s the difference.

I can check my GF2 GTS later today.

Could someone who has this working send me a copy of their XF86Config
file? I have tested fsaa at a lower pixel depth and lower resolution and
I still don’t get any multisample modes. My guess is that I either
missed something in the configuration, or my card just doesn’t support
multisample. I bought my GeForce 2/MX very shortly after they came
out and it may just not support everything…

It is so weird, when I set __GL_FSAA_MODE=4 I do get anti-aliasing. But,
I do not have any multisample visuals.

	Bob Pendleton

On Tue, 2003-09-02 at 22:24, Sami Näätänen wrote:

On Wednesday 03 September 2003 02:05, Bob Pendleton wrote:

On Tue, 2003-09-02 at 17:40, Bob Pendleton wrote:

On Tue, 2003-09-02 at 16:09, Stephane Marchesin wrote:

I’m starting to smell a memory size problem. My default depth is 24
bits and all my visual depths are 24 bits. Yours are all 16 bits.
That means you are using half the memory that I am. There is a good
chance that if I changed my default depth to 16 it would work.

Nope, I tried setting my default depth to 16 and even reduced my
resolution. Still no sample buffers. I still get fsaa when I set
__GL_FSAA_MODE=4.

Are you folks sure you don’t have that flag set in the shell that
starts X?

Well I don’t have it set.

sami at high-voltage ~ $ set | grep FSAA
export __GL_FSAA_MODE=$1

which is a line of my fsaa setting function.

My glxinfo gives tons of modes from which of the following ones are ms
modes.

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x2e 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 2 1 Ncon
0x2f 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 2 1 Ncon
0x30 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 2 1 Ncon
0x31 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 2 1 Ncon
0x32 24 tc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 4 1 Ncon
0x33 24 tc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 4 1 Ncon
0x34 24 tc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 4 1 Ncon
0x35 24 tc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 4 1 Ncon
0x41 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 2 1 Ncon
0x42 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 2 1 Ncon
0x43 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 2 1 Ncon
0x44 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 2 1 Ncon
0x45 24 dc 0 32 0 r y . 8 8 8 0 0 24 8 16 16 16 16 4 1 Ncon
0x46 24 dc 0 32 0 r y . 8 8 8 8 0 24 8 16 16 16 16 4 1 Ncon
0x47 24 dc 0 32 0 r y . 8 8 8 0 0 16 0 16 16 16 16 4 1 Ncon
0x48 24 dc 0 32 0 r y . 8 8 8 8 0 16 0 16 16 16 16 4 1 Ncon

But my GFX card is a GF4 ti4600, so maybe that’s the difference.

I can check my GF2 GTS later today.



+----------------------------------+

Bob Pendleton wrote:

Could someone who has this working send me a copy of their XF86Config
file? I have tested fsaa at a lower pixel depth and lower resolution and
I still don’t get any multisample modes. My guess is that I either
missed something in the configuration, or my card just doesn’t support
multisample. I bought my GeForce 2/MX very shortly after they came
out and it may just not support everything…

It is so weird, when I set __GL_FSAA_MODE=4 I do get anti-aliasing. But,
I do not have any multisample visuals.

  Bob Pendleton

Here is what I found :
http://groups.google.fr/groups?dq=&hl=fr&lr=&ie=UTF-8&oe=UTF-8&threadm=NeuYa.168783%24xg5.157833%40twister.austin.rr.com&prev=/groups%3Fdq%3D%26num%3D25%26hl%3Dfr%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26group%3Dcomp.graphics.api.opengl%26start%3D200

There seem to be two ways of getting fsaa (on nvidia cards at least):
supersampling and multisampling. With a geforce 1 or 2 you get
supersampling, while a geforce 3 or higher will give you multisampling.
So there are no multisample buffers on a geforce 2. The documentation
coming with the Nvidia drivers seems to confirm that:

__GL_FSAA_MODE GeForce, GeForce2, Quadro, and Quadro2 Pro
-----------------------------------------------------------------------
0 FSAA disabled
1 FSAA disabled
2 FSAA disabled
3 1.5 x 1.5 Supersampling
4 2 x 2 Supersampling
5 FSAA disabled
[notice “supersampling” here…]

__GL_FSAA_MODE GeForce4 MX, GeForce4 4xx Go, Quadro4 380,550,580 XGL,
and Quadro4 NVS

0 FSAA disabled
1 2x Bilinear Multisampling
2 2x Quincunx Multisampling
3 FSAA disabled
4 2 x 2 Supersampling
5 FSAA disabled
[… and “multisampling” here]

Stephane

Bob Pendleton wrote:

Could someone who has this working send me a copy of their XF86Config
file? I have tested fsaa at a lower pixel depth and lower resolution and
I still don’t get any multisample modes. My guess is that I either
missed something in the configuration, or my card just doesn’t support
multisample. I bought my GeForce 2/MX very shortly after they came
out and it may just not support everything…

It is so weird, when I set __GL_FSAA_MODE=4 I do get anti-aliasing. But,
I do not have any multisample visuals.

Bob Pendleton

Here is what I found :
http://groups.google.fr/groups?dq=&hl=fr&lr=&ie=UTF-8&oe=UTF-8&threadm=NeuYa.168783%24xg5.157833%40twister.austin.rr.com&prev=/groups%3Fdq%3D%26num%3D25%26hl%3Dfr%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26group%3Dcomp.graphics.api.opengl%26start%3D200

Thank You,

That matches what I found, and it matches (pretty closely) what I
said in my original message on the subject. But, Sam says it works for
him on his GeForce 2 MX?

Anyway, it looks like the multisample support that has been added to SDL
does not work on many cards that are capable of FSAA. The GeForce 2 MX
does a great job of FSAA via super sampling, but does not provide any
way to enable it through OpenGL.

In other words, multisample support gives access to a subset of the
available hardware support for FSAA.
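
Given that, the best a portable launcher seems able to do is probe for multisample visuals and otherwise fall back to the vendor knob. A rough sketch under those assumptions (the `GLXINFO` indirection exists only so the probe can be fed canned output; the awk column positions follow the glxinfo tables above, and the fallback is NVIDIA-specific):

```shell
# Probe: count visual rows with a nonzero sample count (third field from the
# right, per the glxinfo visuals tables); fall back to the NVIDIA-only
# supersampling variable when none exist. GLXINFO defaults to the real tool.
GLXINFO=${GLXINFO:-glxinfo}
ms_visuals=$($GLXINFO 2>/dev/null \
  | awk 'NF >= 3 && $(NF-2) ~ /^[0-9]+$/ && $(NF-2) > 0' | wc -l)
if [ "$ms_visuals" -gt 0 ]; then
  echo "multisample visuals available: $ms_visuals"
else
  export __GL_FSAA_MODE=4   # supersampling fallback for older NVIDIA cards
  echo "no multisample visuals; exported __GL_FSAA_MODE=4"
fi
```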

I know about the magic environment variable you have to set to turn on
FSAA for older NVidia cards. Are there similar bits of magic of the
other video cards (such as ATI)?

	Bob Pendleton

On Wed, 2003-09-03 at 18:23, Stephane Marchesin wrote:

There seem to be two ways of getting fsaa (on nvidia cards at least):
supersampling and multisampling. With a geforce 1 or 2 you get
supersampling, while a geforce 3 or higher will give you multisampling.
So there are no multisample buffers on a geforce 2. The documentation
coming with the Nvidia drivers seems to confirm that:

__GL_FSAA_MODE GeForce, GeForce2, Quadro, and Quadro2 Pro

0 FSAA disabled
1 FSAA disabled
2 FSAA disabled
3 1.5 x 1.5 Supersampling
4 2 x 2 Supersampling
5 FSAA disabled
[notice “supersampling” here…]

__GL_FSAA_MODE GeForce4 MX, GeForce4 4xx Go, Quadro4 380,550,580 XGL,
and Quadro4 NVS

0 FSAA disabled
1 2x Bilinear Multisampling
2 2x Quincunx Multisampling
3 FSAA disabled
4 2 x 2 Supersampling
5 FSAA disabled
[… and “multisampling” here]

Stephane



+----------------------------------+

That matches what I found, and it matches (pretty closely) what I
said in my original message on the subject. But, Sam says it works for
him on his GeForce 2 MX?

I could have sworn that it did, but I guess I was mistaken. I just
tried it and got:
Couldn’t set GL mode: Couldn’t find matching GLX visual

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment