OpenGL fps!

Why can't I get more than 100 fps with OpenGL (doing nothing), while Direct3D shows 200 fps (also doing nothing)?

What does this question mean? What application are you running? Where’s
the code?

???

Steve

On April 16, 2002 07:29 pm, IronRaph at aol.com wrote:

Why can't I get more than 100 fps with OpenGL (doing nothing), while Direct3D shows 200 fps (also doing nothing)?

Hi!
You're probably running NVidia drivers, and those cap the rendering at the same framerate as your monitor's refresh. And by the way, why would you need more? Just try rendering some plasma or another “real” demo effect and you'll see your framerate drop as you approach the limits of what OpenGL and your hardware can do.
Cheers,
St0fF.

At 17:59 16.04.2002 -0400, you wrote:

Why can't I get more than 100 fps with OpenGL (doing nothing), while Direct3D shows 200 fps (also doing nothing)?

=========================================================
!!! COMMODORE – anything else is just a COMPROMISE !!!

St0fF 64 / N30PLA51A - 4 Love, Code, Composition & Design

What’s the difference between a VIC-Chip and a GeForce-GPU?
The Age …

I recommend that you use the OpenGL forums at www.opengl.org for these types of questions. Once you start doing OpenGL, SDL is pretty much out of the picture: it consumes none of your processor time, it only sets up your window…

----- Original Message -----
From: IronRaph at aol.com
To: sdl at libsdl.org
Sent: Tuesday, April 16, 2002 2:59 PM
Subject: [SDL] open gl fps!

Why can't I get more than 100 fps with OpenGL (doing nothing), while Direct3D shows 200 fps (also doing nothing)?

That’s something you can change via:

Display Properties -> settings -> advanced -> Your Card’s Name -> additional
properties -> openGL settings -> Vertical sync.

Just make sure that vertical sync is “off by default”.

Depending upon your drivers, you may not have exactly that series of steps to reach your vertical sync control. But hunt around and you'll find it…

----- Original Message -----

From: st0ff@gmx.net (Stefan Hubner)
To:
Sent: Tuesday, April 16, 2002 3:57 PM
Subject: Re: [SDL] open gl fps!

Hi!
Probably you’re running NVidia-drivers, and those make the rendering
achieve the same framerate as your monitor. And btw., why would you need
more? Just try rendering some plasma or another “real” demo effect and
you’ll see your framerate drop, as this reaches OpenGL’s possibilities.
Cheers,
St0fF.

At 17:59 16.04.2002 -0400, you wrote:

Why can't I get more than 100 fps with OpenGL (doing nothing), while Direct3D shows 200 fps (also doing nothing)?



SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

It should be mentioned that, apart from benchmarking craze, this doesn't do anything useful. If it has any effect at all, it introduces tearing, which reduces image quality. You don't gain anything from running at an fps higher than your monitor's refresh rate.

VSync was invented for a reason.

cu,
Nicolai

On Wednesday, 17 April 2002 01:48, Blake Senftner wrote:

That’s something you can change via:

Display Properties -> settings -> advanced -> Your Card’s Name ->
additional properties -> openGL settings -> Vertical sync.

Just make sure that vertical sync is “off by default”.

Depending upon your drivers, you may not have exactly that series of steps
to reach your vertical sync control. But hunt around and you’ll find it…

OK, so OpenGL is faster than Direct3D :)

Don't forget that when developing software that needs to run at the highest possible frame rates, having vsync off during development gives the developer a sense of what's eating CPU as features are developed. Sure, it's not the best measurement (actually a somewhat poor one), but any measurement is better than none at all.

-Blake

----- Original Message -----

From: prefect_@gmx.net (Nicolai Haehnle)
To:
Sent: Wednesday, April 17, 2002 6:06 AM
Subject: Re: [SDL] open gl fps!

On Wednesday, 17 April 2002 01:48, Blake Senftner wrote:

That’s something you can change via:

Display Properties -> settings -> advanced -> Your Card’s Name ->
additional properties -> openGL settings -> Vertical sync.

Just make sure that vertical sync is “off by default”.

Depending upon your drivers, you may not have exactly that series of steps to reach your vertical sync control. But hunt around and you'll find it…

It should be mentioned that, apart from benchmarking craze, this doesn’t
do
anything useful. If it has any effect at all it introduces tearing which
reduces image quality. You don’t gain anything from running at fps higher
than your monitor’s vsync.

VSync was invented for a reason.

cu,
Nicolai



Well, if you can do double the VSYNC, you can do nifty effects like
blur, no?

-bill!

On Wed, Apr 17, 2002 at 03:06:19PM +0200, Nicolai Haehnle wrote:

It should be mentioned that, apart from benchmarking craze, this doesn’t do
anything useful. If it has any effect at all it introduces tearing which
reduces image quality. You don’t gain anything from running at fps higher
than your monitor’s vsync.

VSync was invented for a reason.

No, nifty effects like blur are done by combining offscreen surfaces.
You’re still only modifying your primary surface at <=vsync.

Nicolai is right… there's no point in swapping buffers at any rate faster than the vsync – if anything, the rendering will look worse.

----- Original Message -----

From: nbs@sonic.net (Bill Kendrick)
To:
Sent: Wednesday, April 17, 2002 11:44 AM
Subject: Re: [SDL] open gl fps!

On Wed, Apr 17, 2002 at 03:06:19PM +0200, Nicolai Haehnle wrote:

It should be mentioned that, apart from benchmarking craze, this doesn't do anything useful. If it has any effect at all it introduces tearing which reduces image quality. You don't gain anything from running at fps higher than your monitor's vsync.

VSync was invented for a reason.

Well, if you can do double the VSYNC, you can do nifty effects like
blur, no?

-bill!



No, nifty effects like blur are done by combining offscreen surfaces.
You’re still only modifying your primary surface at <=vsync.

Well, yes, but it's nice to know that your video card can render fast enough to DO this. :)

Nicolai is right… there’s no point in swapping buffers at any rate faster
than the vsync – if anything, the rendering will look worse.

I don't disagree with this. ;) When are PCs going to have decent sprites, btw? ;)

-bill!

On Wed, Apr 17, 2002 at 11:52:06AM -0700, Eron Hennessey wrote:

Sprites…? *remembers the Amiga days of abusing sprite channels to create nice effects without burning blitter power…*

IMHO, sprites never were anything but a performance hack. Fast h/w accelerated rendering into a frame buffer is so much more flexible. :)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
-------------------------------------> http://olofson.net -'

On Wednesday 17 April 2002 21:07, nbs wrote:

On Wed, Apr 17, 2002 at 11:52:06AM -0700, Eron Hennessey wrote:

No, nifty effects like blur are done by combining offscreen surfaces.
You’re still only modifying your primary surface at <=vsync.

Well, yes, but it's nice to know that your video card can render fast enough to DO this. :)

Nicolai is right… there’s no point in swapping buffers at any rate
faster than the vsync – if anything, the rendering will look worse.

I don't disagree with this. ;) When are PCs going to have decent sprites, btw? ;)

Well, yes, but it's nice to know that your video card can render fast enough to DO this. :)

Well, unless you've got a super monitor that's going >100 Hz, it's not going to be a problem. I really don't know that many people who are running refresh rates much higher than 85 Hz, which means that they'll max out at 85 fps.

…besides that fact, I don't know of many games that actually run at fps > 60. Most are lucky to actually get 30 fps…

So what is your monitor refresh rate set at, anyway? If you're getting > 100 Hz, I'd like to know what your monitor/video card config is, and at what screen resolution.

Hmm, according to my monitor, I have a 100Hz refresh rate. How can any
OpenGL implementation seriously be faster than that?

It's ponderous.

On Wed, Apr 17, 2002 at 08:36:04AM -0400, IronRaph at aol.com wrote:

kk so open gl is faster than Direct3D :)


Joseph Carter I N33D MY G4M3Z, D00D!!!111!!
(Just … don’t ask)

Internet censorship. Because your children need to be
protected from naked women, medical procedures, diverse
cultures, and violent video games.
(but information on building bombs, stealing cable, and
manufacturing drugs is okay…)


Actually, what really helps is to have a usec counter and save the value at the beginning of a frame, then after each stage of rendering. It's a series of subtractions to figure out how long any particular thing took, and SDL gives you msecs, which aren't precise enough for partial-frame timings in OpenGL with decent hardware.

Getting usec in Linux is easy. In Win32 it's a real PITA, since you can't depend on the supplied functions to be at all reliable. Other platforms vary significantly in ease of getting proper timing data. Some platforms don't even provide the mechanisms necessary to get usec timing. Nice, isn't it?

On Wed, Apr 17, 2002 at 08:36:21AM -0700, Blake Senftner wrote:

Don’t forget that when developing software that needs to run at the highest
possible frame rates, having vsync off during development gives the
developer a sense of what’s eating CPU as features are developed. Sure it’s
not the best measurement, actually somewhat poor, but any measurement is
better than none at all.


Joseph Carter No conceit in my family

Yeah, I looked at esd and it looked like the kind of C code that an
ex-JOVIAL/Algol '60 coder who had spent the last 20 years bouncing
between Fortran-IV and Fortran '77 would write.


This thread is somewhat off topic, but anyhow…

< Well, if you can do double the VSYNC, you can do nifty effects like
< blur, no?

NO! That's what things like the accumulation buffer are for.

< When are PCs going to have decent
< sprites, btw? ;)
Why would you need sprites when you can get exactly the same effect, with even more possibilities, by just rendering a textured quad? (Sprites had to have a certain size; textured quads don't, only the texture has some size limitations.)
Now just a comparison:
on the commodore you had to:
lda # 3
sta $dd01 ; set i/o for video bank
lda # (data_in_memory)/0x4000
sta $dd00 ; set proper video bank
lda # ((memaddress)&(0x3fff))/0x40
sta spritepointer_in_proper_bank
lda # spritepos_x_lsb
sta $d000 ;lets use sprite #0
lda # spritepos_x_msb
ora $d010
sta $d010
lda # spritepos_y
sta $d001
lda #1
sta $d015 ; enable sprite #0
lda #(multicolor ? 1 : 0)
sta $d01b ; not sure any more about the following addresses, but it's just to show the way…
lda #(x_expand ? 1 : 0)
sta $d017
lda #(y_expand ? 1 : 0)
sta $d014
lda # backgroundcolor
sta $d022
lda # foregroundcolor
sta $d028
if(multicolor) {
lda # multicol_1
sta $d023
lda # multicol_2
sta $d024
}
… and now the sprite would’ve been visible …
In OpenGL… make sure the texture is already uploaded (the C64 example also had the sprite pixel data at some place in memory…):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glColor4ub(255, 255, 255, 255);
glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2f(-sprite_x, -sprite_y);
    glTexCoord2i(0, 1); glVertex2f(-sprite_x, spritesize_y - sprite_y);
    glTexCoord2i(1, 1); glVertex2f(spritesize_x - sprite_x, spritesize_y - sprite_y);
    glTexCoord2i(1, 0); glVertex2f(spritesize_x - sprite_x, -sprite_y);
glEnd();
Now please tell me again, why would you need sprites???

< IMHO, sprites never were anything but a performance hack.
Huh? Sprites were the first h/w accelerated textures on Commodore computers. With a damn lot of limitations: always the same size (except for the possibility of doubling the size in x, y, or both), a max of 4 colors…

< Hmm, according to my monitor, I have a 100Hz refresh rate. How can any
< OpenGL implementation seriously be faster than that?
By not drawing that much… if you just draw one triangle and then update the screen, and if you have vsync_by_default turned off, you might get something close to 1000 fps. It just depends on your graphics card.

Now just a comparison:
on the commodore you had to:
lda # 3
sta $dd01 ; set i/o for video bank
lda # (data_in_memory)/0x4000
sta $dd00 ; set proper video bank
lda # ((memaddress)&(0x3fff))/0x40
sta spritepointer_in_proper_bank

Mmm, 6502 assembler. After BASIC, that was my first real programming language. I still frequently tend to think in assembly language, for good or bad. I even like to understand the final machine code generated when my C/C++ code compiles.

I gotta get a life.

-Roy

[…]

< IMHO, sprites never were anything but a performance hack.
Hae? Sprites were the first h/w accelerated textures on commodore
computers. With a damn lot of limitations - always same size (except
for the possibility of doubling size in either x or y or both), a max
of 4 colors …

Well, that's exactly what I meant. Limitations, general pain, and then more limitations, all because software blitting would have been way too slow… “Performance hack.”

//David Olofson — Programmer, Reologica Instruments AB

On Thursday 18 April 2002 02:53, St0fF 64 wrote:

[…usec timing for benchmarking…]

If it’s only for benchmarking during development…

—8<------------------------------------------------------

#if defined(WIN32) || defined(i386)

inline int timestamp(void)
{
    unsigned long long int x;
    asm volatile (".byte 0x0f, 0x31" : "=A" (x));
    return x >> 8;
}

#else
#ifdef HAVE_GETTIMEOFDAY

#include <sys/time.h>

inline int timestamp(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000000 + tv.tv_usec;
}

#else

inline int timestamp(void)
{
    return SDL_GetTicks();
}

#endif
#endif

#else

#define DBGT(x)

#endif
------------------------------------------------------>8—

(From sound/audio.c of Kobo Deluxe.)

Note that I’m only using these values for relative timing; that is, I put
the delta between partial timestamps in relation to the delta between
"frames". The result is the fraction of the total CPU time available that
is consumed by each part.

(Hit F9 in one of the 0.4-pre versions of Kobo Deluxe to see these figures for the audio engine. Probably not too informative for most people, but there's a nice color bar effect in there! ;)

And yeah, testing only for WIN32 before using x86 asm code isn't really correct, I guess… What's the correct way of detecting Windows running on some other CPU? (Are those versions still alive…?)

//David Olofson — Programmer, Reologica Instruments AB

On Wednesday 17 April 2002 22:58, Joseph Carter wrote:

At 16:37 18.04.2002 +0200, you wrote:

On Thursday 18 April 2002 02:53, St0fF 64 wrote:

[…]

< IMHO, sprites never were anything but a performance hack.
Hae? Sprites were the first h/w accelerated textures on commodore
computers. With a damn lot of limitations - always same size (except
for the possibility of doubling size in either x or y or both), a max
of 4 colors …

Well, that’s exactly what I meant. Limitations, general pain and then
limitations - all because software blitting would have been way too
slow… “Performance hack.”

Great, another problem with non-native speakers, or so. I'm German, and didn't really understand what you meant by “performance hack”…

cheers!
