SDL1.3 - surface for created windows?

Hi! Goodmorrow! etc!

I am attempting to get my head round SDL 1.3. I have got myself an extra
window coming up alongside the standard SDL window. When creating the
standard SDL window we create a screen surface and then blit to that, so now
that I have created a new window, how do I assign it a plottable surface? I
have been poking around the header files but can’t seem to come up with much.

Thanks!

This is as far as I have got… dunno what’s going on, I have to stop now
before I go completely insane

sdlwin1 = SDL_CreateWindow("window1", 100, 100, 200, 200, SDL_WINDOW_SHOWN);

SDL_CreateRenderer(sdlwin1, -1, SDL_RENDERER_ACCELERATED);
SDL_SelectRenderer(sdlwin1);

headtex = SDL_CreateTextureFromSurface(0, head);

headlesstex = SDL_CreateTextureFromSurface(0, headless);

SDL_RenderCopy(headlesstex, NULL, NULL);

SDL_RenderPresent();

I thought the consensus was that it is too late.

Jonny D

On Thu, May 7, 2009 at 12:35 PM, Neil White wrote:

This is as far as I have got… dunno what’s going on, I have to stop now
before I go completely insane

Oh, we’re all mad here, as the cat put it… :-D

On Thu, May 7, 2009 at 12:35 PM, Neil White wrote:

I am attempting to get my head round SDL 1.3. I have got myself an extra
window coming up alongside the standard SDL window. When creating the
standard SDL window we create a screen surface and then blit to that, so now
that I have created a new window, how do I assign it a plottable surface? I
have been poking around the header files but can’t seem to come up with much.

You called SDL_SetVideoMode? From what I understand of this, you get
two windows, and that’s one more than what you want? SDL_SetVideoMode
and surfaces is “old and busted”, SDL_CreateWindow and textures is the
“new hotness”. If you call both, well, you get two windows. :-)

SDL_SetVideoMode is in SDL_compat.h, in case you haven’t noticed,
which is the SDL 1.2 backward compatibility layer (which translates
into SDL 1.3 calls). If you mix the two, it’ll work, but you might be
surprised at moments, like what is happening to you, possibly? The new
way is to use textures rather than surfaces, as much as possible (I
think just image loading still uses only surfaces? I don’t use that
bit).

If you want to be able to lock a texture and draw directly into it
(pixel access), you have to make it “streamable” when you create it,
but if you’re able to draw by using only the SDL functions (without
direct pixel access, but rather blitting other textures and so on), a
"static" texture could be faster (depends on the rendering backend).

In your code sample, what’s headtex used for? You create it, then… Nothing?


http://pphaneuf.livejournal.com/

2009/5/7 Pierre Phaneuf


You called SDL_SetVideoMode? From what I understand of this, you get
two windows, and that’s one more than what you want? SDL_SetVideoMode
and surfaces is “old and busted”, SDL_CreateWindow and textures is the
“new hotness”. If you call both, well, you get two windows. :-)

SDL_SetVideoMode is in SDL_compat.h, in case you haven’t noticed,
which is the SDL 1.2 backward compatibility layer (which translates
into SDL 1.3 calls). If you mix the two, it’ll work, but you might be
surprised at moments, like what is happening to you, possibly? The new
way is to use textures rather than surfaces, as much as possible (I
think just image loading still uses only surfaces? I don’t use that
bit).

I tried it both with the original SDL video mode and without.

If you want to be able to lock a texture and draw directly into it
(pixel access), you have to make it “streamable” when you create it,
but if you’re able to draw by using only the SDL functions (without
direct pixel access, but rather blitting other textures and so on), a
"static" texture could be faster (depends on the rendering backend).

In your code sample, what’s headtex used for? You create it, then…
Nothing?

I have previously been loading BMPs to surfaces and then am using the
get-texture-from-surface function.

Basically what is freaking me out is the whole “no more screen to plot to”
thing. I create a window, then set that as the rendering target… then dunno,
is the general idea to have a texture for the window, use that as you would
the screen surface, then put that in the window when plotting is over?

On Fri, May 8, 2009 at 4:59 AM, Neil White wrote:

Basically what is freaking me out is the whole “no more screen to plot to”
thing. I create a window, then set that as the rendering target… then dunno,
is the general idea to have a texture for the window, use that as you would
the screen surface, then put that in the window when plotting is over?

The idea is to better reflect what happens with modern hardware, where
the window content is either completely inaccessible (like it’s always
been on X11, and SDL just faked it), or generally difficult/expensive
to access (might be encoded in a certain way, or small read/writes
over the bus just suck and require hardware accelerator flushes, etc),
not to mention that it’s always been kind of unsafe even when you
could do it, in a windowed environment.

So what you get is more representative of what the hardware likes to
do. You get textures that you can blit to windows, a few other drawing
operations, and that’s it. After that, if you liked the old way, you
can do what SDL 1.2 did (and what the compatibility layer does), and
fake it, by creating a single streamable texture the size of the
window, lock it, draw to it, then unlock it and blit the whole thing
to the screen (like SDL_Flip did) or just the parts that you changed
(like SDL_UpdateRect). You’ll get similar performance to SDL 1.2,
which might be the best possible on the platform anyway (if you’re on
something like X11), or much slower than what would be possible (the
OpenGL backend can do much better if used “properly”, I think?).
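
A rough sketch of that “fake a screen surface” approach, using the call shapes from this thread (renderer selected per window, render calls without a renderer argument); the SDL_CreateTexture and SDL_LockTexture argument lists are assumptions from the 1.3 headers of the time, so check SDL_video.h in your snapshot:

/* One streamable texture the size of the window stands in for the old screen surface. */
screentex = SDL_CreateTexture(SDL_PIXELFORMAT_ARGB8888,
                              SDL_TEXTUREACCESS_STREAMING,
                              window_w, window_h);

void *pixels;
int pitch;

/* Lock, plot pixels just as you would have on the screen surface, unlock. */
if (SDL_LockTexture(screentex, NULL, 1, &pixels, &pitch) == 0) {
    /* ... write into pixels; rows are 'pitch' bytes apart ... */
    SDL_UnlockTexture(screentex);
}

/* Push the whole thing to the window, like SDL_Flip used to. */
SDL_RenderCopy(screentex, NULL, NULL);
SDL_RenderPresent();

(window_w and window_h are just placeholders for the window size.)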

Just consider what you’re doing when you “plot to the screen”: do you
refresh the dirty parts of the background, then put the new characters
in their updated position, for example? Put the background and the
characters art in textures (they could be static, if you’re not
updating them, which allows for more possibilities for acceleration),
then use a bunch of SDL_RenderCopy and SDL_RenderFill to do the plotting
(there are other functions, but these are the “most likely to be fast” ones).
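
And a sketch of that per-frame plotting, with the same caveats; backgroundtex and spritetex stand for static textures made earlier with SDL_CreateTextureFromSurface, and sprite_x/sprite_y/sprite_w/sprite_h are placeholders:

/* Repaint the background, which covers last frame's dirty areas. */
SDL_RenderCopy(backgroundtex, NULL, NULL);

/* Draw the character at its updated position. */
SDL_Rect dst = { sprite_x, sprite_y, sprite_w, sprite_h };
SDL_RenderCopy(spritetex, NULL, &dst);

/* Show the finished frame. */
SDL_RenderPresent();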


http://pphaneuf.livejournal.com/

I think many developers haven’t yet got around the idea that nowadays
graphics cards are completely different from what they were some years ago.

Many of the old 2D accelerated operations are no longer there. If you are
lucky you’ll get hardware cursors and the most common GUI operations
accelerated; everything else is pure 3D operations.

This is the main reason why trying to access the graphics buffers like in
the old days is so costly.

I for one welcome the changes being made for the upcoming 1.3 version.

--
Paulo


On Fri, May 8, 2009 at 2:31 PM, Paulo Pinto wrote:

I think many developers haven’t yet got around the idea that nowadays
graphics cards are completely different from what they were some years ago.

Indeed. I often find that game developers, in particular, tend to be
an especially conservative bunch, and “change their ways” only when
there truly is no other choice. Switching to hardware accelerated 3D
was one of these moments, and to lesser degrees, hardware
transform/lighting (remember the first GeForce?) and the switch from
fixed-function to shaders.

Even then, I remember the first missteps of hardware 3D, like the S3
ViRGE and the very first nVidia (which did not use triangle-based
geometry!), both of which were total flops and had game developers
look down at hardware 3D for a while as some useless gadget or as
snake oil…

Many of the old 2D accelerated operations are no longer there. If you are
lucky you’ll get hardware cursors and the most common GUI operations
accelerated; everything else is pure 3D operations.

Note that in games, 2D acceleration was very rarely used! SDL could
use it, mostly on Windows with the DirectDraw backend, but it’s been
replaced by a pretty much entirely software GDI backend. It was pretty
awful, though, having a game that would work fine on Windows, then
boot into Linux on the same machine and have it be unbearably slow…
So you just had to do like everyone, assume no acceleration and code
like the good old DOS days!


http://pphaneuf.livejournal.com/

Thanks for pointing that out.

Since I left university (1999), I was forced to move slowly back to Windows
as my main OS, because of some of my hobbies, so I tend not to follow Linux
developments as closely as I used to.

I assume that the only way to get 2D acceleration on Linux would be with the
XRender extension, right?

--
Paulo


Check:
http://sdl.beuc.net/sdl.wiki/FAQ_No_Hardware_Acceleration

Short: you need to be root.

I think the only reasonable way to get hardware acceleration is to
code 2D on top of OpenGL and assume the user’s card has 3D support
(hardware or software, the latter being the more problematic).

--
Sylvain
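
For what it’s worth, “2D on top of OpenGL” mostly boils down to an orthographic projection in pixel coordinates plus one textured quad per blit. A minimal fixed-function sketch, assuming a current GL context and an already-uploaded texture id; x, y, w, h, screen_w and screen_h are placeholders:

/* Pixel-coordinate projection, y growing downward like SDL's. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, screen_w, screen_h, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* One "blit": draw a textured quad at (x, y) of size w x h. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texid);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(x,     y);
glTexCoord2f(1, 0); glVertex2f(x + w, y);
glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
glTexCoord2f(0, 1); glVertex2f(x,     y + h);
glEnd();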


Thanks for the link.


On Sat, May 9, 2009 at 1:09 PM, Sylvain Beucler wrote:

Check:
http://sdl.beuc.net/sdl.wiki/FAQ_No_Hardware_Acceleration

Short: you need to be root.

The fbcon and DirectFB support for acceleration tends to be spotty,
from what I remember. Their main focus is more toward embedded
(although not for lack of wanting wider support), but they might have
something better now. Even then, you can’t run those easily from the
desktop, you have to go to a different console, and there are all sorts
of difficulties like that for a “regular game”, peaking with needing
root, yes.

I think the only reasonable way to get hardware acceleration is to
code 2D on top of OpenGL and assume the user’s card has 3D support
(hardware or software, the latter being the more problematic).

If it covers what you need, XRender could do the trick, and the
difference between having hardware acceleration and dropping to the
software fallback is much less harsh than with OpenGL, where things
really go down the drain when you don’t have hardware support.
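
If it helps to picture it, here is a tiny XRender sketch that blits a Pixmap onto a window with XRenderComposite (link with -lX11 -lXrender); it assumes the pixmap shares the window’s visual, and the function and variable names are just illustrative:

#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>

void blit_with_xrender(Display *dpy, Window win, Pixmap sprite,
                       int x, int y, int w, int h)
{
    XRenderPictFormat *fmt =
        XRenderFindVisualFormat(dpy, DefaultVisual(dpy, DefaultScreen(dpy)));
    XRenderPictureAttributes attrs;

    /* Wrap the drawables in Pictures so XRender can operate on them. */
    Picture dst = XRenderCreatePicture(dpy, win, fmt, 0, &attrs);
    Picture src = XRenderCreatePicture(dpy, sprite, fmt, 0, &attrs);

    /* PictOpSrc copies; PictOpOver would alpha-blend instead. */
    XRenderComposite(dpy, PictOpSrc, src, None, dst,
                     0, 0, 0, 0, x, y, w, h);

    XRenderFreePicture(dpy, src);
    XRenderFreePicture(dpy, dst);
}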


http://pphaneuf.livejournal.com/

hi,

XRender is kind of quick and accelerated on a bunch of machines…

There’s an xrender benchmark you can use to check yourself here:
http://lists.freedesktop.org/archives/xorg/2006-March/013784.html

Currently quicker than OpenGL-based backends on a large section of Linux
machines (Intel-based cards)… but really it’s machine- and distro-dependent
(but so is OpenGL).

cu,


I’m still having grief with this, I just don’t get the bit about updating the
windows and then updating so stuff can be seen… Anyone have a bit of example
code I can poke at? Just something simple that inits SDL, creates a window
and updates it in a loop or something. I don’t have the code I was working on
to hand as I am at work and it is on my computer at home… bah… and it looks
like I am working all day… bah.
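
Here is a rough sketch of exactly that (init, one window, a texture, and a draw loop), written against the calls used earlier in this thread. The handle types (SDL_WindowID, SDL_TextureID), the exact signatures and the "image.bmp" filename are assumptions about the 1.3 snapshot being discussed, so check the headers and the test programs in your SDL tree rather than trusting it verbatim:

#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_WindowID win;
    SDL_TextureID tex;
    SDL_Surface *bmp;
    SDL_Event ev;
    int running = 1;

    SDL_Init(SDL_INIT_VIDEO);

    /* A window, a renderer attached to it, and make that renderer current. */
    win = SDL_CreateWindow("window1", 100, 100, 200, 200, SDL_WINDOW_SHOWN);
    SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_SelectRenderer(win);

    /* Load a BMP into a surface, turn it into a texture, drop the surface. */
    bmp = SDL_LoadBMP("image.bmp");
    if (bmp == NULL)
        return 1;
    tex = SDL_CreateTextureFromSurface(0, bmp);
    SDL_FreeSurface(bmp);

    while (running) {
        while (SDL_PollEvent(&ev)) {
            if (ev.type == SDL_QUIT)
                running = 0;
        }

        SDL_RenderCopy(tex, NULL, NULL);   /* draw */
        SDL_RenderPresent();               /* show what was drawn */
        SDL_Delay(10);                     /* don't hog the CPU */
    }

    SDL_Quit();
    return 0;
}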

On Mon, May 11, 2009 at 12:19 AM, René Dudfield wrote:

Currently quicker than OpenGL-based backends on a large section of Linux
machines (Intel-based cards)… but really it’s machine- and distro-dependent
(but so is OpenGL).

I’d suspect that even on an Intel GMA, OpenGL wouldn’t be
significantly slower than XRender. The great trick with XRender is
that even if it’s software, it might still be survivable (and it’s
also more likely to be supported and accelerated anyway)…


http://pphaneuf.livejournal.com/

Anyone have a bit of example code I can poke at? Just something simple that
inits SDL, creates a window and updates it in a loop or something.

Did you look at testsprite2.c?

-Sam Lantinga, Founder and President, Galaxy Gameworks LLC

Did you look at testsprite2.c?

Nope! Missed something again! I’ll take a look when I get the chance; at
least I have managed to create a massive thread to read about X gfx systems!

What does testsprite2.c do?

hi,

these two links explain the Intel gfx driver situation on Linux fairly well…

https://wiki.ubuntu.com/X/Troubleshooting/IntelPerformance
https://wiki.ubuntu.com/ReinhardTartler/X/RevertingIntelDriverTo2.4

There’s a rewrite happening… and currently not all issues are solved.
They’re basically at the stage where it’s working for a bunch of people…
and they need to iron out the bugs and optimise it now :-)

Currently, turning off compiz and using non-OpenGL is faster for many people
at the moment.

So there’s been lots more optimisation work in the xrender/X drivers. If
their experiments work out, their in-kernel, OpenGL-based work will
eventually be quicker for 2D/video too… but we’ll see :-)

cu,
