SDL2 and Software Rendering

I’m working on porting my Wolfenstein 3D source port from SDL1 over to
SDL2. One thing that isn’t clear in the migration guide is how to
handle 8-bit software rendering with SDL2.

Currently I have an 8-bit off-screen surface which the engine renders
to. Previously I would just call SDL_BlitSurface to bring it on screen
and then finally SDL_Flip. Now it seems I have to use
SDL_CreateTextureFromSurface, SDL_RenderCopy, SDL_RenderPresent, and
finally SDL_DestroyTexture. On the surface this seems a lot less
efficient, but it could very well be what SDL1 was doing the whole time
anyway, since the performance seems to be roughly the same.

Is this the correct way to do things, or is there a way I can reuse a
texture between frames? The other solutions I could think of are to
write my own 8-bit to 32-bit conversion code and use a streaming
texture, possibly avoiding the SDL_Surface altogether, or to blit to an
intermediate surface to do the conversion and hope that the texture’s
pitch matches the surface pitch so I can memcpy.

  • Blzut3

My game is software rendered, and it does pretty much what you said in
the last paragraph: I copy the framebuffer to a streaming texture and
then call SDL_RenderCopy with it (bonus: scaling gets done by the
GPU). I don’t use surfaces at all. It seems to be the optimal route.

But yeah, you’ll need to convert from 8-bit to 32-bit on the fly. You
could also try just converting all sprites to 32-bit when loaded and
save that time, but then you lose the ability to do palette tricks
(though then you aren’t limited to 256 colors anymore).


SDL2 has render targets: special textures that can be bound in place of the screen as the current rendering target and drawn to. You can use them for offscreen rendering.
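
A minimal sketch of that (the renderer must have been created with
SDL_RENDERER_TARGETTEXTURE support; the size is just an example):

    /* Draw into a texture instead of the window, then put it on screen. */
    SDL_Texture *rt = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                        SDL_TEXTUREACCESS_TARGET, 640, 480);
    SDL_SetRenderTarget(renderer, rt);   /* subsequent draws go into 'rt' */
    /* ... SDL_RenderCopy()/SDL_RenderDraw*() calls here ... */
    SDL_SetRenderTarget(renderer, NULL); /* back to the window */
    SDL_RenderCopy(renderer, rt, NULL, NULL);
    SDL_RenderPresent(renderer);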

Mason

You might be interested in this:
http://wiki.libsdl.org/moin.fcg/SDL_CreateSoftwareRenderer?highlight=(\bCategoryRender\b)|(CategoryEnum)|(CategoryStruct)|(SGFunctions)
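
For reference, that function points the SDL2 render API at a plain
SDL_Surface; a minimal sketch (the size and masks are example values):

    /* Render on the CPU into an ordinary 32-bit surface. */
    SDL_Surface *target = SDL_CreateRGBSurface(0, 640, 480, 32,
                                               0x00FF0000, 0x0000FF00,
                                               0x000000FF, 0xFF000000);
    SDL_Renderer *soft = SDL_CreateSoftwareRenderer(target);

    SDL_SetRenderDrawColor(soft, 255, 0, 0, 255);
    SDL_RenderClear(soft);       /* draws into 'target', no GPU involved */
    SDL_RenderPresent(soft);
    /* 'target' now holds the rendered pixels and can be blitted or
       uploaded to a texture. */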


Even if SDL_UpdateTexture()’s documentation says that it is slow, I
would try it anyway. You might get a decent speed boost if you don’t
have to allocate texture memory each frame. If you were using OpenGL,
I’d say the same thing about glTexSubImage2D().
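
For reference, with a persistent texture the whole per-frame upload is
a couple of calls; a minimal sketch (tex, pixels32, and FB_W are
hypothetical names, assuming a 32-bit CPU-side buffer):

    /* Reuse 'tex' across frames; only the pixel data is re-uploaded. */
    SDL_UpdateTexture(tex, NULL, pixels32, FB_W * (int)sizeof(Uint32));
    SDL_RenderCopy(renderer, tex, NULL, NULL);
    SDL_RenderPresent(renderer);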

Jonny D

It’s slow in the sense that “data needs to be transferred to the video
hardware”. It shouldn’t be noticeably worse than blitting to the
screen, if at all (provided you’re using a streaming texture).


> But yeah, you’ll need to convert from 8-bit to 32-bit on the fly.

I was in the same boat with Postal 1; it always writes to a 640x480,
8-bit, paletted surface.

I had moved it from SDL 1.2 to SDL 2.0, using an 8-bit shadow surface,
which I would blit to the window surface. Postal uses dirty rects and
such, so it would do this reasonably efficiently, at least on the
application side of things.

This worked, but it used a lot of CPU on the Mac I tried it on. I hadn’t
debugged it or even sanity-checked the code to make sure I wasn’t doing
something stupid.

I decided to move it to OpenGL directly:

  • We keep a 640x480 memory buffer. The game renders into this, 8 bits
    per pixel, thinking it’s the usual SDL 1.2 video surface.
  • We also have a 640x480 GL_R8 texture, which we update once a frame
    with glTexSubImage2D() from that memory buffer. We don’t try to do dirty
    rectangles. At the end of the frame, the whole texture is updated,
    unconditionally, because it was easier this way.
  • We have a 256x1 GL_RGBA texture. It’s the palette.
  • There’s a VBO that has the vertex array we use to draw a single quad
    with this texture.
  • There’s GLSL to make it take the 8-bit texture and the palette
    texture, and render the proper colors to the framebuffer (a sketch
    follows this list).
  • For extra credit, we have an FBO that we can then do a GL_LINEAR
    stretch-blit to the real window system framebuffer, so the game doesn’t
    have to be 640x480 anymore. Since the 8-bit values have to line up
    exactly for the palette to work, that has to render at 640x480 to get
    the right colors, but then you can do a nice linear filter when going to
    the screen. Now it looks like it would if you dropped your monitor’s
    resolution to 640x480 to fill the whole display, but the Steam Overlay
    still renders at full resolution.
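
A sketch of what that fragment shader can look like, as a C string
(illustrative only, not the actual Postal code):

    /* Fragment shader: look up each 8-bit index in the palette texture. */
    static const char *palette_frag_src =
        "uniform sampler2D indexTex;   /* 640x480 GL_R8: palette indices */\n"
        "uniform sampler2D paletteTex; /* 256x1 GL_RGBA: the palette     */\n"
        "varying vec2 uv;\n"
        "void main() {\n"
        "    float index = texture2D(indexTex, uv).r; /* 0.0 .. 1.0 */\n"
        "    /* Sample the center of palette texel round(index*255): */\n"
        "    gl_FragColor = texture2D(paletteTex,\n"
        "                             vec2((index * 255.0 + 0.5) / 256.0, 0.5));\n"
        "}\n";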

This took a few hours to implement, got some cool benefits, and this Mac
uses about 4% of the CPU to play the game.

–ryan.

Just as an interesting aside, this is both how wine handles 8-bit
rendering in ddraw and how Microsoft’s new Age of Empires II HD game
works (though with d3d9). It’s a really cool technique, and it allows
you to change palettes easily and do some other cool tricks, too.

– David

This whole discussion makes me think:
For software rendering, is SDL 1.2 still the best bet?
I mean, does the software rendering within SDL2 (either using
SDL_CreateSoftwareRenderer or the software fallback of
SDL_CreateRenderer, not sure if there are differences) perform worse
than the one in SDL 1.2? I really want to know the answer…

Cheers!
------------------------
Rodrigo Cardoso Rocha
@RodrigoRodrigoR - twitter.com/RodrigoRodrigoR
Chibata Creations - chibatacreations.com


My football game Eat the Whistle ( http://sourceforge.net/projects/etw/ )
was migrated to SDL 2.0 a few months ago to go mobile (there are now
iOS and Android versions available). Internally it uses an 8-bit chunky
pixel buffer where all the gfx operations take place, which is then
blitted to the screen.

In the SDL 1.2 version I had to keep the 8-bit bitmap at the size of
the screen, or manually scale it while rendering.

The 2.0 target improved on this: using SDL_RenderSetLogicalSize() I can
keep the size of the objects on the screen under control.

Here is a “cut & paste” of the screen initialization (error checking
omitted for brevity):

    screen = SDL_CreateWindow("ETW", SDL_WINDOWPOS_UNDEFINED,
                              SDL_WINDOWPOS_UNDEFINED, 0, 0,
                              SDL_WINDOW_BORDERLESS |
                              SDL_WINDOW_FULLSCREEN_DESKTOP |
                              SDL_WINDOW_SHOWN);
    renderer = SDL_CreateRenderer(screen, -1, 0);
    SDL_RenderSetLogicalSize(renderer, WINDOW_WIDTH, WINDOW_HEIGHT);

To get some speed on slower devices I use 16-bit textures:

    screen_texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGB565,
                                       SDL_TEXTUREACCESS_STREAMING,
                                       WINDOW_WIDTH, WINDOW_HEIGHT);

This is how I transfer my 8-bit chunky pixel buffer to the texture:

    void blitScreen16(uint16_t *dst)
    {
        uint8_t *src = main_bitmap;
        int x, y;

        for (y = bitmap_height; y > 0; y--)
            for (x = bitmap_width; x > 0; x--)
                *(dst++) = palette16[*src++];
    }
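
blitScreen32 isn’t shown in the post; presumably it’s the same loop
with a 32-bit palette table, along these lines (palette32 is a
hypothetical name, not from the actual ETW source):

    /* Presumed 32-bit variant of the same palette-lookup loop. */
    void blitScreen32(uint32_t *dst)
    {
        uint8_t *src = main_bitmap;
        int x, y;

        for (y = bitmap_height; y > 0; y--)
            for (x = bitmap_width; x > 0; x--)
                *(dst++) = palette32[*src++];
    }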

And here is the display code:

    if (!SDL_LockTexture(screen_texture, NULL, &pixels, &pitch)) {
        if (pitch == bitmap_width * 2)
            blitScreen16(pixels);
        else if (pitch == bitmap_width * 4)
            blitScreen32(pixels);
        else {
            D(bug("Unsupported pitch: %d (width %d)\n", pitch, bitmap_width));
        }

        SDL_UnlockTexture(screen_texture);
        SDL_RenderCopy(renderer, screen_texture, NULL, NULL);
        SDL_RenderPresent(renderer);
    }

This is as fast on desktop OSes as what I was doing before
(SDL_BlitSurface + SDL_Flip).


Bye,
Gabry


In your case, you almost have to use SDL 1.2. That isn’t to say that SDL 2 couldn’t be patched fairly easily to support palettes on textures.

Shy of doing this, the next best option is to use a streaming texture,
but you’d still need to convert the pixel format on the CPU every frame.

------------------------
Nate Fries

Considering modern hardware doesn’t support palettes at all (VGA
compatibility aside, and even that is thrown away on UEFI systems),
I’d say that letting the software handle the palettes on its own is a
better option in the long term. It will avoid headaches later when
there isn’t any other option but to go true color.

Gabriele: how is the performance of that on phones? Just curious about
how much CPU it uses up. I assume the framebuffer is low resolution?

Oh, I just remembered: my game normally uses the GPU to do the
scaling, but there’s a switch that forces it to use the software
renderer. On desktop systems it’s still pretty much full framerate
(even on my mum’s crappy computer); on netbooks it seems to get tricky
though. I know because I accidentally swapped those at some point and
had to showcase the game on a netbook… yeah, bad idea. Bonus points
for it being a debug SDL build (so no optimizations!). Though when the
media player in the background stopped playing it went back to 60FPS…
Still, you have been warned: stick with the GPU for the final scaling
if you can; even integrated chips will be fast for something like this
(it’s just a quad after all!).


> Gabriele: how is the performance of that on phones? Just curious about
> how much CPU it uses up. I assume the framebuffer is low resolution?

The framebuffer size depends on the aspect ratio and the device.

On an iPhone it’s 480x320, on an iPhone 5 it’s 568x320, on an iPad it’s
512x384, and on an Android device it can be anything from 320x240 to
800x480.

I keep the aspect ratio correct and try to keep the objects big enough
to be easily distinguishable. If the scaling is not pixel perfect (like
it is on iOS) I use:

    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");

… to improve the quality.

Scaling is obviously done by the GPU.

I get a decent 30fps even on the lowest-end phone I’ve tried, an HTC
Hero; iOS phones (3GS and up) have no problem reaching 60fps.

Desktop speed is not an issue.

This game was born 15 years ago with a 14MHz Amiga 1200 as the target,
where it had to do at least 25fps in 320x256. The portable SDL port I
made in Y2K was a bit heavier, but it was able to achieve 50fps in
640x480 on a 90MHz Pentium (50fps was the frame cap, the game being
designed for PAL displays).

Actually, on my 8-year-old Linux desktop (P4 Dual) the game takes 5% of
one CPU in a 640x480 window with the SDL2 backend.

Anyway, here is the code of the video module. It’s a bit “dirty”, but
it is 15 years old and the various porting layers have made it a bit
messy; I have resisted the urge to rewrite it because sadly I have no
time to do it :)

http://etw.svn.sourceforge.net/viewvc/etw/trunk/etw/os_video.c?revision=345&view=markup

–
Bye,
Gabry

Looking at the rest of the replies, this does indeed look like the way
to go. It’s kind of a shame that SDL2 doesn’t have a function to do the
conversion automatically, but obviously it’s not that difficult to do
manually.

I suppose this way I could also switch to using a column-major
off-screen buffer and potentially gain a little extra performance out
of the renderer while rendering walls and sprites. Not sure if the
floor/ceiling drawing code would simply negate this advantage though.
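
For illustration, the conversion pass with a column-major source buffer
might look like this (hypothetical names, reusing the streaming-texture
sketch from earlier in the thread; the transpose happens for free
during the palette lookup):

    /* Engine buffer stored column-major (x * FB_H + y); writes to the
       locked texture stay sequential, reads stride through the buffer. */
    for (int y = 0; y < FB_H; y++) {
        Uint32 *dst = (Uint32 *)((Uint8 *)pixels + y * pitch);
        for (int x = 0; x < FB_W; x++)
            dst[x] = palette[framebuffer[x * FB_H + y]];
    }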

I do like Ryan’s suggestion to do the palette conversion in OpenGL;
however, I will still need the pure software fallback for where OpenGL
is not available. When working on a game like Wolfenstein 3D, you’re
kind of expected to support old hardware. (Speaking of which, does SDL2
drop support for Windows 9x? I haven’t tried it out on Windows yet.)

With that said, due to issues with the input system in ECWolf, I’ll
still have to stick with SDL 1.2 for a while, so I won’t really be able
to test the performance of the various methods suggested.

  • Blzut3

Doesn’t Wolfenstein 3D just render two filled rectangles for the floor
and ceiling? :P

And I think SDL2 demands Windows XP minimum, but you may want to try;
maybe it still works on Windows 9x (albeit with some functionality not
being usable). It still supports DX9, so at least that part should be
covered for Windows 98 (95 is out of luck though, at least for audio
support).


Vanilla Wolf3D does indeed just clear the frame buffer before drawing.
However, my source port, ECWolf ( http://maniacsvault.net/ecwolf/ ), is
designed to be a general-purpose ray-casting engine and supports full
texture mapping, among many other graphical improvements. :P

I suppose I could give SDL2 a try on Windows 98 and report back. I’m
not especially concerned about Windows 95, as it seems all the 9x
holdouts are using 98. Ultimately, if SDL2 does drop 9x, I may end up
supporting both SDL 1 and 2 and using SDL2 only for 64-bit Windows
builds or something, to keep things simple.

  • Blzut3

Well, as far as I know textured floors and ceilings are still drawn
vertically, so hopefully it shouldn’t be much of an issue… (really,
this is a case where doing it horizontally or vertically is not going
to change much, cache locality aside)



I doubt that Sam or Ryan are interested in providing official Win9x
support, but if SDL won’t work straight out of the box there, then I’m
sure they’d accept any patches that provide support for it (and it
shouldn’t be too hard, since at least some of what’s missing should be
grabbable from SDL 1.2).

As for OpenGL, have you tried using TinyGL, or something similar? I
don’t know of any software OpenGL implementations that provide
programmable shaders, but there is a TinyGL port to SDL underway, so if
nothing else you could write your own shader abstraction and use TinyCC
to compile C shaders when you can’t use proper ones.

> I doubt that Sam or Ryan are interested in providing official Win9x
> support, but if SDL won’t work straight out of the box there,

It almost certainly won’t, and I doubt the time and mess it would
require is worth it.

There is NO reason to be supporting any Windows older than XP in 2013
(and it’s sad that I can’t say “Vista” there, but oh well).

–ryan.

These days many programs won’t even support XP; they demand 7
(technically most of those demand Vista, since that’s when the big API
additions were made, but that’s beside the point). And Microsoft’s XP
support ends in 2014. That said, for Windows 98 there are hacks that
backport some of the new XP features and allow XP programs to run on
it, and if you’re still running 98 for non-legacy programs you’re
probably using that.

What I wonder, though, is whether SDL is going to drop support for
32-bit Windows if XP support is killed in the far future. I’m not sure
that even matters with the current code, but it’d be interesting to
see. On XP most people use 32-bit, but on 7 most people use 64-bit. In
particular, this would affect which DirectX versions are available.
