SDL_RenderPresent software renderer

Hello, a question on the software renderer

As explained on the wiki, https://wiki.libsdl.org/SDL_RenderPresent:

"The backbuffer should be considered invalidated after each present,
do not assume that previous contents will exist between frames."

Is this also true for the software renderer?
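
For reference, the kind of loop that statement implies looks roughly
like this. A minimal sketch only; the window and renderer setup are
assumed to exist already, and the function name is just illustrative:

```c
#include <SDL.h>

/* Minimal sketch of the documented contract: the whole scene is
 * redrawn after every present, so nothing relies on what the
 * backbuffer contained last frame. Setup/teardown omitted. */
static void draw_frame(SDL_Renderer *renderer)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);      /* start from a known state          */
    /* ... draw everything here ... */
    SDL_RenderPresent(renderer);    /* backbuffer now counts as invalid  */
}
```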

Thanks.

Probably not, but I'd say you should still treat it as if it were
(especially since, under normal circumstances, there's no reason to
make your code depend on a particular renderer; the whole point is to
avoid making assumptions).

Any reason for making your code rely on the software renderer's quirks
and explicitly refusing to use the hardware-accelerated renderers when
they're available?

Also, since the software renderer doesn’t explicitly specify this behavior,
there’s no reason it couldn’t change in the future and invalidate your
code’s assumptions.
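
If the goal is to have pixels that survive between presents, one
portable route that doesn't assume anything about the backbuffer is a
render-target texture, assuming the renderer reports
SDL_RENDERER_TARGETTEXTURE (the software renderer advertises this too,
as far as I know). A rough sketch; the function names, the 640x480
size and the RGBA8888 format are just placeholders, and error checking
is skipped:

```c
#include <SDL.h>

/* Sketch: keep the drawing on a persistent target texture instead of
 * relying on the backbuffer surviving SDL_RenderPresent(). */
static SDL_Texture *create_canvas(SDL_Renderer *renderer)
{
    return SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                             SDL_TEXTUREACCESS_TARGET, 640, 480);
}

static void present_canvas(SDL_Renderer *renderer, SDL_Texture *canvas)
{
    SDL_SetRenderTarget(renderer, canvas);   /* draw onto the canvas    */
    /* ... draw only what changed since the last frame ... */
    SDL_SetRenderTarget(renderer, NULL);     /* back to the backbuffer  */
    SDL_RenderCopy(renderer, canvas, NULL, NULL);
    SDL_RenderPresent(renderer);             /* the canvas itself persists */
}
```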

Hello,

I am not refusing hardware renderers, and I have no real need for this;
I am just exploring SDL's options.

While testing the software renderer I noticed this little 'quirk', and
if it held true, it could reduce memory load and blits in my project.
Assuming the people who are (forced to be?) using the software renderer
are on low-end / embedded / unusual machines, this could benefit SDL
users targeting such systems.

I can also understand why this quirk might be best left alone; relying
on it is not in keeping with the spirit of SDL2.

2015-11-01 10:41 GMT-03:00, Marcel Bakker <mna.bakker at gmail.com>:

While testing the software renderer I noticed this little 'quirk', and
if it held true, it could reduce memory load and blits in my project.

I can see how it’d reduce blits (at least until you have fullscreen
scrolling :P) but how would that reduce the memory load?

Assuming the people who are (forced to be?) using the software renderer
are on low-end / embedded / unusual machines…

Some magnifiers don't play nice with GPU-rendered images for whatever
reason (for users with those, the only option that works is the
software renderer), so it doesn't have to be low-end hardware, it can
be just a compatibility issue. This is why Sol lets you force the
software renderer.

Now mind you, I'd argue that rendering to a smaller texture and
scaling up the final result could be way more helpful: a bit more
memory usage, but a lot fewer pixels to render. The biggest downside
is that it'll look blocky, but oh well :P (when speed is more
important than accuracy, you'll be OK with this).
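
Roughly something like this, assuming a renderer with target-texture
support; the 320x240 size and the function name are just examples, and
error handling is left out:

```c
#include <SDL.h>

/* Rough sketch of "render small, scale up": draw into a low-resolution
 * target texture, then stretch it over the whole output. */
static void low_res_frame(SDL_Renderer *renderer, SDL_Texture *lowres)
{
    /* lowres would be created once, e.g.:
     * SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
     *                   SDL_TEXTUREACCESS_TARGET, 320, 240); */
    SDL_SetRenderTarget(renderer, lowres);
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);
    /* ... draw the scene at 320x240 ... */
    SDL_SetRenderTarget(renderer, NULL);
    SDL_RenderCopy(renderer, lowres, NULL, NULL);  /* stretched to window size */
    SDL_RenderPresent(renderer);
}
```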

I can see how it’d reduce blits (at least until you have fullscreen
scrolling :P) but how would that reduce the memory load?

In my case this is an offscreen buffer with dirty rectangles; I could
avoid the allocation and the full-screen software blit for that
offscreen buffer.
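
For what it's worth, a dirty-rectangle setup that doesn't depend on
renderer behavior at all can be built on the window-surface API
instead: the surface from SDL_GetWindowSurface() generally keeps its
contents between updates (though it's invalidated when the window is
resized), and SDL_UpdateWindowSurfaceRects() pushes only the listed
regions to the screen. A rough sketch, where the fill is just a
stand-in for real drawing and the function name is made up:

```c
#include <SDL.h>

/* Redraw only the dirty regions directly on the window surface and
 * flush just those rectangles, skipping any full-screen blit. */
static void redraw_dirty(SDL_Window *window, const SDL_Rect *dirty, int count)
{
    SDL_Surface *screen = SDL_GetWindowSurface(window);
    int i;
    for (i = 0; i < count; i++) {
        /* placeholder drawing: fill each dirty region with black */
        SDL_FillRect(screen, &dirty[i],
                     SDL_MapRGB(screen->format, 0, 0, 0));
    }
    SDL_UpdateWindowSurfaceRects(window, dirty, count);
}
```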

Informative, thanks.