Well, I remember glSDL (SDL 1.2 2D API implementation over OpenGL)
performing a lot better with 256x256 textures for mysterious reasons,
despite all the glSDL tiling overhead - but that was about a decade
ago! I suspect those cards didn’t actually support anything above
256x256 in hardware, so it might just have been that glSDL tiling
(trivial) was much faster than tiling in the GL driver (rather
complex)…
More generally speaking: whether it's SDL clipping rectangles or
GL/D3D clipping polygons, these are trivial operations. No off-screen
pixels (or texels) are even looked at, so any attempt at eliminating
them at the application level is most likely just going to double the
(minor) clipping/culling overhead that's invariably in SDL and/or the
driver already.
Now, if you have something like hundreds or thousands of tiles or
sprites on a scrolling map, culling the off-screen ones as early as
possible in your engine can obviously be a big win, as you completely
avoid a lot of code and API call overhead. That’s a different issue,
though.

On Tue, Feb 18, 2014 at 4:18 AM, mattbentley wrote:
Hi there-
quick question-
for large images (presuming they fit within a given renderer’s max texture
dimensions), would you get better performance
(a) from breaking down the image into screen-dimension-sized fragments and
blitting those to the screen as needed?
or
(b) keeping the image as one singular image, just blitting to the screen and
letting the renderer sort out the areas that are actually displayed?
or
(c) keeping the image as one singular image, and doing your own cropped blit
to the screen?
I’m guessing the answer is both renderer-and-driver dependent, and possibly
device-dependent, but I’d like best-guesses here.
--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://consulting.olofson.net          http://olofsonarcade.com   |
'---------------------------------------------------------------------'