Message-ID: <1389393506.m2f.41438 at forums.libsdl.org>
Content-Type: text/plain; charset="iso-8859-1"
Jonas Kulla wrote:
Hmm. The final pipeline (when doing hardware accelerated drawing) is that
of the backing implementation that you end up using, be that OpenGL,
Direct3D, etc.
Thanks, that makes it clearer. So SDL2 supports different pipelines and,
if I understand correctly, it selects the one to use automatically. Is
there a way to select the pipeline myself?
Yes.
I assume there must be, but can I get a list of them and their
components (steps) from somewhere?
You can get a list of the available backends at runtime from SDL2;
look at the wiki (I have some things to do, so I'm not going to gather
up links this time). As a general rule the software backend SHOULD
always be available, and either DirectX, one or more variants of
OpenGL, or a mixture of the two will ALMOST always be available. You
need to get the list before allocating the renderer in order to
actually benefit from it, but once you have the info you can use it to
request the correct backend.
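To sketch what that query looks like in SDL2's C API: the program below lists the render drivers compiled into the library and remembers the index of one by name, which can then be passed to SDL_CreateRenderer instead of -1. The "opengl" name check is just an illustration; the actual names depend on your build and platform.

```c
/* Sketch: enumerate SDL2's render backends at runtime, then pick one
   by name. Assumes SDL2 development headers/library are installed;
   error handling is kept minimal. */
#include <stdio.h>
#include <string.h>
#include <SDL.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    int n = SDL_GetNumRenderDrivers();
    int chosen = -1;  /* -1 means "let SDL pick" */
    for (int i = 0; i < n; ++i) {
        SDL_RendererInfo info;
        if (SDL_GetRenderDriverInfo(i, &info) == 0) {
            printf("driver %d: %s\n", i, info.name);
            if (strcmp(info.name, "opengl") == 0)
                chosen = i;  /* remember the index for SDL_CreateRenderer */
        }
    }

    /* Later, forcing that backend would look like:
       SDL_CreateRenderer(window, chosen, 0);          */

    SDL_Quit();
    return 0;
}
```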
I doubt that I can query SDL2 itself (the ideal would be a Python
module with a command-line interface), but perhaps some docs…
Certainly you can't just get it to spit out a list of the backends on
your machine, since it's a dynamic library, but I'd be surprised if
there wasn't a tool stuffed away somewhere to list your machine's
backends. Even if no such tool exists, it should still be fairly
simple to write one: just print a copy of the list into a file.
I understand that SDL hides these details, but shouldn't I care about
pipeline details to make my application fast?
SDL doesn’t really try to hide stuff too hard, it tries more to even
out platform variances so that a given piece of code can run almost
identically in multiple environments. This can accidentally result in
details being hidden, but it’s not really intentional (unless those
are details of SDL’s implementation, in which case they might change
unexpectedly).
It would also help me understand how SDL2 can be applied better
(e.g., can I extend SDL with more pipelines for my imaginary embedded
system?).
This could be done, yes; however, you’d need to write at the very
least some glue code in C.
Jonas Kulla wrote:
Generally, you load an image from disk (e.g. with SDL2_image) and it
ends up in a surface, which is just a chunk of pixels in RAM with
descriptions of the format etc. From that you create a texture, which
is an image managed by the 3D implementation (and so in 99% of cases
will end up in VRAM), which is then drawn to the screen using 4
textured vertices.
If you want to apply effects via direct pixel access in RAM, you will have
to go the slow
route of manipulating your surface and reuploading it to your texture.
If I want to program my own 2D graphics effect from scratch, I thought the
fastest way is to
change values in chunk of pixels in RAM directly. Why can’t I write them to
VRAM directly?
Because this isn't the '80s or early '90s, so 99 times out of 100
you'll have to contend with a GPU. That said, there is a modern way to
do what you're talking about, but it requires "shader programs", and
SDL doesn't provide an abstraction for those (an abstraction would
basically require that a translating compiler be included as part of
SDL2: not happening). Shaders have to be customized for the underlying
target, and sometimes the underlying target simply won't support them
at all.
Also, what is the format of this chunk of pixels?
It varies. For SDL’s surfaces there are multiple formats supported.
For video memory it’ll MOSTLY depend on the graphics card (even the
actual GPU processor might not be the deciding factor, if a
manufacturer customized their drivers and/or firmware).
I would like to use an old-school VGA palette, because the drawing
algorithm uses values 0…255 for every color, not RGBA (which I think
is the format for textures). Where in the pipeline is it possible to
go from palette → RGBA?
This can happen at multiple places. SDL’s texture code can do the job
if I remember right, and Ryan once or twice posted some shader code to
do the same job.
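On the CPU side, the palette → RGBA step is just a table lookup per pixel. Here is a minimal sketch; the 0xRRGGBBAA packing and the function name are made up for illustration (SDL can do the same conversion for you via SDL_ConvertSurface on an 8-bit palettized surface).

```c
/* Sketch: expand 8-bit palette indices into 32-bit RGBA in software.
   Each output pixel is one table lookup into a 256-entry palette
   packed as 0xRRGGBBAA (an assumed layout for this example). */
#include <stdint.h>
#include <stddef.h>

void palette_to_rgba(const uint8_t *indices, size_t count,
                     const uint32_t palette[256], uint32_t *out)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = palette[indices[i]];  /* one lookup per pixel */
}
```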
Should I care about this step, or is it better to leave it to SDL's
acceleration?
Depends on how you feel like implementing it.
Does it accelerate pixel format conversion? Does it use special CPU
instructions for that?
It MIGHT use some optimized code for the process, but communicating
with a graphics card is normally constrained by the speed of the
PCI/AGP/PCIe/etc. bus rather than by CPU speed, so this isn't likely
to matter.
While trying to go from theory to practice I've come upon the Renderer
concept. It is clearly a part of the pipeline, but I can't completely
get it. What is the role of the Renderer in SDL2?
It wraps around the hairy details of the underlying accelerated
graphics system, so that you don't have to deal with those details, OR
with inconsistencies between different backends.
Jonas Kulla wrote:
The faster route would be to drop SDL2's render API altogether and
just use it to set up your window / handle events, then use OpenGL for
the actual drawing (where you can use shader programs that are
executed directly on the GPU).
Do you mean that I can provide these “pre-rendered” texture vertices from
the start? Like
if I can set pixels in any format, I can move from
[surface] -> [texture] -> [vertices] -> [screen]
to just [vertices] -> [screen]
?
I am not sure that I want to go the OpenGL way right now. I've tried
to learn how it works with the Python pyglet library, but the
complexity appeared too high to munch on. I just need to put some 2D
stuff on the screen. If FPS is high, it is a big bonus, but not the
goal. The goal is improved user experience for developers and
portability - see below.
If you don’t really care about FPS then you might try starting off
with the software renderer.
Jonas Kulla wrote:
Maybe you should try to explain why you need this information / what
you’re trying to do
under SDL2, and then we can help you accomplish that?
I want to program a demo (effect) in Python.
Is this a “demoscene” demo? I would assume software mode to be what
you REALLY want if so.

Date: Fri, 10 Jan 2014 22:38:26 +0000
From: "techtonik"
To: sdl at lists.libsdl.org
Subject: Re: [SDL] SDL2 pixel graphics pipeline

Date: Fri, 10 Jan 2014 22:56:10 +0000
From: "techtonik"
To: sdl at lists.libsdl.org
Subject: Re: [SDL] SDL2 pixel graphics pipeline
Message-ID: <1389394570.m2f.41439 at forums.libsdl.org>
Content-Type: text/plain; charset="iso-8859-1"
@Andreas: Regarding SDL_TEXTUREACCESS_STREAMING - is the "returned
memory" just a pointer, or do the contents travel between main and
video memory?
For the software renderer it's (at least in the case of normal pixel
formats) a pointer to where the data already was; for accelerated
renderers it's best to assume that the data had to be copied out over
a data bus to get into main memory, and will have to be copied back
when you're done. There CAN BE exceptions, but they're likely to come
with reduced performance.
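The round trip being described looks roughly like this in code: lock the streaming texture, write the frame into the returned buffer, and let the upload happen on unlock. Window size and pixel format are placeholders; on GPU backends, treat the locked buffer as write-only scratch memory rather than reading the previous frame back from it.

```c
/* Sketch of the STREAMING-texture round trip: lock, write in RAM,
   re-upload on unlock. Error handling trimmed to the essentials. */
#include <SDL.h>
#include <string.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("stream", SDL_WINDOWPOS_CENTERED,
            SDL_WINDOWPOS_CENTERED, 320, 200, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
            SDL_TEXTUREACCESS_STREAMING, 320, 200);

    void *pixels;
    int pitch;  /* bytes per row; may be wider than width * 4 */
    if (SDL_LockTexture(tex, NULL, &pixels, &pitch) == 0) {
        /* Write the new frame here; do NOT read old pixels back. */
        for (int y = 0; y < 200; ++y)
            memset((Uint8 *)pixels + y * pitch, y, pitch);
        SDL_UnlockTexture(tex);  /* upload across the bus happens here */
    }

    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);
    SDL_Delay(1000);
    SDL_Quit();
    return 0;
}
```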
While I do need the previous pixel data, it is already in my own
memory buffer, so won't reading pixels back just to overwrite them
affect performance?
Yes. If you're already keeping the data then shun this extra copy like
the plague: AT BEST you'll get a trivial O(1) hit every time, and at
worst you'll get a more meaningful slowdown.
@Jonny D: SDL_gpu looks interesting, but while I don't completely
understand the role of SDL_Renderer, it is hard to understand the
extra features of SDL_gpu as well. Basically, because I can't fully
see the Renderer pipeline, I cannot compare how SDL_gpu fits in and
how it is worse or better.
SDL_Renderer implements a few basic operations using whatever native
interfaces it can detect. It's mostly useful either for very basic
uses (e.g. rendering a surface that you've already prepared to a
window), or as a way to start OpenGL or DirectX without having to put
any effort into it. SDL_gpu I haven't touched.