When implementing a source alpha blit, is it better in general to use
a separate alpha surface (faster in software) or extract the alpha
information from each pixel (faster in hardware) ?
Hardware accelerated 2-D alpha blending is very rare at the moment.
Also, the semantics of the source alpha value are:
As alpha goes up, the amount of the source blended goes down.
right?
(I thought real alpha was a measure of opacity, not transparency)
I would say that you would want to support both because both will
be supported in upcoming hardware. Looking at the PC industry,
which is a good baseline IMO, cards in the 1998/1999 timeframe will
support both channeled alpha blits (alpha per pixel) and uniform
alpha blits (one alpha component per surface) because Microsoft
will be sticking this in DirectDraw and GDI2k. Also, there are
times when you want to use per-pixel and there are times when you
want to use per-surface alpha even if it is harder/slower in
software. You may also want to think about whether you need to
do anything special to “support” premultiplied alpha and
non-premultiplied alpha.
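For concreteness, the two blit flavors above might look something like this (a minimal sketch only, assuming 8-bit channels and the conventional alpha-as-opacity semantics where 255 = fully opaque; the function names are invented for illustration, not from any API):

```c
#include <stdint.h>

/* d' = (a*s + (255 - a)*d) / 255, with alpha as opacity (255 = opaque). */
static uint8_t blend(uint8_t s, uint8_t d, uint8_t a)
{
    return (uint8_t)((a * s + (255 - a) * d) / 255);
}

/* Uniform (per-surface) alpha: one alpha value covers the whole blit. */
void blit_uniform(const uint8_t *src, uint8_t *dst, int n, uint8_t a)
{
    for (int i = 0; i < n; i++)
        dst[i] = blend(src[i], dst[i], a);
}

/* Channeled (per-pixel) alpha: RGBA source, alpha extracted per pixel. */
void blit_channeled(const uint8_t *src_rgba, uint8_t *dst_rgb, int npix)
{
    for (int i = 0; i < npix; i++) {
        uint8_t a = src_rgba[4 * i + 3];
        for (int c = 0; c < 3; c++)
            dst_rgb[3 * i + c] = blend(src_rgba[4 * i + c],
                                       dst_rgb[3 * i + c], a);
    }
}
```

Note the uniform case hoists the alpha fetch out of the inner loop, which is part of why it is the easy one to support; the channeled case pays a per-pixel extraction.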
Word,
Paul
At 11:10 AM 4/14/98 -0700, you wrote:

When implementing a source alpha blit, is it better in general to use
a separate alpha surface (faster in software) or extract the alpha
information from each pixel (faster in hardware) ?
Hardware accelerated 2-D alpha blending is very rare at the moment.
Also, the semantics of the source alpha value are:
As alpha goes up, the amount of the source blended goes down.
right?
(I thought real alpha was a measure of opacity, not transparency)
support both channeled alpha blits (alpha per pixel) and uniform
alpha blits (one alpha component per surface)
Uniform alpha blits are easy to support.
My question was more whether to support channeled alpha blits using
the alpha component of each pixel or to use a separate alpha surface.
Probably the right answer is “both”, but if I had to support one now,
which would be most useful and likely to have the best longevity?
You may also want to think about whether you need to
do anything special to “support” premultiplied alpha and non-
premultiplied alpha.
I thought that the alpha component had to be multiplied by both
the source and destination pixels. ??
My question was more whether to support channeled alpha blits using
the alpha component of each pixel or to use a separate alpha surface.
Whoops. Misread your question…
Probably the right answer is “both”, but if I had to support one now,
which would be most useful and likely to have the best longevity?
I would say you only need to support an alpha component per pixel.
I’ve never heard of anyone using a separate alpha plane. I am sure
it is possible, but what are some uses of this?
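For what it's worth, the two storage layouts under discussion reduce to something like this (illustrative only, assuming 32-bit ARGB pixels with alpha in the top byte; the names are invented):

```c
#include <stdint.h>

/* Channeled: alpha lives inside each 32-bit ARGB pixel and is
   extracted per pixel during the blit. */
static uint8_t alpha_from_pixel(uint32_t argb)
{
    return (uint8_t)(argb >> 24);
}

/* Separate plane: the RGB surface is paired with a parallel 8-bit
   alpha surface addressed with the same coordinates. */
static uint8_t alpha_from_plane(const uint8_t *plane, int x, int y, int pitch)
{
    return plane[y * pitch + x];
}
```

The separate plane touches two surfaces per blit (two cache streams), while the in-pixel form touches one, which ties into the cache argument made later in this thread.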
I thought that the alpha component had to be multiplied by both
the source and destination pixels. ??
Premultiplied alpha just means that the RGB components of the pixels
have already been multiplied by the A component. By rearranging the
alpha compositing operation, premultiplied surfaces save some
‘rendering-time’ multiplies. The blending operations available are
somewhat limited. For an introduction to some of the issues of alpha
and alpha blending, check out:
These are two technotes written by Alvy Ray Smith about alpha and
digital compositing. I would recommend that anyone check out these
technotes. Some of the info is obvious, but it is always nice to
freshen up. There are some others in that directory that are
interesting too. (There are PostScript versions in alvy/PS or
something.)
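The multiply that premultiplication saves can be shown in a couple of lines (a sketch with float alpha in [0,1] for clarity; function names are illustrative):

```c
/* Straight (non-premultiplied) "over": two multiplies per channel. */
float over_straight(float s, float d, float a)
{
    return a * s + (1.0f - a) * d;
}

/* Premultiplied source: s_pre == a*s was baked in ahead of time,
   so only one multiply remains at blend time. */
float over_premul(float s_pre, float d, float a)
{
    return s_pre + (1.0f - a) * d;
}
```

Both compute the same composite; the premultiplied form just moves one multiply per channel from blit time to asset-preparation time.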
When implementing a source alpha blit, is it better in general to use
a separate alpha surface (faster in software) or extract the alpha
information from each pixel (faster in hardware) ?
Extracting the alpha is always faster on the PPC, because it doesn’t
thrash the cache as much. I expect the same will hold on Pentiums,
even though they can’t bitfield-extract to save themselves.
Hardware accelerated 2-D alpha blending is very rare at the moment.
Not with 3D cards
Also, the semantics of the source alpha value are:
As alpha goes up, the amount of the source blended goes down.
right?
(I thought real alpha was a measure of opacity, not transparency)
That is a windowing system with real-time dragging, scaling and alpha
channels (+ lots of other groovy things) that a friend and I wrote last
semester. We took the meaning of alpha channel as a measure of the amount
of ink on clear sheets (of virtual acetate!), 0 = solid, 255 =
transparent. That way combination is simply multiplication.
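The ink-on-acetate model described above really does make stacking a single multiply (a sketch, assuming the 0 = solid, 255 = clear convention just given; the function name is invented):

```c
#include <stdint.h>

/* "Ink on acetate": 0 = solid ink, 255 = fully clear sheet.
   Looking through two stacked sheets multiplies their transparencies. */
static uint8_t stack_sheets(uint8_t t1, uint8_t t2)
{
    return (uint8_t)((t1 * t2) / 255);
}
```

A solid sheet blocks everything regardless of what is behind it, and two half-clear sheets together pass roughly a quarter of the light.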
You might try porting N (or h, as it became known… ) to SDL, as it
provides a nifty game windowing environment for titles etc. (it even has
fully antialiased TrueType fonts). Or, I might port it…
njh

On Tue, 14 Apr 1998, Sam Lantinga wrote:
To unsubscribe, e-mail: penguinplay-unsubscribe at sunsite.auc.dk
For additional commands, e-mail: penguinplay-help at sunsite.auc.dk
You may also want to think about whether you need to
do anything special to “support” premultiplied alpha and non-
premultiplied alpha.
I thought that the alpha component had to be multiplied by both
the source and destination pixels. ??
Well, yes, and no. If the alpha mask is going to be constant, and the
dest is going to be constant, then you can prescale the dest and save
yourself a multiply per pixel.
btw: I’ve seen three people all publish
d = a*d + (1-a)*s
for their alpha blenders, but simple algebra gives:
d = a*(d - s) + s
but you knew that…
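The rearrangement is easy to check in code (a sketch with float alpha for clarity, keeping the thread's convention in which alpha weights the destination; names are illustrative):

```c
/* As published: two multiplies per channel. */
static float blend_two_mul(float s, float d, float a)
{
    return a * d + (1.0f - a) * s;
}

/* Rearranged by simple algebra: a*d + s - a*s = a*(d - s) + s,
   so only one multiply per channel remains. */
static float blend_one_mul(float s, float d, float a)
{
    return a * (d - s) + s;
}
```

The same trick is what makes the constant-alpha, constant-destination prescaling mentioned above pay off: anything that can be hoisted out of the per-pixel loop is one less multiply per pixel.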
njh

On Tue, 14 Apr 1998, Sam Lantinga wrote:
Hardware accelerated 2-D alpha blending is very rare at the moment.
Not with 3D cards
Actually it is. They can do per-vertex alpha, but not per-pixel alpha.