YUV Overlays

Hello,
I’m trying to get the best performance out of SDL, and I’m wondering about YUV
overlays. Is it worth using them instead of SDL_Surfaces? Are there cases
where they are faster? Can you convert a surface to a YUV overlay, or
do you always need to write the pixels manually? And what exactly do Y, U
and V stand for? (I just can’t figure it out.) Sorry for the flood of questions, and
thanks for your time :)

Alex.

“Alexandre Courbot” wrote:

I’m trying to get the best performance out of SDL, and I’m wondering
about YUV overlays. Is it worth using them instead of SDL_Surfaces?
Are there cases where they are faster?

it seems many videocards support ‘video overlays’. the advantage they
have is that they don’t need to be transferred into the screen’s video buffer.
on systems that support overlays, i suppose you could do some things
quicker with them, but it would probably be more difficult and less
recommended?

Can you convert a surface to a YUV overlay, or
do you always need to write the pixels manually? And what
exactly do Y, U and V stand for? (just can’t figure it out)

the YUV colorspace is popular for lossy video formats, since
you can just chop bits out of the color and still have it look
pretty good. (unlike RGB, where once you start taking bits out,
it looks quite bad). the “Y” in YUV represents the luminance (brightness)
value for each pixel. the “UV” part is two numbers representing
the chroma. the UV part is where you can strip out information.
(most YUV formats keep as much info in the Y as possible)
(many YUV formats share “UV” values across neighboring pixels)

the reason videocard overlays are in YUV space is for another
timesaver. since the video data is most likely in YUV format,
it saves CPU from converting to RGB colorspace.

converting from RGB to YUV is a one-time lossy conversion.
that means if you convert back and forth between RGB and YUV,
you will only lose color data the first time you go to YUV.
(this assumes your YUV uses fewer bits than your RGB data;
if a YUV format takes as many bits as RGB there should be no
loss (perhaps some very minor shifting due to rounding errors?))
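A quick way to convince yourself of the "lose it only once" claim: chroma subsampling is basically averaging, and averaging already-equal pairs gives the same values back. A hypothetical 1-D toy model (not real video code):

```c
/* 1-D toy model of chroma subsampling: average each pair of samples,
   then "upsample" by duplicating the average. Running it a second time
   changes nothing, because the pairs are already equal - so the loss
   happens only on the first conversion. */
static void sub_then_up(unsigned char *buf, int n)
{
    int i;
    for (i = 0; i + 1 < n; i += 2) {
        unsigned char avg = (unsigned char)((buf[i] + buf[i + 1]) / 2);
        buf[i] = avg;
        buf[i + 1] = avg;
    }
}
```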

if it helps, you can think of YUV kind of like HSV. the "UV"
doesn’t work quite the same as “HS” but both describe color values.
the “Y” works like the “V”, both describing luminance (or brightness)

hopefully this makes for a passable intro to YUV!

Hi!

On Wed, Oct 11, 2000 at 10:55:23AM -0700, Pete Shinners wrote:

the reason videocard overlays are in YUV space is for another
timesaver.

Which nevertheless raises the question why there is no support for RGB
overlays in SDL. There is at least one chip (Cirrus Logic GD5446) that can
do it (I think Matrox chips also support this) and it would provide windowed
environments with a display surface (AKA PIP) that has a fixed frame buffer
address and doesn’t need to be constantly refreshed, which is extremely
handy for emulators.

Bye,
Christian


/ Coding on PowerPC and proud of it
/ http://www.uni-mainz.de/~bauec002/

Hello,

I’m trying to integrate an SDL module in an existing project.
We are working with our own YUV buffers for all the image
transformations, and I’d like to use SDL to display the YUV buffers. (I
may not use the SDL YUV overlay buffers directly in my program)

My question is :
Do I need to copy my YUV buffers to the yuvoverlay->pixels buffers
(memcpy(yuvoverlay->pixels[0], ptrYbuffer,…) ; …)
or can I just copy the pointers (yuvoverlay->pixels[0] = ptrYbuffer;
…) ?

I tried both and both work.
Does a simple pointer copy work in every case?
How do I know when I can use a simple pointer copy?

Thanks,

David Ergo

Hello,

I’m trying to integrate an SDL module in an existing project.
We are working with our own YUV buffers for all the image
transformations, and I’d like to use SDL to display the YUV buffers. (I
may not use the SDL YUV overlay buffers directly in my program)

My question is :
Do I need to copy my YUV buffers to the yuvoverlay->pixels buffers
(memcpy(yuvoverlay->pixels[0], ptrYbuffer,…) ; …)
or can I just copy the pointers (yuvoverlay->pixels[0] = ptrYbuffer;
…) ?

You need to copy the buffers. The reason it appears to work is probably
because you aren’t getting hardware acceleration. As soon as hardware
acceleration works, the program will crash badly.

I’ll probably add a function equivalent to SDL_CreateRGBSurfaceFrom()
so you can specify your own software YUV surfaces, but they’ll never
be hardware accelerated.

See ya,
-Sam Lantinga, Lead Programmer, Loki Entertainment Software
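To illustrate the copy Sam describes: a safe copy goes row by row, because the overlay's planes may be padded (overlay->pitches[i] can be larger than the plane width). A sketch of the core helper, with the SDL 1.2 calls shown only in comments so the helper itself is plain C (buffer names like ptrYbuffer are the poster's, the helper name is made up):

```c
#include <string.h>

/* Copy one tightly packed plane (width x height bytes) into a destination
   whose rows are 'pitch' bytes apart (pitch >= width when rows are padded).
   With SDL 1.2 you would call this between SDL_LockYUVOverlay(overlay)
   and SDL_UnlockYUVOverlay(overlay), once per plane:
       copy_plane(overlay->pixels[0], overlay->pitches[0], ptrYbuffer, w,   h);
       copy_plane(overlay->pixels[1], overlay->pitches[1], ptrUbuffer, w/2, h/2);
       copy_plane(overlay->pixels[2], overlay->pitches[2], ptrVbuffer, w/2, h/2);
   (plane order shown for IYUV; YV12 swaps the U and V planes) */
static void copy_plane(unsigned char *dst, int pitch,
                       const unsigned char *src, int width, int height)
{
    int row;
    for (row = 0; row < height; row++)
        memcpy(dst + row * pitch, src + row * width, width);
}
```

A single big memcpy only works when pitch happens to equal width, which is exactly the kind of thing that differs once hardware surfaces come into play.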

Sam Lantinga wrote:

Hello,

I’m trying to integrate an SDL module in an existing project.
We are working with our own YUV buffers for all the image
transformations, and I’d like to use SDL to display the YUV buffers. (I
may not use the SDL YUV overlay buffers directly in my program)

My question is :
Do I need to copy my YUV buffers to the yuvoverlay->pixels buffers
(memcpy(yuvoverlay->pixels[0], ptrYbuffer,…) ; …)
or can I just copy the pointers (yuvoverlay->pixels[0] = ptrYbuffer;
…) ?

You need to copy the buffers. The reason it appears to work is probably
because you aren’t getting hardware acceleration. As soon as hardware
acceleration works, the program will crash badly.

I’ll probably add a function equivalent to SDL_CreateRGBSurfaceFrom()
so you can specify your own software YUV surfaces, but they’ll never
be hardware accelerated.

Yes … that would be a great function to have … maybe with hooks to the
already existing YUV conversion functions (_sw and _mmx).

I have the same problem here … I am trying to display the output of the
Project Mayo OpenDivX codec. The routine returns the Y, U and V planes
separately.

I don’t want to use the YUV overlay functionality built into SDL for display
but create a 32-bit surface instead that can be used for other things as
well.

Does anyone have C code to convert planar YUV to 32bit RGB? Please post or
reference.

Ciao
Andreas–
| Andreas Schiffler aschiffler at home.com |
| Senior Systems Engineer - Deskplayer Inc., Buffalo |
| 4707 Eastwood Cres., Niagara Falls, Ont L2E 1B4, Canada |
| +1-905-371-3652 (private) - +1-905-371-8834 (work/fax) |

I don’t want to use the YUV overlay functionality built into SDL for display
but create a 32-bit surface instead that can be used for other things as
well.

What exactly do you mean?

Does anyone have C code to convert planar YUV to 32bit RGB? Please post or
reference.

You can use the code in SDL itself. It’s under the BSD license.
Remember, you lose all hardware acceleration when you restrict yourself
to software surfaces.

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software
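For anyone who just wants the math, a minimal floating-point version of the BT.601 "video range" conversion for one sample looks roughly like this (a sketch for reference only, not SDL's actual routine; SDL's code uses fixed-point tables instead):

```c
/* BT.601 "video range" YCbCr -> 8-bit RGB for a single sample.
   Y is nominally 16..235, Cb/Cr nominally 16..240. */
static unsigned char clamp255(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)(v + 0.5);  /* round to nearest */
}

static void ycbcr_to_rgb(int y, int cb, int cr,
                         unsigned char *r, unsigned char *g, unsigned char *b)
{
    double yf = 1.164 * (y - 16);     /* expand 16..235 to 0..255 */
    *r = clamp255(yf + 1.596 * (cr - 128));
    *g = clamp255(yf - 0.813 * (cr - 128) - 0.391 * (cb - 128));
    *b = clamp255(yf + 2.018 * (cb - 128));
}
```

Calling this per pixel is slow, which is why production code precomputes the multiplications into lookup tables, as the fixed-point routine later in this thread does.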

Sam Lantinga wrote:

I don’t want to use the YUV overlay functionality built into SDL for display
but create a 32-bit surface instead that can be used for other things as
well.

What exactly do you mean?

Well, the YUV overlay routines seem to be tied to hardware accelerated display -
maybe I am wrong, but that’s what it looks like from the manual pages.

What if I don’t want to do that with the decoded image. Say, I want to use the
decoded YUV frame as an OpenGL texture or for some other image processing task.
So basically it would be nice to have a routine to create an RGB surface, just in
memory and without hardware acceleration, from YUV input data.

Does anyone have C code to convert planar YUV to 32bit RGB? Please post or
reference.

You can use the code in SDL itself. It’s under the BSD license.
Remember, you lose all hardware acceleration when you restrict yourself
to software surfaces.

Yes, I looked at that but couldn’t figure out how to use the internal YUV
routines.

Anyhow, I wrote a function that will work for the OpenDivX output from the Linux
decore() routine. If anyone can show me how to interface the SDL routines so I
can drop this code, please do.

Ciao
Andreas

/*
   Convert YUV planes (U and V are quarter size) to RGB32
*/

#define KcrR 76284
#define KcrG 53281
#define KcbG 25625
#define KcbB 132252
#define Ky   76284

int yuv2rgb32(void *yi, void *ui, void *vi, int w, int h, void *rgb)
{
  int tmp;
  int i, j;
  int y, crR, crG, cbG, cbB;
  unsigned char *bufcr, *bufcb, *bufy, *bufoute, *bufouto;
  int oskip, yskip;

  oskip = w * 4;
  yskip = w;

  bufy = (unsigned char *)yi;
  bufcb = (unsigned char *)ui;
  bufcr = (unsigned char *)vi;
  bufoute = (unsigned char *)rgb;
  bufouto = (unsigned char *)rgb;
  bufouto += oskip;

  for (i = 0; i < (h >> 1); i++)
  {
    for (j = 0; j < w; j += 2)
    {
      /* one chroma sample covers a 2x2 block of luma samples */
      crR = (*bufcr - 128) * KcrR;
      crG = (*(bufcr++) - 128) * KcrG;
      cbG = (*bufcb - 128) * KcbG;
      cbB = (*(bufcb++) - 128) * KcbB;

      /* even row, left pixel: bytes written as A,B,G,R to match the
         surface masks used below */
      y = (bufy[j] - 16) * Ky;
      *(bufoute++) = 255;
      tmp = (y + cbB) >> 16;
      *(bufoute++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y - crG - cbG) >> 16;
      *(bufoute++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y + crR) >> 16;
      *(bufoute++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);

      /* even row, right pixel */
      y = (bufy[j + 1] - 16) * Ky;
      *(bufoute++) = 255;
      tmp = (y + cbB) >> 16;
      *(bufoute++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y - crG - cbG) >> 16;
      *(bufoute++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y + crR) >> 16;
      *(bufoute++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);

      /* odd row, left pixel */
      y = (bufy[j + yskip] - 16) * Ky;
      *(bufouto++) = 255;
      tmp = (y + cbB) >> 16;
      *(bufouto++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y - crG - cbG) >> 16;
      *(bufouto++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y + crR) >> 16;
      *(bufouto++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);

      /* odd row, right pixel */
      y = (bufy[j + 1 + yskip] - 16) * Ky;
      *(bufouto++) = 255;
      tmp = (y + cbB) >> 16;
      *(bufouto++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y - crG - cbG) >> 16;
      *(bufouto++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
      tmp = (y + crR) >> 16;
      *(bufouto++) = (tmp > 255) ? 255 : ((tmp < 0) ? 0 : tmp);
    }
    /* each pass writes two rows; skip over the row the other pointer wrote */
    bufoute += oskip;
    bufouto += oskip;
    bufy += yskip << 1;
  }
  return 0;
}

The target surface is created as follows:

framesurface = SDL_CreateRGBSurface(SDL_SWSURFACE, dec_param.x_dim,
dec_param.y_dim, 32, 0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF);
/* Color space conversion into surface */

The actual call is this:

/* Decode frame */
dec_frame.length = framesize;
dec_frame.bitstream = framebuffer;
dec_frame.bmp = (void *) frameoutbuffer;
dec_frame.render_flag = 1;
decore(MY_APP_ID, 0, &dec_frame, NULL);
yuv2rgb32(frameoutbuffer[0], frameoutbuffer[1], frameoutbuffer[2],
dec_param.x_dim, dec_param.y_dim, framesurface->pixels);

Well, the YUV overlay routines seem to be tied to hardware accelerated display -
maybe I am wrong, but that’s what it looks like from the manual pages.

You can actually “display” YUV to any 16 or 32 bit RGB SDL surface, that’s
why the target surface is one of the parameters to the YUV overlay creation
call. The target parameter determines where the YUV data resides, in video
memory or in system memory, and sets up RGB mapping tables for fast decode.

Anyhow, I wrote a function that will work for the OpenDivX output from the Linux
decore() routine. If anyone can show me how to interface the SDL routines so I
can drop this code please do.

Any reason not to use that routine?

See ya,
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

Sam Lantinga wrote:

Well, the YUV overlay routines seem to be tied to hardware accelerated display -
maybe I am wrong, but that’s what it looks like from the manual pages.

You can actually “display” YUV to any 16 or 32 bit RGB SDL surface, that’s
why the target surface is one of the parameters to the YUV overlay creation
call. The target parameter determines where the YUV data resides, in video
memory or in system memory, and sets up RGB mapping tables for fast decode.

Great, that I didn’t know.

Might be good to add a short example or comment regarding this to the documentation,
as this is not apparent from the function name and its normal use as an “Overlay”
on displays managed entirely by the video card.

Anyhow, I wrote a function that will work for the OpenDivX output from the Linux
decore() routine. If anyone can show me how to interface the SDL routines so I
can drop this code please do.

Any reason not to use that routine?

Nope! I’ll switch ASAP.

Ciao
Andreas


Andreas Schiffler wrote:

Sam Lantinga wrote:

Well, the YUV overlay routines seem to be tied to hardware accelerated
display - maybe I am wrong, but that’s what it looks like from the manual
pages.

You can actually “display” YUV to any 16 or 32 bit RGB SDL surface,
that’s why the target surface is one of the parameters to the YUV overlay
creation call. The target parameter determines where the YUV data resides,
in video memory or in system memory, and sets up RGB mapping tables for
fast decode.

Great, that I didn’t know.

Might be good to add a short example or comment regarding this to the
documentation, as this is not apparent from the function name and its
normal use as an “Overlay” on displays managed entirely by the video
card.

Anyhow, I wrote a function that will work for the OpenDivX output
from the Linux decore() routine. If anyone can show me how to interface
the SDL routines so I can drop this code, please do.

Any reason not to use that routine?

Nope! I’ll switch ASAP.

Hmm, doesn’t work as advertised. YUV decoding to the screen works like a
charm but YUV decoding to a pre-allocated software surface doesn’t work
at all.

Here are some code examples …

This doesn’t work (black screen):

framesurface = SDL_CreateRGBSurface(SDL_SWSURFACE, dec_param.x_dim,
dec_param.y_dim, 32, 0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF);

/* Create YUV overlay */
overlay = SDL_CreateYUVOverlay(dec_param.x_dim, dec_param.y_dim,
SDL_IYUV_OVERLAY, framesurface);

/* Output buffer is set to YUV pixels */
SDL_LockYUVOverlay(overlay);
decore(MY_APP_ID, 0, &dec_frame, NULL);
SDL_UnlockYUVOverlay(overlay);

/* Color space conversion into surface */
{ SDL_Rect srect, drect;
drect.x=0;
drect.y=0;
drect.w=dec_param.x_dim;
drect.h=dec_param.y_dim;
SDL_DisplayYUVOverlay(overlay, &drect);
}

/* Display decoded frame */
{ SDL_Rect srect, drect;
srect.x=0;
srect.y=0;
srect.w=dec_param.x_dim;
srect.h=dec_param.y_dim;
drect.x=0;
drect.y=0;
drect.w=640;
drect.h=480;
SDL_BlitSurface(framesurface, &srect, screen, &drect);
}

This does work:

/* Create YUV overlay */
overlay = SDL_CreateYUVOverlay(dec_param.x_dim, dec_param.y_dim,
SDL_IYUV_OVERLAY, screen);

/* Output buffer is set to YUV pixels */
SDL_LockYUVOverlay(overlay);
decore(MY_APP_ID, 0, &dec_frame, NULL);
SDL_UnlockYUVOverlay(overlay);

/* Color space conversion into surface */
{ SDL_Rect srect, drect;
drect.x=0;
drect.y=0;
drect.w=dec_param.x_dim;
drect.h=dec_param.y_dim;
SDL_DisplayYUVOverlay(overlay, &drect);
}
…

This doesn’t work (black screen):

framesurface = SDL_CreateRGBSurface(SDL_SWSURFACE, dec_param.x_dim,
dec_param.y_dim, 32, 0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF);

You’ve created a surface with an alpha channel, which probably isn’t
what you want. Set the alpha mask to 0. Is there any particular reason
you want an RGBA surface?

See ya,
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

I’m trying to set up a couple of YUV overlays to display data from
video4linux devices. Could someone point me in the direction of
some sample code?

I can capture in YUV422, YUYV, UYVY, YUV420, YUV422P (planar), YUV411P (planar)
formats, and xawtv reports the following modes as available: YUY2, YV12, UYVY.

I’ve tried doing the following with different input/output formats:

char *YUVptr; // data from bttv [X*Y*2]

SDL_LockYUVOverlay
memcpy(overlay->pixels[0], YUVptr, X*Y)
memcpy(overlay->pixels[1], YUVptr + X*Y, X*Y/2)
memcpy(overlay->pixels[2], YUVptr + X*Y + X*Y/2, X*Y/2)
SDL_DisplayYUVOverlay
SDL_UnlockYUVOverlay

pixels[0] seems to be fine, I can see a green image, but pixels[1-2] are
messed up (stretched, 2x 1/2 width …).

What’s the most direct way of writing to a YUV overlay? I was hoping
to be able to capture directly into the overlay without reorganizing the
data :(

How can I use multiple YUV overlays without them flickering? Using a
single YUV overlay seems fine.
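One common cause of chroma that looks stretched like this is mixing up packed (YUY2/UYVY) and planar (YV12/IYUV) layouts, or ignoring row padding. For a tightly packed planar 4:2:2 buffer of X*Y*2 bytes, the plane split works out as below (a sketch with a made-up helper name; if the driver pads rows, you must instead copy row by row using the overlay's pitches[]):

```c
/* Split a tightly packed planar 4:2:2 capture buffer into its planes:
   [ Y: X*Y bytes ][ U: X*Y/2 bytes ][ V: X*Y/2 bytes ]  = X*Y*2 total */
static void split_yuv422p(unsigned char *buf, long X, long Y,
                          unsigned char **y, unsigned char **u,
                          unsigned char **v)
{
    *y = buf;                       /* full-resolution luma     */
    *u = buf + X * Y;               /* half-width chroma        */
    *v = buf + X * Y + X * Y / 2;   /* half-width chroma        */
}
```

Note that a YV12/IYUV overlay expects 4:2:0 chroma (quarter size, X*Y/4 per plane), so copying 4:2:2 chroma planes into it directly will show exactly this kind of distortion.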

In mpeg4ip, we have a few yuv utilities, plus our player
uses yuv.

http://www.mpeg4ip.net

Try utils/yuv/yuvdump.cpp

“Robert D.” wrote:

I’m trying to set up a couple of YUV overlays to display data from
video4linux devices. Could someone point me in the direction of
some sample code?

I can capture in YUV422, YUYV, UYVY, YUV420, YUV422P (planar), YUV411P (planar)
formats, and xawtv reports the following modes as available: YUY2, YV12, UYVY.

I’ve tried doing the following with different input/output formats:

char *YUVptr; // data from bttv [X*Y*2]

SDL_LockYUVOverlay
memcpy(overlay->pixels[0], YUVptr, X*Y)
memcpy(overlay->pixels[1], YUVptr + X*Y, X*Y/2)
memcpy(overlay->pixels[2], YUVptr + X*Y + X*Y/2, X*Y/2)
SDL_DisplayYUVOverlay
SDL_UnlockYUVOverlay

pixels[0] seems to be fine, I can see a green image, but pixels[1-2] are
messed up (stretched, 2x 1/2 width …).

What’s the most direct way of writing to a YUV overlay? I was hoping
to be able to capture directly into the overlay without reorganizing the
data :(

How can I use multiple YUV overlays without them flickering? Using a
single YUV overlay seems fine.

Hi all,
I’ve been fooling around with YUV overlays under X11(4.1), Linux(Redhat
7.1), SDL (1.2.2)
All works fine, except that when I do an

SDL_UpdateRect(screen,0,0,0,0);

The overlay disappears. I’ve verified that I am indeed getting a hardware
YUV buffer. I’m able to load it and display the output fine. It’s as soon
as I start using the above function that it disappears. It seems like the z
buffer values are wrong. If I move the display or force it to update
quickly, I can see the YUV overlay for a split second. Does anyone have any
idea what’s going on? I should also note that I’m using the nvidia drivers.

Another question while I’m at it. How do I detect when the application
window has moved? I can tell when it’s resized, minimized, etc., just
not when it is moved. Thanks for any input.

Calvin…

Hi all,
I’ve been fooling around with YUV overlays under X11(4.1), Linux(Redhat
7.1), SDL (1.2.2)
All works fine, except that when I do an

SDL_UpdateRect(screen,0,0,0,0);

The overlay disappears. I’ve verified that I am indeed getting a hardware
YUV buffer. I’m able to load it and display the output fine. It’s as soon
as I start using the above function that it disappears. It seems like the z
buffer values are wrong. If I move the display or force it to update
quickly, I can see the YUV overlay for a split second. Does anyone have any
idea what’s going on? I should also note that I’m using the nvidia drivers.

There’s no Z buffer in YUV / RGB 2D screen display.
What’s happening is that the XImage code doesn’t have any idea where the
overlay is, and overwrites the screen with the SDL screen surface contents.
There are two things you can do. You can either have SDL draw the overlay
on the SDL screen surface, which eliminates the hardware acceleration, or
you can track the position of the overlay yourself (iff you have hardware
overlays) and then explicitly don’t update that area of the screen surface.
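Tracking the overlay yourself amounts to rectangle subtraction: update the screen area minus the overlay's rectangle. A sketch of that split (hypothetical helper, assuming the overlay rect lies fully inside the screen rect):

```c
typedef struct { int x, y, w, h; } Rect;

/* Split 'outer' minus 'hole' into up to four update rectangles:
   a band above, a band below, and strips left and right of the hole.
   Assumes 'hole' lies fully inside 'outer'. */
static int subtract_rect(Rect outer, Rect hole, Rect out[4])
{
    int n = 0;
    int hole_bottom = hole.y + hole.h, outer_bottom = outer.y + outer.h;
    int hole_right  = hole.x + hole.w, outer_right  = outer.x + outer.w;

    if (hole.y > outer.y) {           /* band above the hole */
        Rect r = { outer.x, outer.y, outer.w, hole.y - outer.y };
        out[n++] = r;
    }
    if (hole_bottom < outer_bottom) { /* band below the hole */
        Rect r = { outer.x, hole_bottom, outer.w, outer_bottom - hole_bottom };
        out[n++] = r;
    }
    if (hole.x > outer.x) {           /* strip left of the hole */
        Rect r = { outer.x, hole.y, hole.x - outer.x, hole.h };
        out[n++] = r;
    }
    if (hole_right < outer_right) {   /* strip right of the hole */
        Rect r = { hole_right, hole.y, outer_right - hole_right, hole.h };
        out[n++] = r;
    }
    return n;
}
```

The resulting rectangles could then be passed to SDL_UpdateRects() instead of doing a full-screen SDL_UpdateRect(), leaving the overlay's area untouched.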

Another question while I’m at it. How do I detect when the application
window has moved. I can tell if it’s resized, minimized, resized, etc, just
not when it is moved. Thanks for nay input.

There’s no way at the moment. You’ll have to go in to the system specific
parts of SDL and add it.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

I think I have a handle on this overlay stuff now. I was wondering, is there
a way to clear the screen where the overlay was overwritten, so the overlay
can show through again? I’ve been researching and it seems to be some shade
of blue, but I can’t quite seem to put my finger on it. I’m trying to
superimpose some text on an image/video - any ideas? Am I going about this
the wrong way?

Thanks,
Calvin

From: Sam Lantinga

Reply-To: sdl at libsdl.org
To: sdl at libsdl.org
Subject: Re: [SDL] YUV Overlays
Date: Mon, 15 Apr 2002 10:45:39 -0700

Hi all,
I’ve been fooling around with YUV overlays under X11(4.1), Linux(Redhat
7.1), SDL (1.2.2)
All works fine, except that when I do an

SDL_UpdateRect(screen,0,0,0,0);

The overlay disappears. I’ve verified that I am indeed getting a hardware
YUV buffer. I’m able to load it and display the output fine. It’s as soon
as I start using the above function that it disappears. It seems like the z
buffer values are wrong. If I move the display or force it to update
quickly, I can see the YUV overlay for a split second. Does anyone have any
idea what’s going on? I should also note that I’m using the nvidia drivers.

There’s no Z buffer in YUV / RGB 2D screen display.
What’s happening is that the XImage code doesn’t have any idea where the
overlay is, and overwrites the screen with the SDL screen surface contents.
There are two things you can do. You can either have SDL draw the overlay
on the SDL screen surface, which eliminates the hardware acceleration, or
you can track the position of the overlay yourself (iff you have hardware
overlays) and then explicitly don’t update that area of the screen surface.

Another question while I’m at it. How do I detect when the application
window has moved? I can tell when it’s resized, minimized, etc., just
not when it is moved. Thanks for any input.

There’s no way at the moment. You’ll have to go in to the system specific
parts of SDL and add it.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment



I think I have a handle on this overlay stuff now. I was wondering, is there
a way to clear the screen where the overlay was overwritten, so the overlay
can show through again? I’ve been researching and it seems to be some shade
of blue, but I can’t quite seem to put my finger on it. I’m trying to
superimpose some text on an image/video - any ideas? Am I going about this
the wrong way?

Just don’t “clear” it in the first place. I’m not sure how you can refresh
it, short of destroying and recreating the overlay entirely.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment