Reading a region of pixels' alpha values from a PNG file using SDL_image? (OpenGL+SDL & Linux)

Hey!

Could somebody help me write code that reads a 10x10 (or any other
size) region and determines the alpha of each pixel? I'm trying to use
the code below, but I always seem to get wrong data >.<! According to
the main code, I should get something like this:

(Read the 1's as 255; I just used 1's here to make it clearer.)

0 0 0 0 1 0 0 0 0
0 0 0 1 1 1 0 0 0
0 0 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1

But instead I get weird stuff like this:

128 170 255 128 170 255 128 170 255
128 170 255 128 170 255 128 170 255
128 170 255 128 170 255 128 170 255
128 170 255 128 170 255 128 170 255
128 170 255 128 170 255 128 170 255
128 170 255 128 170 255 128 170 255

Even if those are alpha values, they aren't the right ones for the
positions I'm requesting. Any help would be GREATLY appreciated!

  • DARKGuy

P.D.: Code goes below.

int main()
{
    Terrain.Load("terrain.png");

    Uint32 pixel;
    int px, py;
    Uint8 red, green, blue, alpha2;

    SDL_LockSurface(Terrain.pix[0]);

    for(py = 200; py <= 210; py++){
        for(px = 200; px <= 230; px++){
            pixel = getpixel(Terrain.pix[0], px, py);
            SDL_GetRGBA(pixel, Terrain.pix[0]->format,
                        &red, &green, &blue, &alpha2);
            printf("%u ", alpha2);
        }
        printf("\n");
    }
    SDL_UnlockSurface(Terrain.pix[0]);

    exit(0);
}


FUNCTIONS

Uint32 getpixel(SDL_Surface *surface, int x, int y)
{
    int bpp = surface->format->BytesPerPixel;
    /* Here p is the address to the pixel we want to retrieve */
    Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp / 8;
    switch(bpp) {
    case 1:
        return *p;
    case 2:
        return *(Uint16 *)p;
    case 3:
        if(SDL_BYTEORDER == SDL_BIG_ENDIAN)
            return p[0] << 16 | p[1] << 8 | p[2];
        else
            return p[0] | p[1] << 8 | p[2] << 16;
    case 4:
        return *(Uint32 *)p;
    default:
        return 0;   /* shouldn't happen, but avoids warnings */
    }
}

Terrain.pix[0] is generated from this:

void Texture::Load(const char filename[]){
    GLuint texture;
    SDL_Surface * image2 = IMG_Load(filename);
    SDL_Surface * imgFile = SDL_DisplayFormatAlpha(image2);

    width = (GLfloat)image2->w;
    height = (GLfloat)image2->h;

    imgF = image2;   // here. Look below for further info

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);

    if(imgFile->format->BitsPerPixel == 32 ||
       imgFile->format->BitsPerPixel == 24){
        glTexImage2D(GL_TEXTURE_2D, 0, 4, imgFile->w, imgFile->h, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, imgFile->pixels);
    } else {
        glTexImage2D(GL_TEXTURE_2D, 0, 4, imgFile->w, imgFile->h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, imgFile->pixels);
    }

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

imgF is this:

class Texture {
   public:
        GLuint id;
        GLfloat width, height;
        SDL_Surface * imgF;
      void Load(const char filename[]);
};

and pix[0] is this:

class Sprite {
    public:
            ...
        vector <SDL_Surface*> pix;
};

When I create a sprite, I do this:

int Sprite::Load(const char texfilename[]) {
    Texture tex;
    tex.Load(texfilename);
    img.push_back(tex.id);
    width.push_back(tex.width);
    height.push_back(tex.height);
    alpha.push_back(1.0f);
    pix.push_back(tex.imgF);
    animImg=0;
    animCounter=0;
    angle=0;
    Mirror=false;
    Flip=false;
    return img.size()-1;
}

I hate to bump but… nobody knows? :(

On 5/2/07, DARKGuy . <@DARKGuy> wrote:

> [original message and code quoted in full; snipped here as a duplicate
> of the post above]

I'm not on a system where I can test your code, but various parts of
it look right. I would be more suspicious of code that you may have
omitted; for example, does Texture have a destructor which calls
SDL_FreeSurface?

Why doesn't Sprite use a std::vector<Texture> rather than what looks
like parallel arrays of texture ids, widths, heights and SDL_Surfaces?
If you give Texture a proper copy constructor and destructor this will
work (but it may involve a lot of unnecessary surface copying
operations). Alternatively, you could use a smart pointer like
boost::shared_ptr in your Sprite's vector; this approach lets you load
an image once and share it among lots of sprites if you're smart, so it
may be well worth the effort involved.

As a test, try moving the alpha-printing code from main() to
Texture::Load() and see if it works as expected there.

On 5/3/07, DARKGuy . <dark.guy.2008 at gmail.com> wrote:

I hate to bump but… nobody knows? :(

You divide bytes per pixel by 8 in getpixel.

-g

On Thu, 03 May 2007 01:16:05 +0200, DARKGuy . <dark.guy.2008 at gmail.com> wrote:

I hate to bump but… nobody knows? :(

Other than that, the code is line for line the exact getpixel() from
the SDL doc wiki… odd. Did you manually edit that code, DARKGuy, or did
you get it as-is from somewhere else (and if so, where)?

On 5/3/07, Gerry JJ wrote:


You divide bytes per pixel by 8 in getpixel.

That is what the SDL wiki says to do, though, if I get bpp as 8, 16,
24, 32 instead of 1, 2, 4, 8.

On 5/3/07, Gerry JJ wrote:


You divide bytes per pixel by 8 in getpixel.

-g


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

Nope, I didn't edit it at all :(

On 5/3/07, Brian <brian.ripoff at gmail.com> wrote:

Other than that, the code is line for line the exact getpixel() in the
SDL doc wiki… odd. Did you manually edit that code Darkguy, or did
you get it as-is from somewhere else (if so, where)?



Actually, I figured out the problem. Thanks to Gerry JJ for pointing it
out! It was weird, though, since that's what the SDL wiki says to do…
but yes: once I removed the "bpp / 8" part in getpixel(), it worked
perfectly!!

Thanks a BUNCH guys!!! :D :D :D

P.D.: If any of you want to check out my game's forum thread, it's here :P ->
http://ubuntuforums.org/showthread.php?t=427011