Hi,

So, I’m working on a linear resampling program. It involves taking weighted samples from a surface’s pixels. My basic issue is scaling a sample taken in Uint32 format.

A Uint32 sample, as I understand it, is a single number that can store the same range of possible colors as four Uint8 samples. The result is that the entire color of a single pixel can be stored as one Uint32 number:

Uint32 blah = Red*256*256*256 + Green*256*256 + Blue*256 + Alpha*1
or
Uint32 blah = Blue*256*256*256 + Green*256*256 + Red*256 + Alpha*1
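For concreteness, here is a sketch of the first layout in C (using `uint32_t` in place of SDL's Uint32; the function name and the RGBA channel order are just my assumptions):

```c
#include <stdint.h>

/* Pack four 8-bit channels into one 32-bit value, RGBA order:
   Red*256*256*256 + Green*256*256 + Blue*256 + Alpha*1. */
uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  | (uint32_t)a;
}
```

For example, pack_rgba(1, 2, 3, 4) gives 0x01020304, with one channel per byte.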

First, is that correct? Second, because of the distributive property, it should be possible to scale the color simply by scaling the number:

0.2*blah == 0.2*Red*256*256*256 + 0.2*Green*256*256 + 0.2*Blue*256 + 0.2*Alpha*1
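In code, the scaling I’m attempting looks roughly like this (again a sketch, with `uint32_t` standing in for Uint32 and `scale_packed` a name I made up):

```c
#include <stdint.h>

/* Attempt: scale all four channels at once by multiplying
   the whole packed 32-bit value by a single factor. */
uint32_t scale_packed(uint32_t pixel, double factor)
{
    return (uint32_t)(factor * pixel);
}
```

For instance, scale_packed(0x01000000, 0.2) yields 0x00333333 rather than a scaled red channel, with the result spilling into the lower bytes.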

The results I’m getting are not consistent with this model. The entire color changes, not just the intensity of each channel. I’m probably doing something wrong. But what? And is there a better, preferred way to scale samples?

Thanks,

Ian