[…]
Alter the percentages accordingly for other shaped
pieces, e.g.:
|
| |
| |
|/
I think the main thing I was unsure of was how to
decide which pixels or vertical slices you should omit
from the image (if you're scaling down, say). I'll
check out your example anyway.
That’s the really, REALLY hard part, and is the reason why basically
the only way to create really good looking low resolution graphics is
(still) to do it by hand - that is, Pixel Art. Though you can come up
with amazingly smart algorithms, the problem is that “good” and “bad”
is decided, somewhat subjectively, by the (human) viewer.
In short, the best scaling method is the one that produces a result
that is most true to the original image.
The bad news is that there is no single, catch-all, truly correct
algorithm, or even a single correct definition of the problem space.
For example, if you’re doing distance/perspective transforms, you
play by different rules than when making tiny, fixed-size versions
of icons. The latter allows you to make details look better by
modifying the contents of the image (as in reducing the number of
keys when scaling down an image of a keyboard), whereas the former
does not.
Anyway, the simplest way is to calculate the fractional source
coordinate corresponding to each destination pixel, and just grab the
nearest pixel from the original image.
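As a rough sketch of that nearest-pixel approach (function and variable names are just illustrative, not from SDL or any other API):

```c
#include <stdint.h>

/* Scale a src_w x src_h image to dst_w x dst_h by computing, for each
 * destination pixel, the corresponding (truncated) fractional source
 * coordinate and grabbing the nearest source pixel. */
static void scale_nearest(const uint32_t *src, int src_w, int src_h,
                          uint32_t *dst, int dst_w, int dst_h)
{
    for (int y = 0; y < dst_h; ++y) {
        int sy = y * src_h / dst_h;     /* nearest source row */
        for (int x = 0; x < dst_w; ++x) {
            int sx = x * src_w / dst_w; /* nearest source column */
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
}
```

Integer math is enough here; the multiply-then-divide keeps the fractional source coordinate implicit without any floating point.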
If you’ve got more cycles to spare, you interpolate between the two
nearest pixels, i.e. Linear Interpolation. You can use higher order
filters, but beyond cubic (considering the four nearest pixels),
you’ll hardly be able to tell the difference under normal
circumstances.
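For one dimension (say, resampling a single grayscale scanline), linear interpolation looks something like this - again, a hedged sketch with made-up names, not anyone's actual implementation:

```c
#include <stdint.h>

/* Horizontally resample one grayscale scanline, blending the two
 * nearest source pixels by the fractional part of the source coord. */
static void scale_line_linear(const uint8_t *src, int src_w,
                              uint8_t *dst, int dst_w)
{
    for (int x = 0; x < dst_w; ++x) {
        float fx = (float)x * src_w / dst_w;          /* source coord */
        int x0 = (int)fx;                             /* left pixel   */
        int x1 = x0 + 1 < src_w ? x0 + 1 : src_w - 1; /* clamp edge   */
        float t = fx - x0;                            /* blend weight */
        dst[x] = (uint8_t)(src[x0] * (1.0f - t) + src[x1] * t + 0.5f);
    }
}
```

For a full 2D image you do the same thing in both axes (bilinear), blending the four nearest pixels. In a real inner loop you'd typically use fixed point rather than floats.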
When scaling images down so much that interpolation filter windows do
not overlap sufficiently (i.e. some source pixels are skipped and not
even considered by the interpolation filters), you need to decimate
the image before interpolating, i.e. Mipmapping. Usually, you’ll
pre-render down-filtered versions of half size, quarter size, eighth
size etc, until you’ve covered the scaling range you need.
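Each mipmap level can be built from the previous one by averaging 2x2 blocks (a box filter). A minimal sketch, assuming even dimensions and grayscale pixels:

```c
#include <stdint.h>

/* Build the next mipmap level by averaging each 2x2 block of the
 * source, halving both dimensions. dst must hold (w/2)*(h/2) pixels. */
static void mip_halve(const uint8_t *src, int w, int h, uint8_t *dst)
{
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            int sum = src[(2 * y)     * w + 2 * x]
                    + src[(2 * y)     * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = (uint8_t)((sum + 2) / 4); /* rounded */
        }
    }
}
```

Apply it repeatedly to get the half, quarter, eighth size chain, then interpolate from whichever level is closest to the target scale (or blend between two adjacent levels, as trilinear filtering does).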
Now, pure interpolation doesn’t handle perspective transforms very
well. When looking at an image at an angle, the smallest projected
dimension has to decide what mipmap and filter to use (or you’d get
“nearest pixel” type of artifacts), impacting the sharpness of the
bigger dimension. Anisotropic Filtering is the cure, and basically
means you deal with the “big” and “small” dimensions separately, to
avoid filtering away detail that would fit without artifacts.
Anisotropic filtering and other general computer graphics discussions
are also off topic for this list, so I think I’ll stop here. 
[…]
I have looked at OpenGL but I feel that it’s probably
overkill for this case given that the 3D environment
is fairly simple and I would like to be able to offer
the port to PDA users.
Makes sense. Though I rather like the idea of making use of
accelerated OpenGL whenever it’s available (hint: glSDL), the idea of
totally depending on OpenGL makes me feel uneasy. Only depend on
OpenGL when doing without it would be too much work and/or too slow.
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
— http://olofson.net — http://www.reologica.se —
On Tuesday 15 November 2005 14.33, Guilherme De Sousa wrote: