Rather than waiting to find out if anybody wanted it, I just went
ahead and did it.
Similar to the Mac/ImageIO-based backend I just submitted, this one
uses the native UIImage to load GIF, PNG, JPEG, TIFF, and BMP.
Unfortunately, ImageIO is not on the iPhone, so I couldn’t just reuse
the code. The CGImage->SDL_Surface conversion is the same code
(copy/paste), which is actually the most complicated function in the
implementation, but everything else has changed.
Also, this required one change to IMG.c. Unfortunately, UIImage lacks
a good stream input capability which makes reading from SDL_RWops
really difficult. My current implementation exhaustively reads the
RWops into a buffer and then sends it to UIImage as an NSData, which I
presume isn’t going to be efficient. So, for the iPhone, I decoupled
the IMG_LoadFile from the RWops and wrote a direct implementation that
uses UIImage’s load from file method.
Mac’s ImageIO implicitly suggests that using the file-loading method
may offer some advantages, so I’m wondering if I should decouple it
there as well. I’m not sure if there is a better solution for the
UIImage/RWops problem.
Right now my only thought is that we could subclass NSData in such a
way that it pulls from an RWops on demand. But this would require some
under-the-hood understanding of how UIImage reads data from NSData. At
best, this would be really intricate. At worst, I don’t know whether
it would actually work or be any different from reading the whole
buffer up front.
I’ve uploaded another repo here:
It started as a clone of the ImageIO branch, so it contains the
ImageIO files too.
Unlike the ImageIO backend, I haven’t tested the UIImage code, so I
need iPhone testers. If people could try it out and let me know how it
goes, I would appreciate it.