SDL_RenderGeometry and strict aliasing

Am I the only one not using and passing around SDL_Color in the parts of their program that don’t touch SDL? I don’t really care about big-endian systems, since both x86 and ARM are little-endian (yes, ARM can do big-endian, etc, etc, etc, but nobody uses it in big-endian mode).

Like, if I’m not storing color as a vec4 of floats, then (e.g. when pixel plotting) it’s packed into a uint32_t.

In my code, colors are stored in a struct of float components when working with arbitrary / high precision, and a struct of uint8 components when working with 8-bit-per-component precision. I have no need to encode the whole color as a 32-bit integer anywhere in my codebase.
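For illustration, something like the following, assuming float components in [0, 1] (the struct and function names here are made up, not from SDL or the poster's code):

```c
#include <stdint.h>

/* Hypothetical names; one struct for high precision,
   one for 8 bits per component. */
typedef struct { float r, g, b, a; } ColorF;
typedef struct { uint8_t r, g, b, a; } Color8;

/* Convert a float component in [0, 1] to 8-bit, with clamping
   and round-to-nearest. */
static uint8_t to_u8(float c) {
    if (c <= 0.0f) return 0;
    if (c >= 1.0f) return 255;
    return (uint8_t)(c * 255.0f + 0.5f);
}

static Color8 colorf_to_color8(ColorF c) {
    Color8 out = { to_u8(c.r), to_u8(c.g), to_u8(c.b), to_u8(c.a) };
    return out;
}
```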

I don’t use SDL’s own SDL_Color structure, but my own struct is trivially castable to it.
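For what it’s worth, since the thread is about strict aliasing: a pointer cast between two distinct struct types formally violates strict aliasing even when the layouts match, but a memcpy of the value is well defined and compilers optimize it to the same code. A sketch, with a made-up struct name and a stand-in for SDL_Color (the real SDL_Color has the same four Uint8 members r, g, b, a, in this order) so it compiles without SDL headers:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical user-side color struct. */
typedef struct { uint8_t r, g, b, a; } MyColor;

/* Stand-in with SDL_Color's layout: four 8-bit members r, g, b, a. */
typedef struct { uint8_t r, g, b, a; } SDL_Color_;

/* "Trivially castable" done without aliasing violations:
   copy the bytes of one value into the other. */
static SDL_Color_ to_sdl_color(MyColor c) {
    SDL_Color_ out;
    memcpy(&out, &c, sizeof out);
    return out;
}
```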

Even if the not-SDL-specific code packs the color into a uint32_t, how likely is it that the generic code uses exactly the format that matches SDL_Color, depending on endianness (if you care about it)?
On little-endian that uint32_t would be in (the equivalent of) SDL_PIXELFORMAT_ABGR8888 format, which seems a bit obscure to me when storing the color in an integer (unless you’re explicitly targeting SDL2’s render API, in which case you might as well use its type).
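To spell that out with a sketch (the helper name is hypothetical): a common generic-code convention packs R into the low byte. SDL names its packed formats from the most significant byte down, so that integer reads A-B-G-R, i.e. SDL_PIXELFORMAT_ABGR8888, and on a little-endian machine its bytes sit in memory as R, G, B, A, the same order as SDL_Color's members.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical packing helper: R in the low byte, A in the high byte.
   Read as a 32-bit value this is 0xAABBGGRR, which is what SDL calls
   SDL_PIXELFORMAT_ABGR8888. */
static uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return (uint32_t)r | ((uint32_t)g << 8) |
           ((uint32_t)b << 16) | ((uint32_t)a << 24);
}
```

On little-endian, memcpy-ing such a value into a 4-byte array yields {r, g, b, a}; on big-endian the bytes come out reversed, which is exactly the portability question raised above.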