Best Practices for Indexed Color in SDL2

I’m attempting to port an old emulator that uses indexed / palettized color from SDL1 to SDL2, and I’ve run into some issues and could use some guidance.

It appears that the preferred method is to create an SDL_Renderer with SDL_CreateRenderer(), then a streaming SDL_Texture (SDL_TEXTUREACCESS_STREAMING) with SDL_CreateTexture(), and finally an 8-bit SDL_Surface with SDL_CreateRGBSurface(). Each frame you write palette indices into SDL_Surface->pixels, lock the texture, convert the surface’s pixels into it, unlock, and then call SDL_RenderClear(), SDL_RenderCopy(), and SDL_RenderPresent(). Right?
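Here’s a rough, untested sketch of the flow I have in mind; the 320x240 size and the placeholder palette are mine, and the index-to-ARGB expansion in the middle is the part I’m least sure about:

```c
#include <SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window   *win = SDL_CreateWindow("emu", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, 320, 240, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture  *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                          SDL_TEXTUREACCESS_STREAMING, 320, 240);

    /* 8-bit indexed surface acting as the emulator's framebuffer */
    SDL_Surface *surf8 = SDL_CreateRGBSurface(0, 320, 240, 8, 0, 0, 0, 0);
    SDL_Color colors[256] = {0};                 /* placeholder palette */
    SDL_SetPaletteColors(surf8->format->palette, colors, 0, 256);

    /* ... emulator writes palette indices into surf8->pixels ... */

    /* per frame: expand indices to 32-bit while the texture is locked */
    void *texpix; int pitch;
    SDL_LockTexture(tex, NULL, &texpix, &pitch);
    const SDL_Color *pal = surf8->format->palette->colors;
    for (int y = 0; y < surf8->h; ++y) {
        const Uint8 *src = (const Uint8 *)surf8->pixels + y * surf8->pitch;
        Uint32 *dst = (Uint32 *)((Uint8 *)texpix + y * pitch);
        for (int x = 0; x < surf8->w; ++x) {
            SDL_Color c = pal[src[x]];
            dst[x] = 0xFF000000u | ((Uint32)c.r << 16)
                                 | ((Uint32)c.g << 8) | c.b;
        }
    }
    SDL_UnlockTexture(tex);

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);

    SDL_Quit();
    return 0;
}
```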

A second method, supposedly, is to pair a 32-bit surface with your texture and blit the 8-bit surface onto the 32-bit one before uploading it. But isn’t SDL supposed to do format conversions automatically when copying surfaces to textures? If so, what is the advantage?
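If I understand this second method, it would look something like the following (reusing tex and surf8 from the sketch above; SDL_CreateRGBSurfaceWithFormat() needs SDL 2.0.5+):

```c
/* 32-bit staging surface matching the texture's pixel format */
SDL_Surface *surf32 = SDL_CreateRGBSurfaceWithFormat(
    0, 320, 240, 32, SDL_PIXELFORMAT_ARGB8888);

/* per frame: the blit does the palette lookup during conversion */
SDL_BlitSurface(surf8, NULL, surf32, NULL);
SDL_UpdateTexture(tex, NULL, surf32->pixels, surf32->pitch);
```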

And how feasible would it be to skip the SDL_Renderer and SDL_Texture system completely by grabbing the default window surface using SDL_GetWindowSurface() and then blitting an 8-bit surface to it? And why can’t I pass a custom surface bit depth or a pointer to another surface when I call SDL_CreateWindow()?
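In other words, something like the old SDL1 pattern (again reusing surf8 from above):

```c
/* no renderer or texture at all -- just the window's own surface,
 * which as I understand it can't be mixed with a renderer on the
 * same window */
SDL_Surface *screen = SDL_GetWindowSurface(win);
SDL_BlitSurface(surf8, NULL, screen, NULL);  /* 8-bit -> screen format */
SDL_UpdateWindowSurface(win);                /* roughly SDL1's SDL_Flip() */
```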

Lastly, I notice that SDL_Surface->pixels is a void * rather than a Uint32 *. Is this a legacy SDL1 hold-over, or is SDL2 smart enough to choose between 8/16/32-bit unsigned types when allocating the pixel array? My experience is that no matter what bit depth I feed SDL_CreateRGBSurface(), it seems to allocate Uint32s: if I create an 8-bit surface and then do memset(surface->pixels, bData, surface->w * surface->h * sizeof(Uint8)), only a quarter of the surface is ever updated.
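For reference, here’s the kind of per-pixel access I expected to work on a true 8-bit surface: one byte per pixel, with rows walked by pitch in case of row padding (bData is just my test index value):

```c
/* fill an 8-bit surface with the palette index bData, row by row,
 * using pitch rather than w * sizeof(byte) for the row stride */
SDL_LockSurface(surf8);
for (int y = 0; y < surf8->h; ++y)
    SDL_memset((Uint8 *)surf8->pixels + y * surf8->pitch, bData, surf8->w);
SDL_UnlockSurface(surf8);
```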