SDL3 Camera

Is there a simple example somewhere of how this works with the camera?

I experimented with SDL_OpenCameraDevice and SDL_AcquireCameraFrame.
I got a camera device, but no surface ever comes back.

SDL_Camera is a pretty new addition to the library.
In the github source code for SDL there is the “test” folder. In that folder is the file testcamera.c.
It’s not a huge file at 274 lines of code, and it shows how to open the camera interface. It is not as simple as just requesting a frame: you have to select between possibly multiple cameras, do a bit of setup, then open the device. In this regard it is similar to opening an audio device from SDL.

This file also uses a callback interface instead of main().
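
If it helps, here’s a rough, untested sketch of that flow boiled down to the essentials. It uses the preview-era names mentioned in this thread (SDL_GetCameraDevices, SDL_OpenCameraDevice, SDL_AcquireCameraFrame), so double-check against your headers if you’re on a newer snapshot:

#include <SDL3/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_CAMERA);

    // Enumerate the available camera devices.
    int count = 0;
    SDL_CameraDeviceID *devices = SDL_GetCameraDevices(&count);
    if (!devices || count == 0) {
        SDL_Log("No camera devices found: %s", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    // Open the first device with its default spec (NULL).
    SDL_Camera *camera = SDL_OpenCameraDevice(devices[0], NULL);
    SDL_free(devices);
    if (!camera) {
        SDL_Log("SDL_OpenCameraDevice failed: %s", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    // Frames arrive asynchronously (and the user may first have to grant
    // permission), so SDL_AcquireCameraFrame() returning NULL just means
    // "no new frame yet" -- keep polling.
    Uint64 timestampNS = 0;
    SDL_Surface *frame = NULL;
    while (!frame) {
        frame = SDL_AcquireCameraFrame(camera, &timestampNS);
        SDL_Delay(10);
    }
    SDL_Log("Got a %d x %d frame", frame->w, frame->h);
    SDL_ReleaseCameraFrame(camera, frame);

    SDL_CloseCamera(camera);
    SDL_Quit();
    return 0;
}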

If you’re on Linux you might suffer from the following bug:

In the github source code for SDL there is the “test” folder. In that folder is the file testcamera.c .

Is it possible to compile this example directly?
I tried the following, but “ld” gives me lots of errors.

gcc testcamera.c -lSDL3 -lSDL3_test -o test

It’s set up to be built at the same time as the whole library using cmake then make, which means you need to download the whole SDL source and create an external build folder.
So, from a build folder that sits next to the SDL source folder, run this:

cmake -S ../SDL -B . -DSDL_TESTS=ON
make

After you run that, the build folder will have a bunch of files and folders in it, including a new “test” folder. The testcamera program can be found there.

If you use SDL 3.1.0 (Preview Release), you need to use an older version of testcamera.c.

Thanks everyone, my camera is delivering pictures now.

@GuildedDoughnut
I didn’t know about this; there are more interesting things in there. Is there a description of it somewhere?
You can leave out the “-B .” since it refers to the current folder anyway.

@Peter87
I took the latest version and it worked.

testcamera.c doesn’t work if the camera’s native format isn’t supported by the renderer. In my case, the camera only supports UYVY. Querying the renderer with SDL_GetRendererInfo(), it doesn’t show this format as supported, but the camera texture is created successfully anyway, and the screen shows garbage.

Shouldn’t it be a bug that a texture is being created with a format the renderer doesn’t support? The docs say SDL_CreateTexture() will fail if the texture format isn’t supported:

Returns a pointer to the created texture or NULL if no rendering context was active, the format was unsupported, or the width or height were out of range; call SDL_GetError() for more information.

I agree, that should not run.
If you haven’t already, do you mind running SDL_GetCameraDeviceSupportedFormats() to double check that SDL is correctly interpreting the camera’s format?
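
Something along these lines should do it (a rough sketch; I’m assuming the preview-era signature and the SDL_CameraSpec field names here, so adjust to whatever your header actually says):

// Hypothetical helper: print every native format a camera device reports.
static void dump_camera_formats(SDL_CameraDeviceID devid)
{
    int count = 0;
    SDL_CameraSpec *specs = SDL_GetCameraDeviceSupportedFormats(devid, &count);
    if (!specs) {
        SDL_Log("SDL_GetCameraDeviceSupportedFormats failed: %s", SDL_GetError());
        return;
    }
    for (int i = 0; i < count; i++) {
        SDL_Log("%d x %d : %s",
                specs[i].width, specs[i].height,
                SDL_GetPixelFormatName(specs[i].format));
    }
    SDL_free(specs);
}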

On another note, thinking about your camera:

This is a shot in the dark, but in SDL/src/video/SDL_yuv.c I do see some internal functions for converting UYVY pixel data to similar formats such as YUY2.
SDL_video.h is included in SDL_camera.c, so I am curious if there’s possibly some logic that will do the data conversion if you were to request a YUY2 destination texture instead… if that’s on the renderer’s supported formats list.
I’ll try to dig deeper; can you share the list reported by SDL_GetRendererInfo()?

Using SDL_GetCameraDeviceSupportedFormats() is how I found out the camera only supports UYVY.

But here:

Supported texture formats for renderer: metal
SDL_PIXELFORMAT_ARGB8888
SDL_PIXELFORMAT_ABGR8888
SDL_PIXELFORMAT_XBGR2101010
SDL_PIXELFORMAT_RGBA64_FLOAT
SDL_PIXELFORMAT_RGBA128_FLOAT
SDL_PIXELFORMAT_YV12
SDL_PIXELFORMAT_IYUV
SDL_PIXELFORMAT_NV12
SDL_PIXELFORMAT_NV21
SDL_PIXELFORMAT_P010

Supported native camera formats:
1920 x 1080 : 1 / 7: SDL_PIXELFORMAT_UYVY
1920 x 1080 : 1 / 6: SDL_PIXELFORMAT_UYVY
1280 x 1024 : 1 / 7: SDL_PIXELFORMAT_UYVY
1280 x 1024 : 1 / 6: SDL_PIXELFORMAT_UYVY
1280 x 720 : 1 / 10: SDL_PIXELFORMAT_UYVY
1280 x 720 : 1 / 9: SDL_PIXELFORMAT_UYVY
1024 x 768 : 1 / 7: SDL_PIXELFORMAT_UYVY
1024 x 768 : 1 / 6: SDL_PIXELFORMAT_UYVY
800 x 600 : 1 / 20: SDL_PIXELFORMAT_UYVY
640 x 480 : 1 / 31: SDL_PIXELFORMAT_UYVY
640 x 480 : 1 / 30: SDL_PIXELFORMAT_UYVY
320 x 240 : 1 / 31: SDL_PIXELFORMAT_UYVY
320 x 240 : 1 / 30: SDL_PIXELFORMAT_UYVY

I don’t know the full capabilities, but we’ve got SDL3/SDL_ConvertSurface - SDL Wiki and its sibling SDL3/SDL_ConvertPixels - SDL Wiki (the latter might be preferred if you are using a streaming texture).

In testcamera.c, in the SDL_AppIterate function:

// We seem to be grabbing a frame from the camera and slamming it into a surface like this:
SDL_Surface *frame_next = camera ? SDL_AcquireCameraFrame(camera, &timestampNS) : NULL;
if(frame_current)
{
    SDL_ReleaseCameraFrame(camera, frame_current);
}
frame_current = frame_next;

//Where frame_current is a global SDL_Surface pointer.

//Next that surface is being used to update the texture
SDL_UpdateTexture(texture, NULL, frame_current->pixels, frame_current->pitch);

So maybe try one of the above SDL_ConvertX functions to swap UYVY to ARGB8888 in between?
Sorry, I did read the source on the Convert functions, but it just wasn’t clear to me if they could handle that type of conversion or not.
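
For the streaming-texture path, the conversion step might look roughly like this (a hedged sketch, not tested; it assumes the texture was created as ARGB8888, and a real version should allocate the conversion buffer once and reuse it rather than every frame):

// Hypothetical conversion step in SDL_AppIterate: convert the acquired
// UYVY frame to ARGB8888 before uploading it to an ARGB8888 texture.
if (frame_current) {
    const int w = frame_current->w;
    const int h = frame_current->h;
    const int dst_pitch = w * 4;  // ARGB8888 = 4 bytes per pixel
    void *argb = SDL_malloc((size_t)dst_pitch * h);
    if (argb) {
        if (SDL_ConvertPixels(w, h,
                              frame_current->format->format,  // UYVY from the camera
                              frame_current->pixels, frame_current->pitch,
                              SDL_PIXELFORMAT_ARGB8888,
                              argb, dst_pitch) == 0) {
            SDL_UpdateTexture(texture, NULL, argb, dst_pitch);
        }
        SDL_free(argb);
    }
}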

Here’s a very light test of the surface conversion function:

#include <SDL3/SDL.h>
#include <cstdlib>

int main()
{
	SDL_Init(SDL_INIT_VIDEO);

	SDL_Surface * surf = SDL_CreateSurface(800, 800, SDL_PIXELFORMAT_UYVY);
	if(!surf)
	{
		SDL_Log("Could not create UYVY surface.");
		SDL_Quit();
		exit(0);
	}

	if(surf->format->format == SDL_PIXELFORMAT_UYVY)
	{
		SDL_Log("Created surface with UYVY format");
	}

	SDL_PixelFormat *want = SDL_CreatePixelFormat(SDL_PIXELFORMAT_ARGB8888);
	SDL_Surface * argbSurf = SDL_ConvertSurface(surf, want);
	SDL_DestroyPixelFormat(want);
	SDL_DestroySurface(surf);

	if(!argbSurf)
	{
		SDL_Log("SDL_ConvertSurface to ARGB8888 failed: %s", SDL_GetError());
		SDL_Quit();
		exit(0);
	}

	if(argbSurf->format->format == SDL_PIXELFORMAT_ARGB8888)
	{
		SDL_Log("Apparent success!");
	}

	SDL_DestroySurface(argbSurf);
	SDL_Quit();
}

So the issue isn’t that I don’t know how to get SDL to convert from UYVY to RGBA or whatever, it’s that testcamera.c requests SDL 3 create a texture with a pixel format the renderer doesn’t support, and instead of the texture creation failing, it succeeds.

And since it’s a format the renderer doesn’t support, you get garbage on the screen.

As to pixel format conversion, the camera API lets you pick a pixel format and then it’ll convert the camera’s native format to that for you (modifying testcamera.c to request RGBA32 instead of the native format, with an RGBA32 texture to match, works fine and I can see images from the camera).
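
For reference, the modification is roughly this (a sketch from memory; device_id and renderer come from the surrounding testcamera.c code, and the SDL_CameraSpec field names follow the preview headers, so check yours):

// Ask the camera API for RGBA32 frames; SDL converts from the camera's
// native UYVY internally.
SDL_CameraSpec spec;
SDL_zero(spec);
spec.format = SDL_PIXELFORMAT_RGBA32;
spec.width = 1280;   // pick one of the sizes the camera reported
spec.height = 720;

SDL_Camera *camera = SDL_OpenCameraDevice(device_id, &spec);

// Matching texture format, so SDL_UpdateTexture() gets data it understands.
SDL_Texture *texture = SDL_CreateTexture(renderer,
                                         SDL_PIXELFORMAT_RGBA32,
                                         SDL_TEXTUREACCESS_STREAMING,
                                         spec.width, spec.height);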

I thought the camera was not working with the renderer and that was something I could help with… Well, at least I now know how to convert to/from YUV colorspace, so I’m not at a total loss for that.

About the actual issue, it looks like the default behavior for SDL_CreateTexture in this situation is to return a texture with “Closest supported format”, so you might have to query the returned texture to see what you are actually getting.
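
Something like this is what I mean (a rough sketch; the exact parameter types for SDL_QueryTexture() may differ between SDL3 snapshots, and you can pass NULL for the fields you don’t need):

// Ask the texture what format it believes it has.
SDL_PixelFormatEnum fmt;
if (SDL_QueryTexture(texture, &fmt, NULL, NULL, NULL) == 0) {
    SDL_Log("Texture reports format %s", SDL_GetPixelFormatName(fmt));
}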

Now that I know this, I don’t like the idea that I now have to double check that I’m getting the format that I specifically asked for. I agree with you that it would be proper to return NULL at this point.

On the other hand, I do think it’s understandable that it is set up this way, since it’s likely most users just want to load PNGs onto the screen and not really care about the underlying texture’s format. It’s kind of in line with their desire for SDL_Texture to act like a black box.

An issue/bug report probably should be logged over at github.

Looking at the code, even when SDL picks a different pixel format, it saves the requested format as the texture’s format instead of the one it picked for you. So calling SDL_QueryTexture() tells you the format you asked for, not the one you actually got!

The assumption that the requested format should just be a suggestion falls apart when you need to change the texture’s contents. It’s why the camera feed is showing garbage on the screen: SDL has silently chosen a different format, but the texture’s contents are being replaced with pixel data in the camera’s native pixel format.

SDL_CreateTexture()'s whole reason for existing is to create an empty texture whose contents are supplied later. With no way of knowing what pixel format you actually got, you cannot guarantee you’re providing pixel data in the matching pixel format.


Ultimately it comes down to this:

There are functions to detect the capabilities of the system. There are functions that could have converted the data to the system’s supported formats.
The code requested a format that is not supported by the system.
It got a texture back in that format, and the data is successfully loaded into that texture.
Now the system cannot display that data in a meaningful manner.
Not great, but ultimately a preventable situation on the programmer’s part.

But, what if someone is working on a system with a renderer that does not support YUV, but they want to work in the YUV colorspace? They can still create the texture/surface as a container, work on the data in YUV, and then convert that data back to ARGB8888.

So my question is, is that data from the camera actually corrupted by being held in an unsupported format, or is there still a path to convert it back to ARGB from that texture?

Perhaps the error flag should instead be raised when the code asks to render an unsupported texture?
But do we really want to add more logic to the SDL_RenderTexture function?

Please don’t mind me, I’m just being devil’s advocate to try and sound this out.

Are you 100% sure on this? SDL_QueryTexture() reports texture->format, which is what I thought I was seeing being loaded with closest_format. Um… no, you’re probably right.
Sorry, it was this bit I was thinking of, not closest_format:

        if (SDL_ISPIXELFORMAT_FOURCC(texture->format)) {
#if SDL_HAVE_YUV
            texture->yuv = SDL_SW_CreateYUVTexture(format, w, h);
#else
            SDL_SetError("SDL not built with YUV support");
#endif
            if (!texture->yuv) {
                SDL_DestroyTexture(texture);
                return NULL;
            }
        } else if (access == SDL_TEXTUREACCESS_STREAMING) {
            /* The pitch is 4 byte aligned */
            texture->pitch = (((w * SDL_BYTESPERPIXEL(format)) + 3) & ~3);
            texture->pixels = SDL_calloc(1, (size_t)texture->pitch * h);
            if (!texture->pixels) {
                SDL_DestroyTexture(texture);
                return NULL;
            }
        }

So in your situation, UYVY is a fourcc format, so are you getting the format that you requested?

No, it didn’t get a texture back in that format. That’s the whole problem. It got a texture back in a different format, and has no way to determine what that different format is. And since it can’t determine what that format is, it can’t do any pixel format conversion.

IDK what you’re getting at here. According to the camera API documentation, on some systems there’s no way to determine what pixel formats the camera supports until you actually open the camera device.

The data from the camera isn’t getting corrupted. It’s being misinterpreted by the GPU, because the code is uploading pixel data in the format it thinks the texture was created with.

I’m looking at the code for SDL_CreateTextureWithProperties() right now (which is what SDL_CreateTexture() calls to do the actual work).

Nowhere is closest_format assigned back to texture->format.

I also verified this by testing. Adding a call to SDL_QueryTexture() in testcamera.c after the texture is created, it reports the format as UYVY.

Sorry, I got the final edit in above just as you posted. I edit my posts too much.

Soooo…
If the requested pixel format is unsupported, it creates one in a format that is supported by the renderer and stores it in texture->native, and if it’s a fourcc format then it also creates a CPU-side staging YUV texture in the requested format, saved to texture->yuv.

Looking at SDL_UpdateTexture(), it looks like what’s supposed to happen is that if texture->yuv isn’t NULL then it assumes the given fourcc format isn’t supported by the renderer and does a conversion from texture->format (which the incoming pixel data is in) to whatever format texture->native is in, and then uploads that to texture->native.

But apparently this isn’t working, because the camera feed is shown as junk. The actual conversion routines work, because creating the texture as RGBA32, and asking the camera API to give me RGBA32 data instead of the camera’s native format, works just fine and the camera feed is displayed successfully.

Maybe this will help you. I have tested various webcams and also a camera, which I access via the v4l2 library.
One webcam strictly requires V4L2_PIX_FMT_YUYV as the format in its ioctl.
With the other webcams, and also with the camera, I can request whatever format I want; the same thing always comes out of the device anyway.
So almost every device reacts differently.