SDL on Unified Platforms

Any ideas would be welcome; I am mostly just trying to get a good
discussion going.

I have been going on about Wayland on desktop for a while now, and I am
sure some people have grown quite tired of it. I feel 2.0.4 has some
APIs which are causing real conflicts between developers, as well as
some legacy APIs which need to be updated for a more unified and mobile
world.

This is not just a Wayland vs. X11 issue. It is common to all of the
new unified platform APIs and protocols, where the same application may
be run on a phone, tablet, console, PC, or some new display like
HoloLens or ValveVR. Mir, Android, iOS, and also the new WinRT and I
believe UWP APIs all have similar issues. It is my firm belief that a
user should be able to have one binary and use it on any device (with
the same OS), with the application adapting to whatever interface is
available. It may not be perfect, but Microsoft's UWP approach is
exactly the direction we should be heading.

I personally believe some of what we need to do to support these
future platforms is:

  1. Implement a drag-and-drop API for everything that would follow the
    cursor position. This also allows an application to communicate with
    itself. Using MIME types here is pretty extensible, and internally
    they can be converted to platform-specific drag-and-drop types.
    Firefox works similarly for dragging tabs between windows. I have
    added an example API here: https://bugzilla.libsdl.org/show_bug.cgi?id=2946

  2. Add parent/child window support. Most platforms can support this
    in some form, be it through subsurfaces, subviews, or having SDL
    composite the window's frame buffer itself somehow. On some
    platforms child windows (e.g. tooltips) are probably more available
    than multiple windows. https://bugzilla.libsdl.org/show_bug.cgi?id=2918

  3. Deprecate positions for top-level windows and all cursor-warping
    APIs. If an application really needs to warp a cursor, it can draw a
    software cursor and live with the consequences. We could maybe allow
    some hints for setting the cursor position when ungrabbing a cursor
    or disabling SDL_SetRelativeMouseMode (which is how Wayland will
    work), but obviously this would not be cross-platform.

  4. Allow some way for an application to get information about the
    capabilities of a platform so it can gracefully fall back if certain
    features, like multiple windows, are not supported. This also allows
    an application to pick different CSS, QML, XAML, etc. and choose a
    different GUI based on the device type.
    https://bugzilla.libsdl.org/show_bug.cgi?id=3041

  5. Allow applications to specify certain requirements, such as a
    fixed aspect ratio and minimum/maximum and default sizes,
    before/during window creation (rather than after). Encourage
    applications to always use SDL_WINDOW_RESIZABLE. This means
    developers do not need to worry about a window fitting/placing on a
    display, because it always will. See some of my comments here:
    https://bugzilla.libsdl.org/show_bug.cgi?id=2530

  6. Unify all input into one event system, similar to Android's
    MotionEvent or WinRT's CorePointer. This means developers cannot
    forget to implement touch support for their application, and on
    systems such as WinRT, where everything is a "Pointer", it can be
    supported easily.

  7. Add a system for notifications or toasts which would use a
    window-manager-specific API (maybe D-Bus on Linux, etc.) or fall
    back to something similar to an SDL message box if none is
    available.
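To make the unified-input idea in point 6 concrete, here is a minimal
sketch of what a single pointer event type might look like. Every name
here (PointerEvent, PointerSource, handle_pointer) is hypothetical and
invented for this illustration; it is not an existing SDL API:

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical unified pointer event: one struct for mouse, touch and
   pen, discriminated by a source field (similar in spirit to Android's
   MotionEvent or WinRT's pointer events). */
typedef enum {
    POINTER_SOURCE_MOUSE,
    POINTER_SOURCE_TOUCH,
    POINTER_SOURCE_PEN
} PointerSource;

typedef struct {
    PointerSource source;  /* which kind of device generated the event */
    uint32_t pointer_id;   /* stable id, so multi-touch "just works" */
    float x, y;            /* position in window coordinates */
    float pressure;        /* 0.0 for devices without pressure */
} PointerEvent;

/* The application handles every input device with one code path, so
   touch support cannot be forgotten: a mouse click and a touch tap hit
   the same branch. */
static int handle_pointer(const PointerEvent *ev)
{
    return ev->x >= 0.0f && ev->y >= 0.0f;
}
```

The source field is still there for the rare case where an app wants
device-specific behaviour, but nothing forces it to care.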

Counting fail… been a long week.

On Mon, Jul 20, 2015 at 8:31 PM, x414e54 <@x414e54> wrote:

<snip: full original message quoted above.>

  1. Deprecate positions for top-level windows

This should automatically try to place the window at the position it
had when the app was last closed. This is IMHO the only use for
top-level window positioning anyway.
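That restore-on-relaunch behaviour could be sketched roughly like this
(all names invented for the illustration; a real version would persist
the placement to disk and live in the backend, not the app):

```c
#include <assert.h>

/* Sketch: remember the last window placement internally, since the
   proposal removes explicit positioning from the public API. */
typedef struct { int x, y, w, h; } Placement;

static Placement saved;   /* would be persisted to disk in practice */
static int have_saved = 0;

/* Called by the backend when a window is closed. */
static void on_window_close(Placement p)
{
    saved = p;
    have_saved = 1;
}

/* On the next launch the backend, not the app, picks the position:
   the remembered placement if there is one, otherwise a default. */
static Placement initial_placement(Placement fallback)
{
    return have_saved ? saved : fallback;
}
```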


Any ideas would be good as I am mostly just trying to get a good
discussion going.

<snip: We’re not just Win95 anymore.>

I personally believe some of what we need to do to support these
future platforms is:

  1. Implement a drag and drop API for everything that would follow the
    cursor position. This allows an application to communicate to itself
    as well. Using mime types here is pretty extensible and internally can
    convert to platform specific drag and drop types. Firefox works
    similarly for dragging tabs between windows. I have added an
    example API here https://bugzilla.libsdl.org/show_bug.cgi?id=2946

How available are flexible drag-and-drop APIs? I know there are some,
but I don't know if they cover all the major platforms.

How about second-tier-or-below platforms, like Haiku, MorphOS and/or
AROS, or QNX?

  2. Add parent/child window support. Most platforms can support this
    in some form, be it through subsurfaces, subviews, or having SDL
    composite the window's frame buffer itself somehow. On some
    platforms child windows (e.g. tooltips) are probably more available
    than multiple windows. https://bugzilla.libsdl.org/show_bug.cgi?id=2918

This belongs in a child library. This is widgeting, and should be
dealt with accordingly. The COM/XPCOM/etc. model is a bit obnoxious,
but the only major downside is that simple reference counting doesn't
deal with reference loops easily. Reference loops can be handled with a
loop-safe garbage-collector scheme, which can be written to run only
when the program calls garbage-collector functions (I've worked out a
model for this before, and probably at least a few others have as
well).

I’d thus suggest a COM-ish model, probably something like this:

interface canvas;

interface view
{
    int paint_to_canvas
    (
        canvas *canv,
        int dest_x,
        int dest_y,
        int dest_h,
        int dest_w,

        int src_x,
        int src_y,
        int src_h,
        int src_w
    );
};

interface canvas
{
    int get_size( int *h, int *w );

    int apply_texture
    (
        pixel *array,
        size_t arr_h,
        size_t arr_w,

        int dest_x,
        int dest_y,
        int dest_h,
        int dest_w
    );
};

interface aperture
{
    int move_loc( int x, int y );
    int set_size( int h, int w );
};

paint_to_canvas() probably shouldn’t have the dest_* arguments, but
other than that I think this is ROUGHLY the right system. The
differentiation between view and canvas is essentially to allow a
single image source to be directed to multiple destinations (I
originally had a particular use case that I was thinking about).
Having the canvas do the actual draw allows it to apply compositing as
it sees fit, which is useful for implementing multiple styles of
compositors (alpha/no alpha, stacking/tiling, whatever else).
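As a rough C rendition of that view/canvas split (plain function
pointers standing in for a real COM vtable; every name here is invented
for the sketch), showing one image source feeding two destinations:

```c
#include <assert.h>

/* A canvas owns the actual compositing step; a view is one image
   source that can be pointed at any canvas it is handed. */
typedef struct canvas canvas;
struct canvas {
    int painted;                 /* how many draws landed here */
    void (*apply)(canvas *self); /* canvas-specific compositing */
};

/* A trivial "compositor": just count the draw. A real one would blend
   with or without alpha, tile, stack, etc., as it sees fit. */
static void plain_apply(canvas *self)
{
    self->painted += 1;
}

typedef struct {
    /* The view describes itself to whatever canvas it is given, so a
       single source can be directed to multiple destinations. */
    void (*paint_to_canvas)(canvas *canv);
} view;

static void my_paint(canvas *canv)
{
    canv->apply(canv);
}
```

Because the canvas performs the final draw, two canvases with different
apply functions can composite the same view in two different styles.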

A better version would be built around run-time construction of a
widget hierarchy. In such a system, messages should ideally be handed
to parent-most widgets (by the app, whenever it feels like), which in
turn should pass them to the relevant child widgets. The last time I
scratched out a concept for one of these, I actually designed it to
use a message type to trigger widget redraws (I think it carried a
pointer to the canvas equivalent, or perhaps the reply carried the
pixel array data).

Here’s some untested code that I wrote for my most recent stab at it:

#include <stdint.h>
/* For size_t. */
#include <stdlib.h>

typedef struct SDL_EventNode
{
    struct SDL_EventNode *next;
    /* We might want to do IPC with events in the future. */
    size_t size;
    /* In case we want to “preserve” this. */
    uintptr_t refcount;

    SDL_Event ev;

} SDL_EventNode;

typedef struct SDL_EventSet
{
    SDL_EventNode *first, *last;

} SDL_EventSet;

int SDLEventSet_InitSet( SDL_EventSet *evs )
{
    if( evs )
    {
        evs->first = 0;
        evs->last = 0;

        return( 0 );
    }

    return( -1 );
}

int SDLEventSet_PushBack( SDL_EventSet *evs, SDL_EventNode *ev )
{
    if( evs && ev )
    {
        if( ( evs->first == 0 ) != ( evs->last == 0 ) )
        {
            return( -2 );
        }

        if( evs->first )
        {
            evs->last->next = ev;
            ev->next = 0;
            evs->last = ev;

        } else {

            evs->first = ev;
            evs->last = ev;
            ev->next = 0;
        }

        return( 0 );
    }

    return( -1 );
}
int SDLEventSet_PopFront( SDL_EventSet *evs, SDL_EventNode **ev )
{
    if( evs && ev )
    {
        if( ( evs->first == 0 ) != ( evs->last == 0 ) )
        {
            return( -2 );
        }

        *ev = evs->first;
        if( evs->first != evs->last )
        {
            evs->first = evs->first->next;

        } else {

            evs->first = 0;
            evs->last = 0;
        }
        if( *ev )
        {
            ( *ev )->next = 0;
        }

        return( 0 );
    }

    return( -1 );
}

This was intended for a hierarchical system, where the program
registered interfaces with a COM-ish support library, events would be
given to an associated event dispatcher function, and the dispatcher
would both call the target widget and add the widget's response events
to the dispatcher's own internal event list. It was partly to deal with
reference loops, and partly to prevent stack overflows in the case of
very deep hierarchies.

The widget definition looked like this:

typedef struct SDL_Widget_i
{
    /* IUnknown interface goes here. */

    int (*dispatch)( struct SDL_Widget_i*, SDL_EventSet*, SDL_EventSet** );

} SDL_Widget_i;

For the most part, very standard stuff. Remember that rendering was
going to be done by passing around messages: ->dispatch would contain
the rendering code, either receiving target info or sending rendering
commands through the third argument.
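The queue-instead-of-recurse idea behind that dispatcher can be
sketched like this (stub Event, Widget and Pending types, all invented
for the illustration; the real thing would use SDL_EventSet and
SDL_Widget_i):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the real types: a trivial event, and widgets with a
   single child each for brevity. */
typedef struct { int type; } Event;

typedef struct Widget Widget;
struct Widget {
    Widget *child;
    int seen;  /* events this widget has handled */
};

typedef struct { Widget *w; Event ev; } Pending;

/* Iterative dispatcher: events destined for children are queued rather
   than delivered by recursing down the tree, so a very deep hierarchy
   cannot overflow the stack. */
static void dispatch_all(Widget *root, Event ev)
{
    Pending queue[64];
    size_t head = 0, tail = 0;

    queue[tail++] = (Pending){ root, ev };
    while (head < tail) {
        Pending p = queue[head++];
        p.w->seen += 1;  /* "handle" the event */
        if (p.w->child && tail < 64)
            queue[tail++] = (Pending){ p.w->child, p.ev };
    }
}
```

In the real design the queue would be the dispatcher's internal
SDL_EventSet, and "handle" would be a call through ->dispatch.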

  6. Unify all input into one event system, similar to Android's
    MotionEvent or WinRT's CorePointer. This means developers cannot
    forget to implement touch support for their application, and on
    systems such as WinRT, where everything is a "Pointer", it can be
    supported easily.

I’ve suggested this before, but I’m not sure how this would prevent
devs from forgetting touch support.

As for this: ( https://bugzilla.libsdl.org/show_bug.cgi?id=3041 ), my
half-semester of web-dev training a few years ago tells me that option
1 (the mm / scaling option) is the correct one.

PARTICULARLY since it allows desktop users to test the code by
resizing the window. I actually did some of this with HTML back around
2003, to make sure that some JavaScript image-gallery code was
correctly filling the available space without requiring horizontal
scrolling. It's really quite useful.

Windows certainly is willing to provide the needed info (you need to
dig a little, but it’s somewhere in there), but unfortunately neither
TVs, nor some monitors, provide the required info. A configuration
system should cover the “problem spots” perfectly well, though, and be
reasonably easy to inline into SDL (you basically just need some sort
of slider, so pretty simple). Probably a mapping between “UI units”
(let the dev decide what this means for their own purposes) and pixels
is a good representation.
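Such a "UI units" to pixels mapping could be as small as this (a sketch
only; the names are made up, and the scale factor would come from the
configuration slider suggested above, seeded from display info where
the platform provides it):

```c
#include <assert.h>

/* One configurable scale factor: how many pixels one "UI unit" spans.
   The developer decides what a unit means for their own purposes
   (e.g. roughly 1 mm); the user corrects the factor with a slider when
   the display reports no, or wrong, physical dimensions. */
typedef struct {
    float pixels_per_unit;
} UiScale;

static int units_to_pixels(const UiScale *s, float units)
{
    return (int)(units * s->pixels_per_unit + 0.5f); /* round to nearest */
}

static float pixels_to_units(const UiScale *s, int pixels)
{
    return (float)pixels / s->pixels_per_unit;
}
```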

More diverse information (sensible mouse hot-spot size, sensible touch
hot-spot size, and sensible text size are not necessarily related) is
useful, but best left to a separate library (back around 2000, I was
shocked to find that Windows didn't provide all of those, and more).

> Date: Mon, 20 Jul 2015 20:31:32 +0900
> From: x414e54
> To: SDL Development List
> Subject: [SDL] SDL on Unified Platforms

The differentiation between view and canvas is essentially to allow
a single image source to be directed to multiple destinations (I
originally had a particular use case that I was thinking about).

Now I remember the use case: providing both a zoomed-out view and a
zoomed-in view when the window was too small to display everything in
the zoomed-in version. Examples: large multi-document workspaces and
RTS gameplay fields. Not indispensable, but it keeps all of the code
that figures out how to display an individual "display source" together
with the display source in question.