From another perspective, what do we GAIN from removing the functions?
As far as I can see, the only thing we gain from removing the
functions themselves is an inconsequential decrease in library size.
No, what we gain is the ability to actually use the library in a context
other than that with which it was compiled.
For example, you might compile the library in a C89 environment
but want to use it from C++11. You cannot: if C89 doesn’t have
function f, then SDL provides f itself; but C++11 does have f,
so you get a linkage error — two definitions of the same symbol.
The general rule for linkers is that only one object file
or library can provide any given function, and exactly
one must provide it if required.
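The linker rule above can be demonstrated directly. This is a sketch assuming a Unix-like `cc` toolchain; the exact error wording ("multiple definition", "duplicate symbol") varies by linker, but the link step must fail either way:

```shell
# Two translation units both define f(); the link step must reject it.
cat > a.c <<'EOF'
int f(void) { return 1; }
EOF
cat > b.c <<'EOF'
int f(void) { return 2; }
int main(void) { return f(); }
EOF
if cc a.c b.c -o demo 2>link.log; then
  result=linked
else
  result=rejected     # "multiple definition of `f'" or similar
fi
echo "$result"
```

This is exactly the situation when both SDL and the C++ runtime provide f: two object files offer the same symbol, and the link fails.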
Depending on configuration-script macros does NOT work
in headers; it only works when building the library binaries.
In general, “feature test” configuration is an extremely bad idea:
such tests are bad enough when they test the ability to compile
code, and absurd when they require test code to actually run.
They’re necessary when building libraries, due to the lack of
standardisation in the badly designed systems programmers insist on using.
But if you apply these config-script-based feature tests
in header files, you can only use those headers in the
exact context in which the config script ran — whereas you expect
a library to work provided the ABI is respected.
[Platform or compiler macros are a different issue, they’re
not nice but they’re OK in headers in principle because
they’re standardised by compiler vendors]
But this just doesn’t work: these types cannot
interoperate with types from other library code,
because we don’t know what they actually are.
However, I’m not certain if proper namespacing would fully address
your point here, so could you be a little more specific?
The problem is exemplified by this (on Windows):

  #define OTHER_uint32 int   // other library's 32-bit type
  #define Uint32 long        // SDL's 32-bit type

  // write a 32-bit value to *p
  void OTHER_LIBRARY_function (OTHER_uint32 *p);

  // provide a 32-bit value via a pointer
  Uint32 *SDL_LIBRARY_function (void);

  // TYPE ERROR: int* and long* are not compatible
  OTHER_LIBRARY_function (SDL_LIBRARY_function ());

  // BREAKS THE STRICT ALIASING RULES
  OTHER_LIBRARY_function (
    (OTHER_uint32*) SDL_LIBRARY_function ());
There is in fact no simple solution, because the client does not know
what type OTHER_uint32 or Uint32 actually is. Knowing that they’re
32-bit unsigned integers isn’t enough. [In this example you might use
copyup/copydown to solve the problem, but that doesn’t work in
general: you might, for example, invoke a callback without knowing
it refers to a variable that should have been modified
but wasn’t, because we haven’t copied back yet. Or threads
or volatiles might be involved.]
If both libraries used uint32_t from stdint.h instead of their own
private types, there’s no problem: they’re using the same type.
But stdint.h doesn’t exist on Windows, so what do we do?
There is ONLY one correct solution. If SDL insists on C89, then
the USER must provide uint32_t; by specification, SDL cannot
do so. The logic is watertight, I’m afraid. There’s no choice
in the matter. SDL must use uint32_t, and it must require the
client programmer to provide it, assuming it insists on C89.
Attempting to provide both leads to an ill-formed contract
(in the sense of programming by contract).
The only issue here is how to effect this in a convenient way.
One solution is:
#include "SDL2.h"
This header is the pure SDL library header, and it REQUIRES the
client to provide uint32_t etc.
#include "SDL.h"
This is a compatibility hack: it provides uint32_t itself.
On Linux, or whatever else has stdint.h, it reads:
#include <stdint.h>
#include "SDL2.h"
On Windows, it says:
typedef unsigned int uint32_t;
...
#include "SDL2.h"
Sam actually suggested another model:
#define _SDL_stdinc_h
#include "SDL.h"
Here the #define prevents SDL_stdinc.h being included in SDL.h
because that’s the guard used. Now I can write:
#include <stdint.h>
#define Uint32 uint32_t
#include "SDL.h"
for example. That is not as clean, but it solves the type problem
by delegating it to the user, and it’s compatible with existing
code. I don’t like this, because I WANT to break all the existing
code — all of it is wrong — and I want the documented
interfaces to use uint32_t, NOT Uint32.
There’s a related issue with #define main which is also a seriously
bad idea. SDL needs to cleanly separate
(a) A library
(b) A restricted kind of development environment
I have no problem with hacks to support easy development
of toy games, provided they’re separated from the library,
which has a lot of other more serious uses (like commercial
games, GUIs, or the text editor I’m writing at the moment).
The pure library part is particularly important for people
creating language bindings (such as myself). Of course,
if you make an app using C99 … then YOU are also creating
a language binding (the binding of C99 to C89 is reasonably
simple, one hopes).
In summary: portability hackery belongs in library implementations.
It must not be put in interfaces. Interfaces are designed to support
interoperability and this requires strict adherence to the programming
by contract model or all hell breaks loose. If hackery is needed the
client programmer is the only one that can provide it because
only the client can deal with issues arising from multiple library
interfaces. It has to be on the head of the app developer.

On 18/06/2013, at 12:51 PM, Jared Maddox wrote:
–
john skaller
@john_skaller
http://felix-lang.org