Hello everyone.
I apologise in advance if this is a bit of a rant…
I am at a loss here. I don't know how many hours I've spent googling and
reading source code trying to sort this one out. There seems to be
conflicting information about the state of SDLNet, and most people
either don't care or haven't run into this problem, despite its
severity.
I am using the current version (1.2.7) of SDLNet, mainly on Linux. The
problem is that SDLNet_TCP_Send can block if the kernel's per-socket
send buffer fills up. This is plain stupid, and it makes SDLNet
completely useless. I would be fine with it if there were a way to
check whether a send would block. But there isn't. Not that I can find.
To reproduce the problem: run a simple server which continuously sends
data to its clients. Run a simple client under gdb and connect to the
server. Pause the client with Ctrl-C. Within a few seconds the server
freezes inside SDLNet_TCP_Send. The connection is still live, but no
data is being read at the other end, so the send buffer fills up. I
doubt TCP timeouts help here either, since they are far longer than the
time it takes to fill the buffer.
I cannot find any way around this problem. Even if I were to use
threads, that would just leave a thread blocked for ages, which isn't
a good idea either. Besides, plenty of people say threading is an
unnecessary complication for game networking, and I completely agree:
mine works completely fine without it, except for this one stupidity.
Forgive me if I'm being rude here, but I cannot see how anyone could
possibly think that blocking on a send call, with no way of checking
beforehand, is a good idea. Maybe this is just really hard to do because
of the socket implementations on various platforms. Maybe it isn't. I
did find a patch (see below) which seemed to manage fine, so it can't be
that hard.
There's a patch from April 2004, against version 1.2.6, which adds
non-blocking sends, but it doesn't seem to have made it into 1.2.7:
http://www.devolution.com/pipermail/sdl/2004-April/061335.html
The worst part is that the latest version of SDLNet sets the socket to
non-blocking mode for the accept call, and then switches it back to
blocking mode after accepting. It specifically checks whether O_NONBLOCK
is defined and, if it is, uses fcntl() to TURN IT OFF!!! WHAT THE HELL, WHY?
- Why does SDLNet have this behaviour?
- In any case, why /the hell/ is there not a function to check
  beforehand whether a send will block?
- People using SDLNet: how do you get around this?
Don't get me wrong, SDLNet is a great library (good job!) apart from
this one problem, which makes it useless. Once it is fixed I shall take
great pleasure in using it.
Maybe I'm just being an idiot, in which case please could somebody
correct my stupidity.
Thanks in advance.
Simon