SDLNet_TCP_Send blocking

Hello everyone.
I apologise in advance if this is a bit of a rant…

I am at a loss here. I don’t know how many hours I’ve spent googling and
reading source code to try and sort this one. There seems to be
conflicting information on the state of SDLNet. It also seems that most
people either don’t care or haven’t encountered this problem, despite
its severity.

I am using the current version (1.2.7) of SDLNet, mainly on Linux. The
problem is that SDLNet_TCP_Send can block, if the buffer gets full (I
assume it’s a per-socket buffer). This is plain stupid, and makes
SDLNet completely useless. I would be fine with this, if there was a
way of checking if it would block or not. But there isn’t. Not that I
can find.

To reproduce the problem: run a simple server which continuously sends data
to clients. Run a simple client in gdb. Connect to server. Get gdb to
pause client using ctrl-c. In a few seconds the server will freeze, as
SDLNet_TCP_Send blocks. The connection is still live, but no data is
received so the buffer gets full. That said, I don’t think it will
handle timeouts properly either, since I’m sure they are longer than it
would take to fill the buffer.

I cannot find any way around this problem. Even if I was to use threads,
this would result in a thread blocking for ages, which isn’t really a
good idea. Plus, a significant number of people say that threading is an
unnecessary complication to game networking. And I completely agree,
since mine is completely fine without, except for this one stupidity.

Forgive me if I’m being rude here, but I cannot see how anyone could
possibly think that blocking on a send call with no way of checking is
possibly a good idea. Maybe this is just really hard to do because of
the sockets implementation on various platforms. Maybe it isn’t. I did
find a patch (see below) which seemed to manage fine, so it can’t be
that hard.

There’s a patch here in April 2004 for version 1.2.6 to add nonblocking,
but it doesn’t seem to be in version 1.2.7.
http://www.devolution.com/pipermail/sdl/2004-April/061335.html

The worst part is that the latest version of SDLNet sets sockets to
non-blocking for the accept call, and then switches back to blocking
mode after accepting. It also specifically checks if O_NONBLOCK is
defined, and if it is, uses fcntl() to TURN IT OFF!!! WHAT THE HELL- WHY?

  1. Why does SDLNet have this behaviour?
  2. In any case, why…
    /THE/
    /HELL/
    …is there not a function to check beforehand if it will block or not?
  3. People using SDLNet- how do you get around this?

Don’t get me wrong, SDLNet is a great library- good job- except for this
one problem which makes it useless. Once this is fixed I shall have
great pleasure in using it.

Maybe I’m being an idiot. In which case please could somebody correct my
stupidity.

Thanks in advance.
Simon

Maybe I’m being an idiot. In which case please could somebody correct my
stupidity.

A little too excited maybe.

Forgive me if I’m being rude here, but I cannot see how anyone could
possibly think that blocking on a send call with no way of checking is
possibly a good idea. Maybe this is just really hard to do because of the
sockets implementation on various platforms. Maybe it isn’t. I did find a
patch (see below) which seemed to manage fine, so it can’t be that hard.

Possibly there is some platform which doesn’t support nonblocking
send()/write() on TCP sockets, and SDLNet decided it wanted
portability more than anything else.

There’s a patch here in April 2004 for version 1.2.6 to add nonblocking, but
it doesn’t seem to be in version 1.2.7.
http://www.devolution.com/pipermail/sdl/2004-April/061335.html

Then use that! You can use your own SDLNet libs, you don’t need to
use the official ones. I would do that in your situation, but I
appreciate that you took the time to Google the problem, and then
bring it to our attention.

Who is in charge of SDLNet maintenance?

  3. People using SDLNet- how do you get around this?

Probably sockets.


http://codebad.com/

err… threads… probably threads

On Thu, Mar 26, 2009 at 3:21 PM, Donny Viszneki wrote:

On Thu, Mar 26, 2009 at 2:16 PM, Simon Williams wrote:

  3. People using SDLNet- how do you get around this?

Probably sockets.


http://codebad.com/

  3. People using SDLNet- how do you get around this?

We assume SDL_Net’s internal structures haven’t changed, and go in behind
its back and make the sockets non-blocking using platform-specific calls :C

Gregory

On Thu, 26 Mar 2009, Simon Williams wrote:

I cannot find any way around this problem. Even if I was to use threads,
this would result in a thread blocking for ages, which isn’t really a good
idea.

Why not?

Plus, a significant number of people say that threading is an
unnecessary complication to game networking. And I completely agree, since
mine is completely fine without, except for this one stupidity.

I would say sending large amounts of data is a definite exception to
the “threads are unnecessary complication to game networking” rule.

Although in today’s multicore world, threading is becoming more and
more beneficial, so I would say that entire rule should be questioned.


http://codebad.com/

This is an old old argument. I raised it many years ago. I ran into it
while writing NET2 (you can find it at gameprogrammer.com). The answer is
that some platforms do not support non-blocking writes to a socket. SDL has
always gone with the lowest common denominator.

NET2 uses one (1) thread to create an event based layer on top of SDL and
SDLNet. At the time, I really hated the idea of using a pair of threads per
socket to handle I/O, so I didn’t. But Moore’s Law has cranked over several
times since then and multicore processors are the norm; there is no longer
a good reason not to use many threads in an I/O library. There is nothing
wrong with having a thread block for ages. Really, nothing at all.

Your comment about threading being an unneeded complication in game
programming is rather like discussing the pros and cons of different kinds
of buggy whips while Mr. Ford is building a manufacturing plant down the
road from you. Multicore is here and threading is necessary if you are going
to use the CPU power of modern processors. Really, just use a pair of
threads per socket. One to read and one to write. Be prepared to have to
buffer output in your own code. Look into how you control the stack space
allocation for threads so you don’t waste RAM on simple threads.

Just go with the flow.

Bob Pendleton


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org



  1. Why does SDLNet have this behaviour?

I suspect that only trivial (or little tested) programs use SDL_net
for TCP. Or SDL_net at all? UDP sending is done without non-blocking
mode set, so that one is liable to block too…

  2. In any case, why…
    /THE/
    /HELL/
    …is there not a function to check beforehand if it will block or not?

That one isn’t possible. You can use select() to check if there’s any
space at all in the socket buffers first, but that doesn’t tell you
how much space (so the only thing guaranteed not to block is writing a
single byte, basically).

  3. People using SDLNet- how do you get around this?

I use WvStreams instead. ;-)


http://pphaneuf.livejournal.com/

I cannot find any way around this problem. Even if I was to use threads,
this would result in a thread blocking for ages, which isn’t really a good
idea.

Why not?

Plus, a significant number of people say that threading is an
unnecessary complication to game networking. And I completely agree, since
mine is completely fine without, except for this one stupidity.

I would say sending large amounts of data is a definite exception to
the “threads are unnecessary complication to game networking” rule.

Although in today’s multicore world, threading is becoming more and
more beneficial, so I would say that entire rule should be questioned.


http://codebad.com/



Donny, we have got to stop tag teaming the poor folks. It just isn’t fair :-)

Bob Pendleton

I would say sending large amounts of data is a definite exception to
the “threads are unnecessary complication to game networking” rule.

Although in today’s multicore world, threading is becoming more and
more beneficial, so I would say that entire rule should be questioned.

We had also tried to follow this rule in the beginning of our
development (of the game OpenLieroX). But after a while, we had some
single threads for some very special tasks (like calculating a path
for the bot, which just takes too long). It was OK to have threads for
some easy tasks.

(We have later just ignored this rule and made our game heavily
multithreaded. Right now, we have our own threadpool with around 40
threads by default and we also use most of them.)

Anyway, to the problem itself: We are using HawkNL; most things are
non-blocking there (not all, as I have figured out lately, e.g.
nlCloseSocket is not). For all the things which are not non-blocking,
we have workarounds.

On 26.03.2009 at 20:33, Donny Viszneki wrote:

On Thu, Mar 26, 2009 at 2:16 PM, Simon Williams wrote:

  3. People using SDLNet- how do you get around this?

Probably sockets.

err… threads… probably threads

Asynchronous IO from Boost.org. If you do not know about Boost.org, go there
and be happy.

Bob Pendleton

P.S.

No Donny, that was not aimed at you. It was aimed at everyone else.


This is an old old argument. I raised it many years ago. I ran into
it while writing NET2 (you can find it at gameprogrammer.com). The
answer is that some platforms do not support non-blocking writes to
a socket. SDL has always gone with the lowest common denominator.

NET2 uses one (1) thread to create an event based layer on top of
SDL and SDLNet. At the time, I really hated the idea of using a pair
of threads per socket to handle I/O, so I didn’t. But Moore’s Law
has cranked over several times since then and multicore processors
are the norm; there is no longer a good reason not to use many
threads in an I/O library. There is nothing wrong with having a
thread block for ages. Really, nothing at all.

That is a problem, especially when you just have one single thread for
the networking stuff. That means that if one task blocks it on your
server, you will not receive any data anymore from any other client in
the meantime. (Or you cannot do whatever it is that the thread is
doing.)

For example, until I stumbled upon the problem that closesocket() is
blocking, I always wondered why the server registering (with a
masterserver) always blocked our main thread for about 10 seconds,
even though we used threads in our HTTP class. That is, when we started
to send out the request, we cleaned up the old HTTP object first. While
cleaning up, it waited for the HTTP thread to terminate. Thus, we
blocked the main thread.

I think it is generally a bad idea to work around blocking problems
with threads. There are still cases where you get hit by the blocking,
and it’s hard to think of them all. Or you must create new threads all
the time if another one is blocking, but that is also not
nice.

On 26.03.2009 at 20:38, Bob Pendleton wrote:

Your comment about threading being an unneeded complication in game
programming is rather like discussing the pros and cons of different
kinds of buggy whips while Mr. Ford is building a manufacturing
plant down the road from you. Multicore is here and threading is
necessary if you are going to use the CPU power of modern
processors. Really, just use a pair of threads per socket. One to
read and one to write. Be prepared to have to buffer output in your
own code. Look into how you control the stack space allocation for
threads so you don’t waste RAM on simple threads.

Just go with the flow.

Bob Pendleton


This is an old old argument. I raised it many years ago. I ran into it while writing NET2 (you can find it at gameprogrammer.com). The answer is that some platforms do not support non-blocking writes to a socket. SDL has always gone with the lowest common denominator.

NET2 uses one (1) thread to create an event based layer on top of SDL and SDLNet. At the time, I really hated the idea of using a pair of threads per socket to handle I/O, so I didn’t. But Moore’s Law has cranked over several times since then and multicore processors are the norm; there is no longer a good reason not to use many threads in an I/O library. There is nothing wrong with having a thread block for ages. Really, nothing at all.

Every time I hear someone make the old “Moore’s Law lets you get away with writing inefficient code” argument, I can’t help but wonder what rock they’ve been living under for the past six years: http://www.gotw.ca/publications/concurrency-ddj.htm (And yes, I’m quite aware that that article’s about the benefits of multi-threading. That’s not the point.) Moore’s law is no longer with us and we can’t depend on it for help anymore. If it was, we’d be running on ~40 GHz processors with 16 GB of RAM on an average desktop these days.

Having said that, yes, it’s definitely a good idea to do your networking on
a separate thread, even in a game.

From: bob@pendleton.com (Bob Pendleton)
Subject: Re: [SDL] SDLNet_TCP_Send blocking

I cannot find any way around this problem. Even if I was to use threads,
this would result in a thread blocking for ages, which isn’t really a good
idea.

Why not?

When you want to quit the game, a send() is pending, and you join your
thread, that will finish when exactly? Possibly once the TCP
connection times out, after a few minutes. Eurgh.

I would say sending large amounts of data is a definite exception to
the “threads are unnecessary complication to game networking” rule.

I’d say that with game networking, you often want to do proper flow
control to keep latencies low. What I mean by this is that you
probably want to use the sockets in non-blocking mode anyway, so that
you know when there’s “push back” and that the network link can’t keep
up, and write less often, so that you avoid queueing a lot (if you
consider it sent, but it’s actually in a queue somewhere, it’s not
actually sent, and that’s no good for latency (lag!)). One might
actually want to shrink the kernel socket buffers, in some
situations…


http://pphaneuf.livejournal.com/

2009/3/26 Mason Wheeler:

Every time I hear someone make the old “Moore’s Law lets you get away with
writing inefficient code” argument, I can’t help but wonder what rock
they’ve been living under for the past six years. (And yes, I’m quite aware
that that article’s about the benefits of multi-threading. That’s not the
point.) Moore’s law is no longer with us and we can’t depend on it for help
anymore. If it was, we’d be running on ~40 GHz processors with 16 GB of RAM
on an average desktop these days.

Even a 40 GHz CPU won’t make packets come over the Internet faster.
Since most of networking is the fine art of “sleeping as much as
possible”, a single thread is just fine. :-)

http://pphaneuf.livejournal.com/

2009/3/26 Bob Pendleton:

Asynchronous IO from Boost.org. If you do not know about Boost.org, go there
and be happy.

Oh, that might actually be a better version of my “use WvStreams” answer,
Boost.Asio is more modern and has more people looking at it nowadays
(although every mainstream Linux distribution that I know of comes
with WvStreams as part of the base package load, so that’s a lot of
places!).

http://pphaneuf.livejournal.com/

Yeah, I’m aware of that. I just wanted to call BS on an argument indicative of a very bad mindset that nobody ought to be trotting out anymore in this day and age. (At least not when talking about traditional desktop/laptop systems. Not sure how things are going on embedded platforms.)

----- Original Message ----

From: Pierre Phaneuf
Subject: Re: [SDL] SDLNet_TCP_Send blocking

Even a 40 GHz CPU won’t make packets come over the Internet faster.
Since most of networking is the fine art of “sleeping as much as
possible”, a single thread is just fine. :-)

That is a problem, especially when you just have one single thread for the
networking stuff. That means that if one task blocks it on your server, you
will not receive any data anymore from any other client in the meantime. (Or
you cannot do whatever it is that the thread is doing.)

That’s why you have to multiplex using non-blocking I/O and select()
(or WSAAsyncSelect, on Windows, much nicer, or epoll on a more current
Linux, but select() is the baseline choice that works everywhere).

For example, until I stumbled upon the problem that closesocket() is blocking,

Only if your socket is not in non-blocking mode. Or if you have
SO_LINGER enabled. Don’t enable SO_LINGER; use shutdown() with a
proper handshake, and after a timeout, just closesocket() (“not
nice”).


http://pphaneuf.livejournal.com/

2009/3/26 Mason Wheeler:

This is an old old argument. I raised it many years ago. I ran into it
while writing NET2 (you can find it at gameprogrammer.com). The answer is
that some platforms do not support non-blocking writes to a socket. SDL has
always gone with the lowest common denominator.

NET2 uses one (1) thread to create an event based layer on top of SDL and
SDLNet. At the time, I really hated the idea of using a pair of threads per
socket to handle I/O, so I didn’t. But Moore’s Law has cranked over several
times since then and multicore processors are the norm; there is no longer
a good reason not to use many threads in an I/O library. There is nothing
wrong with having a thread block for ages. Really, nothing at all.

Every time I hear someone make the old “Moore’s Law lets you get away with
writing inefficient code” argument, I can’t help but wonder what rock
they’ve been living under for the past six years:
http://www.gotw.ca/publications/concurrency-ddj.htm
(And yes, I’m quite aware that that article’s about the benefits of
multi-threading. That’s not the point.) Moore’s law is no longer with us
and we can’t depend on it for help anymore. If it was, we’d be running on
~40 GHz processors with 16 GB of RAM on an average desktop these days.

Just last week I went through the exercise of looking up the original
definition of Moore’s law and then used published data on the number of
transistors per processor chip in Intel processors covering the time
period from 1971 to 2008 and I can tell you that Moore’s law is still very
healthy and still cranking along. (I’m writing an article on the emergence
of the giga-age of computing.) The number of transistors on a processor chip
has increased at the average rate of sqrt(2) per year.

The fact that you think that Moore’s law has something to do with cycle
times indicates to me that you do not understand Moore’s law. That doesn’t
surprise me; I had it wrong too, and I have written other papers on the
subject. Moore’s law is only about the number of transistors on a chip, not
about the speed of those transistors.

Also, the implication that I think that Moore’s law allows you to write
inefficient code is just plain insulting. Moore’s law changes computing so
that what is efficient at one turn of the crank is no longer efficient at
the next turn of the crank. Because current chip designs have pretty well
wrung all the performance they can from superscalar processors, Moore’s law
gives us more cores at each turn of the crank instead of deeper pipelines
and hyperthreading. That means that threads are now the efficient way to
get things done.

Bob Pendleton

From: Bob Pendleton
Subject: Re: [SDL] SDLNet_TCP_Send blocking

Having said that, yes, it’s definitely a good idea to do your networking on
a separate thread, even in a game.



2009/3/26 Bob Pendleton:

The answer is that some platforms do not support non-blocking writes to
a socket.

Which platform is that?!? I know of some platforms that only
support non-blocking writes, actually! But you wouldn’t want to port
SDL_net there. ;-)

There is nothing wrong with having a thread block for ages. Really, nothing
at all.

As long as you can tell it to unblock when you want to join it at
program termination time…

http://pphaneuf.livejournal.com/

I cannot find any way around this problem. Even if I was to use threads,
this would result in a thread blocking for ages, which isn’t really a good
idea.

Why not?

When you want to quit the game, a send() is pending, and you join your
thread, that will finish when exactly? Possibly once the TCP
connection times out, after a few minutes. Eurgh.

When the game is over, why would you bother to join the threads? When you
are done you are done. Just die.

Bob Pendleton