We have a hardcoded ‘5’ in the listen() call in SDLNet_TCP_Open(), so if
more than 5 clients have connected and you haven’t yet called
SDLNet_TCP_Accept() for them, they’ll get a “connection refused” error
(or on some OSes, they’ll just retry up to a certain time limit). I
don’t believe there’s an actual maximum number of accepted connections
beyond what the OS allows, so just be sure to check for new connections
once per frame and you shouldn’t have an issue.
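For what it’s worth, the per-frame check can be a simple loop, since
SDLNet_TCP_Accept() never blocks and returns NULL when nothing is
pending. A minimal sketch of that idea:

    #include "SDL_net.h"

    /* Drain all pending connections once per frame so the listen()
       backlog never fills up. */
    void accept_pending(TCPsocket server)
    {
        TCPsocket client;
        /* SDLNet_TCP_Accept() is non-blocking: NULL means nothing
           is waiting in the backlog right now. */
        while ((client = SDLNet_TCP_Accept(server)) != NULL) {
            /* hand 'client' to the rest of the application here */
        }
    }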
That’s interesting. My application suffers from the fact that many
connections may arrive at once on a host. I guess it would be easy to
hack SDL_net’s source to raise this limit to at least whatever the OS
allows.
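If you do patch it, the change is essentially a one-liner in SDL_net’s
TCP open code where the backlog constant lives (exact file and field
names depend on the SDL_net version, so treat this as a sketch):

    /* SDL_net currently does roughly:  listen(sock->channel, 5);
       Raising the backlog to the OS ceiling would look like: */
    if (listen(sock->channel, SOMAXCONN) < 0) {
        SDLNet_SetError("Couldn't listen on socket");
        /* ...existing error handling... */
    }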
I believe this is OS-dependent for UDP; for TCP, packets never get
dropped, since you have to have a reliable stream there.
On TCP, packets do get dropped, but it is the OS’s job to retransmit
them; dropped packets are the usual symptom of congestion. On a very
busy host, when lots of packets arrive out of order, the host has to
hold all that unordered data until the missing pieces arrive so it can
reassemble the stream in order. The amount of data the receiver can
hold unordered is the TCP window. But the window is sized for “the
perfect world”, and in our world windows do fill up and use a lot of
resources. Many protocols therefore implement a congestion window on
the sender side: send data at the maximum rate until packets start
being dropped (still talking about TCP here), then lower the
congestion window. This just means the client understands that the
window the server advertised may not equal the server’s real-time
window.
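For illustration, that sender-side behaviour boils down to an
additive-increase/multiplicative-decrease scheme; a very rough sketch
of the idea (not any real stack’s code, the numbers are placeholders):

    /* Rough sketch of the AIMD idea behind a congestion window:
       grow while ACKs come back, cut back when a loss signals
       congestion. Units are segments. */
    static double cwnd = 1.0;       /* congestion window */
    static double ssthresh = 64.0;  /* slow-start threshold */

    void on_ack(void)
    {
        if (cwnd < ssthresh)
            cwnd += 1.0;            /* slow start: fast growth */
        else
            cwnd += 1.0 / cwnd;     /* congestion avoidance: ~+1/RTT */
    }

    void on_loss(void)
    {
        ssthresh = cwnd / 2.0;      /* multiplicative decrease */
        cwnd = ssthresh;
    }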
However, and I’m just guessing here, there will probably come a
point where the OS tells the remote host that its internal buffers are
full and the remote host has to block and continue to resend until it
gets through…this happens transparently to the apps in blocking mode
and probably just gives a short write on the remote host for
non-blocking mode.
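SDL_net itself doesn’t expose non-blocking sends, but with plain BSD
sockets the short-write case is typically handled with a loop like
this (POSIX only, just a sketch):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Keep sending until the whole buffer is out; a short write or
       EWOULDBLOCK means the send buffers are full for now. */
    ssize_t send_all(int fd, const char *buf, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, buf + sent, len - sent, 0);
            if (n > 0) {
                sent += (size_t)n;    /* short write: keep going */
            } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                break;                /* buffers full: try again later */
            } else {
                return -1;            /* real error */
            }
        }
        return (ssize_t)sent;         /* bytes actually queued */
    }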
Exactly, but my application needs to be aware of all this; I will
certainly have to use UDP so I can catch those problems and deal with
them in my own way.
Just read data every frame and you almost certainly won’t run into this
issue.
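As a concrete sketch, the per-frame read on an already-accepted socket
can use a socket set with a zero timeout so nothing ever blocks (the
buffer size is arbitrary, and the socket is assumed to have been added
to the set with SDLNet_TCP_AddSocket() beforehand):

    #include "SDL_net.h"

    #define BUF_SIZE 1024

    /* Poll once per frame with a 0 ms timeout and read whatever has
       arrived on this client socket. */
    void read_client(SDLNet_SocketSet set, TCPsocket client)
    {
        char buf[BUF_SIZE];

        if (SDLNet_CheckSockets(set, 0) > 0 && SDLNet_SocketReady(client)) {
            int got = SDLNet_TCP_Recv(client, buf, BUF_SIZE);
            if (got <= 0) {
                /* connection closed or error */
                SDLNet_TCP_DelSocket(set, client);
                SDLNet_TCP_Close(client);
            } else {
                /* process 'got' bytes from buf here */
            }
        }
    }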
My problem is that my application’s server will have a lot to process
every frame. I expect some situations (maybe not so rare) where a
single frame will take several seconds, maybe even more. During that
time, a lot of trouble can happen, as you can imagine. Design-wise,
there is not much we can do about this except balance the load onto a
different server, but that is just “distributing” the problem, not
solving it.
UDP packets aren’t guaranteed to arrive, so I assume if the OS runs out
of internal space, it either drops the oldest queued packet or drops the
newly-arrived one until you start reading again. Again, just read all
new data every iteration and you’ll be fine.
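On the UDP side, draining everything the OS has queued each iteration
can look like this (the packet is assumed to have been allocated
beforehand with SDLNet_AllocPacket(); SDLNet_UDP_Recv() returns 0 when
the queue is empty, so the loop never blocks):

    #include "SDL_net.h"

    /* Read every queued datagram this frame so the OS's receive
       buffer never fills up and starts dropping packets. */
    void drain_udp(UDPsocket sock, UDPpacket *packet)
    {
        int got;
        /* 1 = packet received, 0 = queue empty, -1 = error */
        while ((got = SDLNet_UDP_Recv(sock, packet)) == 1) {
            /* packet->data, packet->len, packet->address are valid */
        }
        if (got == -1) {
            /* handle/report the error */
        }
    }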
I guess this is handled in an OS-dependent way. To my application, it
doesn’t matter much; the facts are: 1) packets will get dropped, no
matter which, 2) those dropped packets will be resent, and 3) packets
will be resent until successfully received.
Basically, my application will be compiled for different OSes, and I
will not be able to avoid the OS-dependent characteristics. The best I
can do is make the application aware of the limits of its system.
Thanks,
Simon