The big guru R. W. Stevens says that connected UDP sockets have significantly
less overhead (the kernel seems to perform an implicit connect for every
sendto(), which can be avoided this way). (But maybe that’s not a problem with
Winsock, and maybe current Linux has solved it by now; after all, the books
were written years ago. Does anyone know more?)
He recommends having one socket per connection. So I would think:
One primary socket on a well-known port on the server.
Client sends a packet. The server reads it, opens a new socket on a different
port, connects that socket to the client, and sends the answer from there.
The client then connects to the new socket, so both sockets are connected.
That way select() multiplexing tells you who wants to talk to you. The
maximum number of open sockets can be queried via WSAStartup() on Windows
(somewhere in the library; it is 128 on my '98 box).
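For what it’s worth, “connecting” a UDP socket takes only a couple of calls. A minimal sketch using POSIX sockets (the Winsock calls differ slightly); the address and port are made up for the example:

```c
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a UDP socket and "connect" it to a peer.  connect() on a
 * datagram socket exchanges no packets; it only records the peer
 * address in the kernel, so later send()/recv() calls skip the
 * per-call address handling that sendto() implies.
 * Returns the socket fd, or -1 on error. */
int udp_connect(const char *ip, unsigned short port)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer;

    if (s < 0)
        return -1;
    memset(&peer, 0, sizeof peer);
    peer.sin_family      = AF_INET;
    peer.sin_port        = htons(port);
    peer.sin_addr.s_addr = inet_addr(ip);
    if (connect(s, (struct sockaddr *)&peer, sizeof peer) < 0) {
        close(s);
        return -1;
    }
    return s;   /* now plain send(s, buf, len, 0) reaches the peer */
}
```

After this, recv() on the socket only delivers datagrams from that one peer.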
Ugh. I don’t know who R. W. Stevens is, but logic tells me that he’s just
plain wrong.
First of all, if there is any overhead with unconnected UDP sockets, it seems
to be a problem in one particular kernel, and as you say yourself it should be
eliminated by now. The problem may or may not exist (it likely doesn’t, I
should think) in other kernels. Since SDL is all about writing portable code,
you shouldn’t optimize for a single crappy system, especially when this
“optimization” results in a general decrease in efficiency.
Firstly, using multiple sockets implies an OS-defined limit on the number of
clients you can have.
Secondly, select() and poll() reportedly don’t scale well with a huge number
of sockets to wait on. Using a single socket and then looking up the client
by remote address using binary search is much more efficient.
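That lookup might be sketched like this, assuming a client table kept sorted by address and port (the client_t fields and names are invented for illustration):

```c
#include <stdlib.h>

typedef struct {
    unsigned long  addr;  /* IPv4 address, network byte order */
    unsigned short port;  /* UDP port, network byte order */
    /* ... per-client game state ... */
} client_t;

/* Order clients by address first, then port, so bsearch() works. */
static int client_cmp(const void *a, const void *b)
{
    const client_t *x = a, *y = b;
    if (x->addr != y->addr)
        return x->addr < y->addr ? -1 : 1;
    if (x->port != y->port)
        return x->port < y->port ? -1 : 1;
    return 0;
}

/* Find the client that sent a datagram, or NULL if unknown. */
client_t *find_client(client_t *table, size_t n,
                      unsigned long addr, unsigned short port)
{
    client_t key;
    key.addr = addr;
    key.port = port;
    return bsearch(&key, table, n, sizeof(client_t), client_cmp);
}
```

New clients are inserted in sorted position, so every recvfrom() result resolves to game state in O(log n).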
Thirdly, the logon process you laid out is just ugly and has potential proxy
/ NAT problems.
Depending on the implementation, there may be an advantage to multiple
sockets, though. The OS might maintain separate packet queues for each
socket, which could add up to a larger overall packet queue with multiple
sockets (so you drop fewer packets under load).
Unfortunately, I have never run an MMO server where that could be an issue.
Maybe someone has more information on that topic?
On top of that, some kind of protocol is needed to acknowledge transmissions
and resend lost packets.
I am actually planning to implement this, but not with SDL_net, just bare
sockets (it won’t work on the Mac then, but it is only 100-200 lines of C).
At the moment I am thinking about the protocol logic, which would also be
needed on top of SDL_net with UDP. Does anyone have a ready-to-use recipe
for this?
This is what Quake and Half-Life use, and what I’ve implemented into my own
game (more or less). The protocol supports both reliable and unreliable data,
however it does not (for simplicity) allow data to arrive out of order, as
this would make the processing of commands unnecessarily complex anyway, and
the focus of the protocol is on unreliable data.
Every packet is prefixed with a short header, which includes a sequence (SEQ)
and an acknowledge (ACK) number. The highest two bits of these numbers are
reserved for other purposes.
Whenever a packet is sent, the SEQ is increased by one. The sent ACK is
always the last received sequence number. Unreliable data is just sent like
this.
Whenever there’s reliable data to send, a bit in the SEQ is set, indicating
reliable data. If the packet only contains part of a block of reliable data,
a second bit is also set.
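In C, the header might look something like this. The 16-bit field width and the exact bit positions are my assumptions; the text above only fixes that the top two bits of each number are reserved:

```c
#include <stdint.h>

#define SEQ_MASK      0x3FFFu  /* low 14 bits carry the actual number */
#define FLAG_RELIABLE 0x8000u  /* highest bit: packet carries reliable data */
#define FLAG_PARTIAL  0x4000u  /* second bit: only part of a reliable block */

typedef struct {
    uint16_t seq;  /* sequence number plus the two flag bits */
    uint16_t ack;  /* last received SEQ plus the reliable-sequence bit */
} pkt_header_t;

/* Build a SEQ field from a counter and the two flags. */
uint16_t make_seq(uint16_t n, int reliable, int partial)
{
    uint16_t s = n & SEQ_MASK;
    if (reliable) s |= FLAG_RELIABLE;
    if (partial)  s |= FLAG_PARTIAL;
    return s;
}
```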
After a packet with reliable data has been sent, the sender will not send
any more packets with reliable data until the previous one is acknowledged.
On the receiving end, packets with an illogical SEQ/ACK (this includes old
SEQs) are silently ignored. This prevents most IP spoofing attacks.
If the packet doesn’t contain reliable data, its contents are simply
forwarded to the game logic.
If, on the other hand, it does contain reliable data, a number of things are
done:
First of all, the protocol keeps track of a “reliable sequence”. The reliable
sequence is a single bit which is flipped whenever a reliable packet is
received. This reliable sequence is sent together with the ACK of a packet.
If a party receives a flipped reliable sequence bit, it knows that the
previous reliable packet has been received.
Secondly, if there is more reliable data to wait for (second bit in ACK is
set), the data is stored for later. Otherwise, all the reliable data received
up to now is forwarded to the game logic.
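Put together, the receive side for reliable data could be sketched like this (the buffer size and all names are invented; a real implementation needs more bookkeeping):

```c
#include <stdint.h>
#include <string.h>

#define BUF_MAX 4096

typedef struct {
    int     reliable_bit;  /* flipped on every reliable packet received */
    uint8_t buf[BUF_MAX];  /* reliable block accumulated so far */
    size_t  buf_len;
} rx_state_t;

/* Called when a packet with the reliable flag arrives.  'partial'
 * is the second flag bit.  Returns 1 when a complete reliable block
 * is ready to forward to the game logic (the caller then resets
 * buf_len), 0 while more fragments are expected. */
int on_reliable_data(rx_state_t *rx, const uint8_t *data, size_t len,
                     int partial)
{
    rx->reliable_bit ^= 1;  /* acknowledged via the flipped bit in our ACK */
    if (rx->buf_len + len <= BUF_MAX) {
        memcpy(rx->buf + rx->buf_len, data, len);
        rx->buf_len += len;
    }
    return partial ? 0 : 1;
}
```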
Unreliable packets are never resent (obviously).
Reliable packets are resent when we receive an ACK greater than the original
SEQ number of the reliable packet without a change of the reliable sequence
bit.
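That resend rule can be written as a small predicate; the field names are invented and SEQ wrap-around is ignored for clarity:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t reliable_seq_out;  /* SEQ the pending reliable packet went out with */
    int      last_reliable_bit; /* reliable-sequence bit last seen from the peer */
    bool     reliable_pending;  /* still waiting for an acknowledgement? */
} channel_t;

/* Checked for each incoming packet: resend the pending reliable data
 * when the ACK is past our reliable SEQ but the peer's
 * reliable-sequence bit has not flipped. */
bool needs_resend(const channel_t *ch, uint16_t ack, int their_reliable_bit)
{
    if (!ch->reliable_pending)
        return false;
    return ack > ch->reliable_seq_out
        && their_reliable_bit == ch->last_reliable_bit;
}
```

A flipped bit instead clears reliable_pending, which frees the channel for the next reliable block.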
Additionally, the protocol forces an empty packet every 500ms if no other
data is being sent.
Whenever a reliable packet is sent, the code looks for unreliable data that
might be put into the packet as well. The receiving game logic doesn’t have
to differentiate between reliable and unreliable data.
This protocol works pretty well for a shooter situation: both sides keep
sending data; the server sends frame updates at a constant rate, and the
client regularly sends movement commands. All these events are unreliable
(note that a client’s movement command also contains the two previous
movement commands as backup).
The occasional reliable data will eventually reach the other side without
impacting overall performance.
However, the maximum reliable data bandwidth is rather low, because no new
reliable data is sent until the previous block is acknowledged. So if you
really need lots of reliable data, you need to come up with something else.
cu,
Nicolai

On Sunday, 17 February 2002 at 10:36, Carsten Burstedde wrote: