[Framework] Event-based Networking

Glocke wrote:

Nathaniel J Fries wrote:

Anyway, the cleverness I talked of involves using non-blocking I/O on sockets (via SDLNet_SocketReady() IIRC) and SDLNet_CheckSockets() to drive a single network thread. You could optimize this some with a thread-per-processor and doing some load balancing, but ultimately it would still fall well short of what is provided by I/O Completion Ports, epoll, kqueue, or whatever other operating systems provide, and which boost.asio cleverly wraps.

Actually, I am using non-blocking TCP sockets via SocketReady and CheckSockets. But I tried to massively reduce thread usage in the first place to enable easier debugging. But you’re right: using some more threads would enable improved performance on multi-processor computers. But what’s a suitable number of threads (server-side): one for accepting clients, and one per client? Or is that too many? From the point of view of a multi-processor computer it might be simple: one thread per client. But does this have a negative effect on single-processor devices? I don’t know :?
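For reference, a minimal sketch of the single-thread polling setup being discussed, using SDL_net’s socket-set API; handleClient() and the fixed client array are illustrative placeholders, not actual code from the framework:

```cpp
#include "SDL_net.h"

void handleClient(TCPsocket client);  // hypothetical: read pending data

void networkLoop(TCPsocket server, TCPsocket clients[], int numClients) {
    SDLNet_SocketSet set = SDLNet_AllocSocketSet(numClients + 1);
    SDLNet_TCP_AddSocket(set, server);
    for (int i = 0; i < numClients; ++i)
        SDLNet_TCP_AddSocket(set, clients[i]);

    for (;;) {
        // Block for up to 100 ms until at least one socket is readable.
        if (SDLNet_CheckSockets(set, 100) <= 0)
            continue;
        if (SDLNet_SocketReady(server)) {
            TCPsocket incoming = SDLNet_TCP_Accept(server);
            (void)incoming;  // a real server would add it to the set and list
        }
        for (int i = 0; i < numClients; ++i)
            if (SDLNet_SocketReady(clients[i]))
                handleClient(clients[i]);
    }
}
```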

One per processor should be the maximum used for I/O. Any more than that and you’re getting zero gain (the OS most likely cannot perform more than one I/O operation at a time, non-blocking or otherwise), and actually losing some due to context-switching overhead and (possibly) extra synchronization.
Worker thread(s) could, and should, be used for any other server activity (such as actually implementing the gameplay).

The thread-per-client model is a bad model for any highly interactive program. Like I said, synchronization bottlenecks. The key is to do as little synchronizing as possible, which means using a specific thread for a specific task and minimizing thread interaction.
------------------------
Nate Fries

Nathaniel J Fries wrote:

The thread-per-client model is a bad model for any highly-interactive program. Like I said, synchronization bottlenecks. The key is to do as little synchronizing as possible; which means using a specific thread for a specific task and minimizing thread interaction.

My current approach is:

  • n clients are handled by 2*n worker threads (one for sending, one for receiving) on the server
  • each worker thread accesses either the worker’s outgoing queue (for popping events to send) or the server’s incoming queue (to enqueue new events to a main queue)
  • the model will later pop events from the server’s incoming queue (that is, events that arrived from different workers) and handle them
  • the model will also push events (so-to-say “answers”) to the worker’s outgoing queue (for sending them).

The number of threads is definitely larger than the number of processors. Do you have an idea where the bottlenecks might be? I’m not sure.

Example:

  • A worker’s receiving loop gets an event and tries to push it to the server’s incoming queue.
  • Another worker’s receiving loop does the same and has to wait until the queue is unlocked.
  • More receiving loops will also wait, but the sending loops are still able to send outgoing data.
  • The model also has to wait for the queue to be unlocked before it can pop an event from it.
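A minimal sketch of the shared incoming queue in that example, assuming a single std::mutex guards it (Event and the method names are illustrative); every receive thread and the model serialize on the same lock:

```cpp
#include <mutex>
#include <queue>

struct Event { int id; };  // illustrative payload

class IncomingQueue {
    std::mutex m;
    std::queue<Event> q;
public:
    void push(const Event &e) {
        std::lock_guard<std::mutex> lock(m);  // every receive thread contends here
        q.push(e);
    }
    bool pop(Event &out) {
        std::lock_guard<std::mutex> lock(m);  // the model contends here too
        if (q.empty())
            return false;
        out = q.front();
        q.pop();
        return true;
    }
};
```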

I’m not sure whether this is already a bottleneck or just close-to-a-bottleneck… what’s your opinion?

Kind regards
Glocke

Glocke wrote:

[…] The number of threads is definitely larger than the number of processors. Do you have an idea where the bottlenecks might be? […]

Again, it depends on what the server actually does.

Example:
Most HTTP servers are designed with a thread-per-client (or, more accurately, a process-per-client; but that’s essentially the same thing) and easily handle thousands of simultaneous connections. Because an HTTP server simply accepts a connection, reads a request, grabs a file, and sends the content (possibly after processing – see PHP, JSP, ASP, etc.), the only interaction between threads/processes occurs when they read the same file, which doesn’t actually require synchronization (reading data only requires synchronization when a write to the same data is happening, which is quite rare for an HTTP server). There is no synchronization bottleneck in an HTTP server.
(Fun fact: Many developers of HTTP and similar servers have picked on me for endorsing the event-driven alternatives, claiming it’s unnecessarily complex with no benefit – which is probably true for HTTP servers.)

You should never need more than one thread per client.

  1. Your use of a separate reader and writer thread for each client is a waste of resources. Each thread necessarily has its own copy of all CPU registers, stored in memory when the thread isn’t running, and also its own stack, which is typically at least 1 KB in size. Most modern operating systems also provide a feature for thread-local copies of data (note: a modern C or C++ runtime will use this feature internally, e.g. for errno; Glibc and Visual C++'s multithreaded runtime both do, at least), which if used means additional memory is needed by ALL threads regardless of whether or not the thread-local data will ever be used on that thread. The result is several wasted MB of memory (if you have, say, 100 threads)!
  2. read and write (recv and send) always interfere with each other. Either they are inherently mutually exclusive (meaning the read thread will block the write thread and the other way around), or you run the risk of a race condition, which means undefined (and, because of the nature of concurrent execution, unpredictable) behavior. I’m honestly not sure if any systems do synchronize sockets internally.
  3. If you’re already using non-blocking I/O and polling to see which socket is ready for reading, you might as well use a thread per processor (since SDL, and most other portable libraries, don’t offer a way to detect the number of processors, you can typically assume 4, 8, or 16, depending on whether you’d rather be conservative, scalable, or balanced between the two), because you’re already basically using the same mechanism in each individual thread anyway.
    So not only is it a waste of resources, it is also a pointless (and possibly destructive) endeavor.

If you do want to stick with the thread-per-client model (I hardly blame you – it is the model most extensively covered by network programming knowledge bases and still used just about everywhere that isn’t an MMORPG), and you have this queue for a worker thread, then you should implement the queue as a wait-free queue. Wait-free queues are highly scalable and would reduce the synchronization bottleneck concern. A general-purpose wait-free queue is described by Kogan and Petrank (see: http://www.cs.technion.ac.il/~sakogan/papers/ppopp11.pdf ), and a specialized queue for just this specific case (using boost’s atomic library, designed after the C++11 atomics API) is described in the boost documentation (see: http://www.boost.org/doc/libs/1_53_0/doc/html/atomic/usage_examples.html#boost_atomic.usage_examples.mp_queue ).
------------------------
Nate Fries
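A condensed sketch of the queue from the boost.atomic usage example linked above, rewritten with C++11 std::atomic: multiple producers push with a CAS loop (lock-free), and the single consumer takes the whole chain with one exchange (wait-free). Event’s intrusive next pointer is part of this illustration, not of any existing library:

```cpp
#include <atomic>

struct Event {
    Event *next;
    int id;
};

class MpscQueue {
    std::atomic<Event*> head{nullptr};
public:
    // Many producer (receive) threads may push concurrently without a mutex.
    void push(Event *e) {
        Event *old = head.load(std::memory_order_relaxed);
        do {
            e->next = old;
        } while (!head.compare_exchange_weak(old, e,
                     std::memory_order_release,
                     std::memory_order_relaxed));
    }
    // The single consumer grabs the whole chain at once. Note the chain is
    // in LIFO order; reverse it if FIFO delivery matters.
    Event* popAll() {
        return head.exchange(nullptr, std::memory_order_acquire);
    }
};
```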

This could be useful, actually:
http://wiki.libsdl.org/moin.fcg/SDL_GetCPUCount

Jonny D

Jonny D wrote:

This could be useful, actually: http://wiki.libsdl.org/moin.fcg/SDL_GetCPUCount

Jonny D

I stand corrected!

Still, if for whatever reason SDL cannot determine the number of CPUs, SDL_GetCPUCount will return a very conservative guess of 1. Which means, if SDL_GetCPUCount returns 1, you should again refer to my suggested assumptions on the server (since any modern server will have at least 4 CPUs), while sticking with SDL’s guess on the client (where one- and two-CPU machines are still the most common).
------------------------
Nate Fries
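A small sketch of that fallback rule, assuming SDL 2’s SDL_GetCPUCount(); the isServer flag and the value 4 mirror the suggested assumption above and are illustrative:

```cpp
#include "SDL.h"

int guessCpuCount(bool isServer) {
    int n = SDL_GetCPUCount();   // returns at least 1
    if (n <= 1 && isServer)
        return 4;                // "any modern server will have at least 4 CPUs"
    return n;
}
```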

Date: Mon, 18 Mar 2013 19:27:59 -0700
From: “Nathaniel J Fries”
Subject: Re: [SDL] [Framework] Event-based Networking

Mutexes are the source of the bottleneck (using binary semaphores, critical
sections, condition variables, or spinlocks will have the same effective
result, and the same effective problem). Mutex is short for “mutual
exclusion”, so when I said “mutually exclusive code”, I literally meant
"code that uses mutexes".
If you didn’t use mutexes, you wouldn’t have a bottleneck, you’d have
something way worse – code that compiles just fine but doesn’t work at
all!

Other than the quick locking implied in even “lockless” code, I’m
challenged to think of a situation where I would want MMO server-side
networking code to have any mutual exclusion that wasn’t innate in
malloc() & co (because, really, what should those threads be doing
other than reading, writing, atomic swaps to and from queues, and
reference counting?). Since I actually have some vaguely defined
future intentions down this road, could you throw out one or two
simple examples?

Date: Tue, 19 Mar 2013 01:54:42 -0700
From: “Glocke”
Subject: Re: [SDL] [Framework] Event-based Networking

/EDIT:

Please reply to your post instead, so the mailing list will see your
message.

There are some questions in the context of SDL_net:

Do I have to consider endianness?

Should I use fixed-size types for my events (e.g.
http://en.cppreference.com/w/cpp/types/integer; this might be important when
32-bit and 64-bit systems take part in one communication)?

Or should I use serialization to avoid both possible issues?

Date: Wed, 20 Mar 2013 08:07:16 -0700
From: “Glocke”
Subject: [SDL] [SDL_net] Endianness

Hi, do I have to consider byte order when working with a TCPsocket, i.e.
with SDLNet_TCP_Recv and SDLNet_TCP_Send? If yes: what’s the way to handle
endianness in this case?

Kind regards
Glocke

I’d say yes to all three, but it actually has nothing to do with
SDL_net. In fact, it applies to ALL transfer of data outside of a
program (unless you’re transferring it to a fork of the current
instance, in which case just do what makes sense in your case).

  1. Proper networking code should (at least in my view) always convert
    outgoing data to network order, and incoming data to host order, and
    do the same if writing to disk. Hard drives can usually be swapped out
    to different machines, and a problem that used to pop up between PCs
    and Macs (albeit far from the only problem that came along) is that
    they often used different byte orders. By always converting outgoing
    data to network order, and doing the opposite to incoming data, you
    ensure that byte order will never interfere with data transmission.

  2. There is currently the question of whether to choose 32-bit or
    64-bit code. For a server the correct choice will presumably always be
    64-bit, but for a client the correct choice will vary widely. By using
    fixed-size types you ensure that even if you change your mind later
    (or if you choose one size for the server and another for the client)
    you won’t break anything, because the sizes were always explicitly
    specified from the start. Once again, this applies to disk I/O as
    well.

  3. The details of the layout of elements in a structure vary depending
    on the compiler and the options it was given. Further, structures can
    quite easily have unused padding littered around. Thus, by using
    actual serialization code you both reduce the chances of accidentally
    introducing errors and incompatibilities, AND waste less bandwidth
    (which is itself important with the rise of mobile computing). This,
    too, applies to disk I/O. There are cases in major projects such as
    Linux where raw structures are used, but those projects tend to use
    compiler-specific methods to specify the correct layout of their data
    structures.

As for the correct way to implement all of this: the serialization
code calls the endian-conversion functions, and from there you’re set.

By the way, this applies to everything from games, to word processors,
to databases. It’s a specialization-agnostic rule of thumb.
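To make that concrete, here is a minimal sketch of manual serialization with explicit endian handling, assuming SDL_net’s SDLNet_Write32/SDLNet_Read32 helpers (which store and load 32-bit values in network byte order); the Event struct and its fields are hypothetical:

```cpp
#include "SDL_net.h"

// Hypothetical example payload: two fixed-size fields, no padding on the wire.
struct Event {
    Uint32 id;     // which handler to trigger
    Uint32 value;  // a single parameter
};

// Pack field by field so struct padding and host byte order never reach the wire.
void packEvent(const Event &e, Uint8 buf[8]) {
    SDLNet_Write32(e.id, buf);         // bytes 0..3, network (big-endian) order
    SDLNet_Write32(e.value, buf + 4);  // bytes 4..7
}

// Unpack back to host byte order on the receiving side.
Event unpackEvent(Uint8 buf[8]) {
    Event e;
    e.id = SDLNet_Read32(buf);
    e.value = SDLNet_Read32(buf + 4);
    return e;
}
```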

Date: Wed, 20 Mar 2013 12:49:38 -0700
From: “Nathaniel J Fries”
Subject: Re: [SDL] [SDL_net] Endianness

Yes, SDL_net has no knowledge of the endianness of the buffers you send and
receive.

Fortunately, SDL provides some excellent, optimal functions for reversing
the order of bytes in SDL_endian.h

As a general rule, you have three options for transmission (either file or
network) endianness:

  1. Assume most machines using the transmitted data will be little-endian (a
    good general rule now that everyone is using x86)
  2. Convert to network byte order, which is big-endian.
  3. (Less practical): Have the client tell you its endianness, and convert
    to the client’s endianness before sending and to native endianness after
    receiving.

Obviously, I favor option 2 ;).
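For option 2, a small sketch using SDL_endian.h: SDL_SwapBE32() converts between host order and big-endian (network) order, compiling to a no-op on big-endian hosts and a byte swap on little-endian ones:

```cpp
#include "SDL_endian.h"

// Host-to-wire and wire-to-host are the same operation, since swapping
// twice restores the original value.
Uint32 toWire(Uint32 hostValue)   { return SDL_SwapBE32(hostValue); }
Uint32 fromWire(Uint32 wireValue) { return SDL_SwapBE32(wireValue); }
```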

Date: Tue, 19 Mar 2013 03:34:05 -0700
From: “Glocke”
Subject: Re: [SDL] [Framework] Event-based Networking

/EDIT2:

Jared Maddox wrote:

I suggest redesigning the thread api to
resemble the one provided by C++11, so that your code can be quickly
and easily used as a replacement for the C++11 library on platforms
that support SDL, but don’t have C++11 thread support implemented.

Well, I would prefer wrapping my threading around C++11 threading. At the
moment my framework requires a lot of C++11 stuff, so using C++11 threading
seems meaningful to me.

What level of C++11 stuff does it actually require? If it’s library
support, well, the threading portion can be implemented with a wrapper
class for SDL’s threading facilities, as the code I linked to
demonstrates. If C++11 language features (I’m specifically thinking of
variadic templates) are required, then I strongly suggest wrapping them
in some preprocessor ifs that test for C++11 compliance, as sketched
below. Some workplaces require older compilers to be used (though I
would at least HOPE not in the gaming industry), so relying on new
features could cause you NEAR-TERM problems (long-term, those problems
will hopefully go away).
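A minimal sketch of that preprocessor test; SdlThreadWrapper and its header are hypothetical placeholders, not an existing library:

```cpp
// Prefer the standard <thread> when the compiler claims C++11 support,
// otherwise fall back to a wrapper around SDL_CreateThread().
#if __cplusplus >= 201103L
    #include <thread>
    typedef std::thread MyThread;
#else
    #include "sdl_thread_wrapper.h"  // hypothetical SDL-based fallback
    typedef SdlThreadWrapper MyThread;
#endif
```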


Glocke wrote:

[…] n clients are handled by 2*n worker threads (one for sending, one for
receiving) on the server […]

Wow, yikes. Listen to Nathaniel. If the OS supports hooks for green
threads/fibers/etc., then you MIGHT justify allocating THOSE in the
way you just described; but networks tend to be much slower than the
computers that access them, so unless you’re targeting a specific OS
where you know that the OS will do x, y, and z optimal things if you
do things u, v, and w, you should never provide more than one
networking thread per client in ANY server, whether gaming or
otherwise.

A bit of an expansion on Nathaniel’s explanation, specialized to x86
but relevant elsewhere as well, is this:
Each process and/or thread has certain blocks of memory that it can
see, some data that ONLY it can see (such as registers), and some data
that it can’t see which the OS uses to keep track of it. Each time
that the processor switches to a different process it must change out
the register data, and change out the data that the OS uses to
describe the process and thread. When the processor changes threads it
doesn’t have to swap out as much of the OS-visible data, but it does
have to change out some (and in the case of Linux, all) of it, and it
has to change out the registers. In addition to that, the OS may need
to move large blocks of memory out to disk, and move others back
in, in order to actually run the “new” process/thread. All of this
takes up data-transfer resources, and therefore will ALMOST GUARANTEED
result in a decrease in execution speed each time it’s done. By
doing as much network I/O in one thread as possible you reduce this
speed cost, by reducing the number of process/thread swaps required by
a given set of I/O operations, and you also reduce the chances of
requiring disk swaps by simply reducing the amount of data that your
program needs.

Nathaniel J Fries wrote:

[…] if for whatever reason SDL cannot determine the number of CPUs,
SDL_GetCPUCount will return a very conservative guess of 1. […]

It is good to provide a command-line override, though! That way you
can both use a “crutch” if needed (let’s say you’re using a Haiku
port, but it isn’t complete; or you have something else that you want
on one of those cores; whatever), and (possibly more importantly) you
can run some tests for correctness (has Intel cancelled its 50-core
chip plan, or is it still moving forward with it :wink: ).

Hi,

I will redesign my “thread use”:

  • Server: 1 recv thread (including incoming-client accept), 1 send thread
  • Client: 1 thread for send & recv

This solution should be OK for single-/dual-core clients, because there is more for the CPU to do than just networking: input handling, client prediction and maybe also rendering (in the non-OpenGL case). And this solution might also be OK for multi-core servers, because they have to do even more than just networking: pathfinding, collision detection/handling, AI behaviour and much more. In case of these tasks I think it would be wise to use the CPU’s capacity by handling the received events with e.g. a thread loop. Also, a server can run on a desktop machine (e.g. Age of Empires, Diablo and many more). Making the size of the thread pool match the number of CPU cores might be a solution (if I can find out the actual number of CPU cores ^^).

Kind regards
Glocke :slight_smile:
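A minimal sketch of sizing such a worker pool from the detected core count, assuming SDL 2’s SDL_GetCPUCount() and C++11 threads; the worker body is a placeholder:

```cpp
#include <vector>
#include <thread>
#include "SDL.h"

void runWorkerPool() {
    int cores = SDL_GetCPUCount();        // SDL guarantees at least 1
    std::vector<std::thread> pool;
    for (int i = 0; i < cores; ++i) {
        // Placeholder worker: a real one would pop events from the main
        // queue and handle them (pathfinding, collision, AI, ...).
        pool.emplace_back([] { /* event-handling loop */ });
    }
    for (std::thread &t : pool)
        t.join();
}
```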

Glocke wrote:

I will redesign my “thread-use”:

Server: 1 recv thread (including incoming-client accept), 1 send thread

Client: 1 thread for send-&-recv

In the case of a multi-core server without using system-specific I/O routines (kqueue, epoll, IOCP, etc.), this is probably the simplest setup, if a little bit conservative.
I should note that on most modern systems, send() with a packet <1 KB isn’t actually sent immediately even if a non-blocking socket is used. Sending multiple packets together in “mega” packets is the optimal solution. If you absolutely need to send packets <1 KB immediately (say, for an FPS), you can just pad the remaining bytes with garbage.

For the client, since this is seemingly just supposed to be a network framework, you should not create any threads at all. Often, the cost of programming (and debugging) time spent ensuring proper synchronization is not worth the marginal improvement in performance; and over-synchronization (wrapping way more code in locks than needs to be) may even impair performance.
------------------------
Nate Fries

An edit to the above: apparently you can get around that with the TCP_NODELAY socket option, and SDL_net enables it by default. You learn something new every day!
Still, Nagling is done for a reason, and I happen to think this is bad behavior on SDL_net’s part (SDL_net should provide a function or an extra argument to enable this instead).
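For comparison, a minimal sketch of toggling Nagle’s algorithm on a raw BSD socket (POSIX headers; sock is assumed to be a connected TCP socket descriptor):

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Returns 0 on success; enable=true disables Nagle (TCP_NODELAY on).
int setNoDelay(int sock, bool enable) {
    int flag = enable ? 1 : 0;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}
```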

However, I’d still highly recommend the mega-packet. One thing I’ve done before is send vital packets as the first packets of the mega-packet.
------------------------
Nate Fries
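A rough sketch of that mega-packet idea: append small event payloads into one buffer and flush them with a single send call. The 1 KB buffer size is illustrative; Uint8 and TCPsocket come from SDL_net:

```cpp
#include <cstring>
#include "SDL_net.h"

class MegaPacket {
    Uint8 buf[1024];   // illustrative flush threshold
    int used;
public:
    MegaPacket() : used(0) {}

    // Returns false when the payload would not fit; flush first, then retry.
    bool append(const Uint8 *data, int len) {
        if (used + len > (int)sizeof(buf))
            return false;
        std::memcpy(buf + used, data, len);
        used += len;
        return true;
    }

    // Transmit everything accumulated so far as one TCP write.
    void flush(TCPsocket sock) {
        if (used > 0) {
            SDLNet_TCP_Send(sock, buf, used);
            used = 0;
        }
    }
};
```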

Hey guys!

I changed some parts in the past, e.g. using the C++11 implementations of mutex and thread directly and redesigning the client and server classes to reach clean code. Currently a migration from SDL 1.2 to 2.0 is planned. But AFAIK SDL_net hasn’t been ported to 2.0 yet, so I’m thinking about using Boost Asio instead of SDL_net (for client and server).

Also I removed binary structure sending and replaced it with serialization. I implemented a small JSON class for this purpose.

You can find the code at https://github.com/cgloeckner/networking – as you might already know. Feel free to comment on my code. I’d like to discuss even small (but possibly critical) parts of my implementation to reach a maximum of efficiency. Testers and interested developers (e.g. using the framework in their own projects) are also welcome.

I’m looking forward to all kinds of criticism.

Kind regards
Glocke

Hi :slight_smile:

Some days ago I rewrote my networking framework to make it less dependent on the actual socket library (e.g. SDL_net). So I made a design using some interfaces and abstract classes. The result can be found on GitHub:

https://github.com/cgloeckner/netLib

As I mentioned, it doesn’t depend on a specific socket library. This is achieved by offering interfaces which you can implement when writing a socket wrapper for the framework. An example is also on GitHub – of course for SDL_net’s TCPsocket [Wink]

The old repository (if you even knew about it until now) is no longer maintained. So forget about it (or don’t start mentioning it anyway^^). I’ll maintain this new repository in future.

Feel free to explore the code and highlight possible problems if you like to – that would be great! There is also an example explaining the basic usage of the framework with a specific socket implementation.

Kind regards
Glocke


Whenever I hear “network programming” and “event based”, I have to think
of enet [1]; have you looked into that before?

[1] http://enet.bespin.org/

No, I didn’t yet. But it seems to be a UDP-based framework. Because I want to use TCP-based connections (or at least: I don’t want to “forbid” myself from using them) it does not suit my requirements. IMHO “[…] optional reliable […]” (see the ENet docs) seems not to be such a good idea when using UDP, because TCP satisfies this requirement natively. All in all, ENet seems to be a great, tiny framework for solving typical UDP stuff.

About these events… I’m not sure yet what the deal is with ENet and handling events, because I didn’t read the entire documentation. What I’d like to solve is triggering functions/methods via the network by sending packets with an ID (determining the function/method) and some data (used as parameters). The reason why I’m talking about “events” is related to GUI events (calling some callbacks with some data). That’s all I tried to realize :slight_smile:

So sorry for this possible misunderstanding :slight_smile:
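A tiny sketch of that ID-to-callback dispatch; all names here are illustrative, not from netLib’s actual API:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

typedef std::vector<uint8_t> Payload;                 // raw parameter bytes
typedef std::function<void(const Payload&)> Handler;  // bound callback

class EventManager {
    std::map<uint16_t, Handler> handlers;
public:
    void bind(uint16_t id, Handler h) { handlers[id] = h; }

    // Called when a packet with this ID and payload arrives.
    void dispatch(uint16_t id, const Payload &data) {
        auto it = handlers.find(id);
        if (it != handlers.end())
            it->second(data);
    }
};
```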


Oh, I see. Basically something like dbus, but over a network socket?

Unfortunately I am not sure about the actual idea of dbus (I know what it’s for, but I don’t see the link to my problem here).
If you view the example code in my GitHub repository (direct link to the example code: https://github.com/cgloeckner/netLib/blob/master/example/main2.cpp) you can find an exact example of what I am talking about :slight_smile:

There are two methods, onGotNumberSeven and onGotNumberFour, which are linked to the event manager structure (see the client/server constructors in the same file).

I hope it’ll help you understand my intention :slight_smile:


Well, your library seems to implement what is usually called (heh)
“Remote Procedure Calls” (RPC), something that dbus does as well,
although dbus is more of an IPC solution, mostly over Unix sockets,
i.e. for inter-process communication on the same host machine.

By the way, Qt’s signal/slot system [1] functions similarly to your netLib
in that it assigns each registered slot something like a method ID, and
calls based on that. There is also a Qxt extension [2] that allows the
emission of signals over network protocols (I haven’t used it before though).

[1] http://qt-project.org/doc/qt-4.8/signalsandslots.html
[2] http://libqxt.bitbucket.org/doc/tip/qxtrpcpeer.html

Jonas Kulla wrote:

Well, your library seems to implement what is usually called (heh) “Remote Procedure Calls” (RPC)

Oh right, that’s it :slight_smile: I already knew some JSON-based RPC stuff, and that idea inspired me. Some versions ago I was working with my own JSON implementation, but I cut that out in favor of binary data exchange.

By the way: The example is now working with SDL2 [Wink] Feel free to test it :slight_smile:

Is there any way to change the topic’s title to “[Framework] Remote Procedure Call”?

Kind regards
Glocke