SDL_Net and non-blocking I/O

It seems to me that there is no way to either call SDLNet_TCP_Send() in a non-blocking way, or check if a call to SDLNet_TCP_Send() would block.

This means that there is no way that I can see to write a program that uses SDL_Net without either using threads, or living with your program occasionally blocking indefinitely in a send.

Am I missing something? Is there an alternative? It seems to me that it would be fairly trivial to write an equivalent to SDLNet_CheckSockets() to see if any sockets are ready to be written to.

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)

  1. In win32 if I run fullscreen, the windows cursor disappears,
    while windowed apps keep the cursor. Does this act the same in
    other OSes? Or should I be using SDL_ShowCursor() anyway?

  2. If I have a bitmap, should I use SDL_CreateCursor() and
    SDL_FreeCursor()? Or is it okay to just blit a sprite where the
    cursor is?
    Do you Yahoo!?
    New and Improved Yahoo! Mail - Send 10MB messages!
    http://promotions.yahoo.com/new_mail

At 12:30 PM 6/25/2004, you wrote:

It seems to me that there is no way to either call SDLNet_TCP_Send() in
a non-blocking way, or check if a call to SDLNet_TCP_Send() would block.
This means that there is no way that I can see to write a program that
uses SDL_Net without either using threads, or living with your program
occasionally blocking indefinitely in a send.
Am I missing something? Is there an alternative? It seems to me that it
would be fairly trivial to write an equivalent to SDLNet_CheckSockets()
to see if any sockets are ready to be written to.

Someone wrote a non-blocking patch for sdl_net. I’m not sure where the
best place to find it is.

Before going on, I’d just like to note that my
knowledge comes from Windows and POSIX TCP/IP
networking, and I have never used SDL_Net. (And for
that matter, I am curious how much runtime overhead is
actually incurred using SDL_Net; it seems possible to
me that there is none at all.)

— David White wrote:

It seems to me that there is no way to either call
SDLNet_TCP_Send() in a non-blocking way, or check if
a call to SDLNet_TCP_Send() would block.

If you’re sending a significant quantity of data such
that blocking becomes a problem, it’s fairly trivial
to create an asynchronous data transmission layer that
minimizes the impact on your application by breaking
the data into smaller pieces and sending them one at a
time. Threading is probably a more viable option,
though.

This means that there is no way that I can see to
write a program that uses SDL_Net without either
using threads, or living with your program
occasionally blocking indefinitely in a send.

Like I said above, an asynchronous transfer layer is
pretty trivial to write.
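For what it’s worth, a bare-bones version of such a layer is mostly bookkeeping. The sketch below is illustrative only (the names, capacities, and chunk size are invented, not SDL_Net API): each connection keeps an outgoing buffer, each main-loop pass hands out at most one small chunk, and the caller reports back how much the send actually accepted.

```c
#include <string.h>
#include <stddef.h>

#define QUEUE_MAX  4096   /* invented capacity; tune for your app */
#define CHUNK_SIZE 256    /* largest piece handed to send() per pass */

typedef struct {
    unsigned char buf[QUEUE_MAX];
    size_t len;                    /* bytes still waiting to go out */
} SendQueue;

/* Append outgoing data; returns 0 on success, -1 if the queue is full. */
static int queue_push(SendQueue *q, const void *data, size_t n)
{
    if (q->len + n > QUEUE_MAX)
        return -1;
    memcpy(q->buf + q->len, data, n);
    q->len += n;
    return 0;
}

/* Expose the next chunk to hand to the send call (at most CHUNK_SIZE). */
static size_t queue_next_chunk(const SendQueue *q, const unsigned char **out)
{
    *out = q->buf;
    return q->len < CHUNK_SIZE ? q->len : CHUNK_SIZE;
}

/* After the send call reports how many bytes it took, drop them. */
static void queue_consume(SendQueue *q, size_t sent)
{
    memmove(q->buf, q->buf + sent, q->len - sent);
    q->len -= sent;
}
```

Splitting "get next chunk" from "consume what was sent" lets the same queue work with any send call that can report a partial result.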

Am I missing something? Is there an alternative? It
seems to me that it would be fairly trivial to write
an equivalent to SDLNet_CheckSockets() to see if any
sockets are ready to be written to.

I’m not sure how heavily encapsulated SDL_Net is, but
might I suggest using the select() system call? How
hard is it to retrieve a socket descriptor in SDL_Net?

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl




jake b wrote:
| 1) In win32 if I run fullscreen, the windows cursor disappears,
| while windowed apps keep the cursor. Does this act the same in
| other OSes? Or should I be using SDL_ShowCursor() anyway?
|
| 2) If I have a bitmap, should I use SDL_CreateCursor() and
| SDL_FreeCursor()? Or is it okay to just blit a sprite where the
| cursor is?

  1. I have never had this trouble on any OS. Have you actually tried
    SDL_ShowCursor()? Does that work?
  2. You could just blit a sprite, but it would probably be better to use
    the SDL cursor functions.
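For completeness, here is roughly what the SDL 1.2 cursor-function route looks like. This is an illustrative, untested sketch (the 8x8 shape is arbitrary); it requires an initialized video mode, and the cursor width must be a multiple of 8.

```c
#include "SDL.h"

/* data and mask are 1 bit per pixel: data=1,mask=1 -> black;
 * data=0,mask=1 -> white; mask=0 -> transparent. */
static SDL_Cursor *make_arrow_cursor(void)
{
    Uint8 data[8] = { 0x80, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC, 0xFE, 0xFF };
    Uint8 mask[8] = { 0x80, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC, 0xFE, 0xFF };
    return SDL_CreateCursor(data, mask, 8, 8, 0, 0);   /* hot spot 0,0 */
}
```

You would then call SDL_SetCursor() on the result and SDL_FreeCursor() it at shutdown; or hide the system cursor entirely with SDL_ShowCursor(SDL_DISABLE) and blit your own sprite, as discussed above.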

Chris E.

i do #2, works great for linux/windows

i noticed the same thing as you’re saying in #1 on windows, never cared
enough to do anything about it or test it for linux, but at least you can
know you’re not alone in that :P

----- Original Message -----

From: ninmonkeys@yahoo.com (jake b)
To:
Sent: Friday, June 25, 2004 2:57 PM
Subject: [SDL] cursor questions

  1. In win32 if I run fullscreen, the windows cursor disappears,
    while windowed apps keep the cursor. Does this act the same in
    other OSes? Or should I be using SDL_ShowCursor() anyway?

  2. If I have a bitmap, should I use SDL_CreateCursor() and
    SDL_FreeCursor()? Or is it okay to just blit a sprite where the
    cursor is?





  1. In win32 if I run fullscreen, the windows cursor disappears,
    while windowed apps keep the cursor. Does this act the same in
    other OSes? Or should I be using SDL_ShowCursor() anyway?

— Alan Wolfe wrote:

i noticed the same thing as you’re saying in #1 on
windows, never cared

Many systems don’t actually give you full video
hardware privileges in windowed mode, otherwise you’d
be rendering all over other screens. On most systems
this means your rendering occurs on a separate
surface which eventually gets included in the final
screen image by the window composition element of your
system. The mouse cursor is drawn by the window
composition system.

Whenever you are in full screen, the window
composition system no longer needs to do anything, so
it relinquishes full video control to your
application. During this time, the window composition
system usually ceases to draw any sort of cursor for
you.

Hope that helps.

----- Original Message -----

From: “jake b”



If you’re sending a significant quantity of data such
that blocking becomes a problem, it’s fairly trivial
to create an asynchronous data transmission layer that
minimizes the impact on your application by breaking
the data into smaller pieces and sending them one at a
time. Threading is probably a more viable option,
though.

Well, the application (the Wesnoth server) doesn’t usually send large
amounts of data. The problem is that occasionally, I believe when
sending to very slow clients, the ‘send’ call blocks for a long period
of time.

As far as I can see, this is an intrinsic problem that could be reduced
by adding another layer, but not removed.

I would prefer not to write a multi-threaded program, since threads have
a way of being…complicated. It’d likely be easier to just drop
SDL_Net and use another networking library or the underlying system
calls than to use threads.

I’m not sure how heavily encapsulated SDL_Net is, but
might I suggest using the select() system call? How
hard is it to retrieve a socket descriptor in SDL_Net?

I don’t believe there is any way of getting the socket descriptor in
SDL_Net [1]

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)

[1] Well, not besides circumventing the system the SDL libraries use to
make their data structures opaque.

— David White wrote:

Well, the application (the Wesnoth server) doesn’t
usually send large
amounts of data. The problem is that occasionally,
I believe when
sending to very slow clients, the ‘send’ call blocks
for a long period
of time.

What kind of network is this on? To my knowledge, a
well-switched network should buffer any trivial amount
of data queued for delivery to a client machine. I’m
kind of confused about what you’re saying. Does your
program wait for some kind of receipt?

Donny Viszneki wrote:

— David White wrote:

Well, the application (the Wesnoth server) doesn’t
usually send large
amounts of data. The problem is that occasionally,
I believe when
sending to very slow clients, the ‘send’ call blocks
for a long period
of time.

What kind of network is this on? To my knowledge, a
well-switched network should buffer any trivial amount
of data queued for delivery to a client machine. I’m
kind of confused about what you’re saying. Does your
program wait for some kind of receipt?

On a TCP/IP network. Usually the Internet. Could be a LAN, or any type
of TCP/IP network though.

The point is, regardless of what ‘most’ implementations will do, the
behavior of send(2), which is called by SDL_Net on most implementations,
is to block for an indefinite time if the socket can’t be written to.
Sometimes the program does indeed block for a while inside send(2). One
would think that SDL_Net should allow a single-threaded program to be
written that won’t block for indefinite periods of time.

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)

The point is, regardless of what ‘most’ implementations will do, the
behavior of send(2), which is called by SDL_Net on most implementations,
is to block for an indefinite time if the socket can’t be written to.
Sometimes the program does indeed block for a while inside send(2). One
would think that SDL_Net should allow a single-threaded program to be
written that won’t block for indefinite periods of time.

Hi… someone already wrote a patch for async sdl_net. It’s all a matter of
finding it.

Donny Viszneki wrote:

— David White wrote:

Well, the application (the Wesnoth server) doesn’t
usually send large
amounts of data. The problem is that occasionally,
I believe when
sending to very slow clients, the ‘send’ call blocks
for a long period
of time.

What kind of network is this on? To my knowledge, a
well-switched network should buffer any trivial amount
of data queued for delivery to a client machine. I’m
kind of confused about what you’re saying. Does your
program wait for some kind of receipt?

On a TCP/IP network. Usually the Internet. Could be a LAN, or any type
of TCP/IP network though.

The point is, regardless of what ‘most’ implementations will do, the
behavior of send(2), which is called by SDL_Net on most implementations,
is to block for an indefinite time if the socket can’t be written to.
Sometimes the program does indeed block for a while inside send(2). One
would think that SDL_Net should allow a single-threaded program to be
written that won’t block for indefinite periods of time.

You have mentioned that the blocking occurs because of a slow client.
Have you verified that? Is it possible that the slowdown is the result
of buffer exhaustion in the OS, or network traffic at another location in
the network? Have you seen this problem occur on a LAN? Or only on the
Internet?

If you had a non-blocking send() how would you use it? I assume you
would look for the EWOULDBLOCK/EAGAIN errors and queue the output
until you finally get to send the message? If so, how long do you hold
it in the queue and how much output are you willing to queue before you
declare the client too slow to play and disconnect it?

	Bob Pendleton

On Sun, 2004-06-27 at 21:57, David White wrote:

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)




Bob Pendleton wrote:

You have mentioned that the blocking occurs because of a slow client.
Have you verified that? Is it possible that the slowdown is the result
of buffer exhaustion in the OS, or network traffic at another location in
the network? Have you seen this problem occur on a LAN? Or only on the
Internet?

I really don’t know the exact cause of the blocking. But I don’t see why
exactly it’d be necessary to work this out. The documentation for send()
says that it can block indefinitely. I don’t see how anyone can see it
as acceptable for their program to have the documented potential to
block indefinitely, whatever the cause.

If you had a non-blocking send()

Actually I’d prefer to have access to a select() wrapper that allows me
to determine if sockets are ready to write to without blocking.
Non-blocking send() would work too though.
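On POSIX systems the underlying mechanism for the non-blocking variant is small: mark the descriptor with O_NONBLOCK, after which write()/send() fails with EAGAIN/EWOULDBLOCK instead of blocking. This is a sketch of what such a wrapper would do, assuming access to the raw descriptor, which SDL_Net does not expose:

```c
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

/* Put the descriptor into non-blocking mode. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Attempt a write: returns bytes written, 0 if the call would have
 * blocked (queue the data and retry later), or -1 on a real error. */
static ssize_t try_write(int fd, const void *buf, size_t n)
{
    ssize_t rc = write(fd, buf, n);
    if (rc < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;
    return rc;
}
```

On Windows the equivalent would be ioctlsocket() with FIONBIO, but the caller-side logic is the same.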

how would you use it? I assume you
would look for the EWOULDBLOCK/EAGAIN errors and queue the output
until you finally get to send the message? If so, how long do you hold
it in the queue and how much output are you willing to queue before you
declare the client too slow to play and disconnect it?

You’d set a limit, and disconnect the client once the limit is reached.
There’d be various methods of setting a limit, the most obvious being an
amount chosen after testing, and probably configurable using command
line parameters.

Even having no limit, and queuing data indefinitely until the client can
take it would be much better than having the potential for a connection
to block indefinitely and lock up the server.
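The limit-then-disconnect policy described above fits in a few lines. Everything here is hypothetical application code (the threshold, the Client struct), shown only to make the bookkeeping concrete:

```c
#include <stddef.h>

#define MAX_PENDING (64 * 1024)   /* limit chosen after testing, or set
                                     via a command-line parameter */

typedef struct {
    size_t pending;               /* bytes queued but not yet sent */
    int    connected;             /* 0 once marked for disconnection */
} Client;

/* Account for newly queued data; drop the client once it falls too far
 * behind, instead of letting one slow peer stall the whole server. */
static void client_queue_bytes(Client *c, size_t n)
{
    c->pending += n;
    if (c->pending > MAX_PENDING)
        c->connected = 0;
}
```

A real server would also decrement `pending` as sends complete and close the socket for clients marked disconnected.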

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)

Bob Pendleton wrote:

You have mentioned that the blocking occurs because of a slow client.
Have you verified that? Is it possible that the slowdown is the result
of buffer exhaustion in the OS, or network traffic at another location in
the network? Have you seen this problem occur on a LAN? Or only on the
Internet?

I really don’t know the exact cause of the blocking. But I don’t see why
exactly it’d be necessary to work this out. The documentation for send()
says that it can block indefinitely. I don’t see how anyone can see it
as acceptable for their program to have the documented potential to
block indefinitely, whatever the cause.

If you had a non-blocking send()

Actually I’d prefer to have access to a select() wrapper that allows me
to determine if sockets are ready to write to without blocking.
Non-blocking send() would work too though.

I was under the impression that it was not, in general, possible to
implement a select() that could guarantee that a send will not block.
The conditions that select() sees at the time you call it are not
guaranteed to be the conditions that exist when you call send(). Nor
can the select() call know how much data you are going to send() or what
the state of the network will be at the time of the send().

It sounds like you really need a non-blocking send(). You could
implement this with threads, but you would need to use a thread pool and
allocate a thread per pending send() to ensure that the server never
blocks. Of course, if it was a network problem, then in some cases
the library would have to block because it couldn’t allocate enough
threads to handle all the pending send() calls.

I admit that this problem is something I punted in my net2 library. My
testing never showed send() calls blocking for long periods of time so I
didn’t worry about it.

Thanks for the information, I may have to rewrite parts of my library.

	Bob Pendleton

On Thu, 2004-07-01 at 10:25, David White wrote:

how would you use it? I assume you
would look for the EWOULDBLOCK/EAGAIN errors and queue the output
until you finally get to send the message? If so, how long do you hold
it in the queue and how much output are you willing to queue before you
declare the client too slow to play and disconnect it?

You’d set a limit, and disconnect the client once the limit is reached.
There’d be various methods of setting a limit, the most obvious being an
amount chosen after testing, and probably configurable using command
line parameters.

Even having no limit, and queuing data indefinitely until the client can
take it would be much better than having the potential for a connection
to block indefinitely and lock up the server.

David White
Lead Developer
Battle for Wesnoth (http://www.wesnoth.org)




I was under the impression that it was not, in general, possible to
implement a select() that could guarantee that a send will not block.
The conditions that select() sees at the time you call it are not
guaranteed to be the conditions that exist when you call send(). Nor
can the select() call know how much data you are going to send() or what
the state of the network will be at the time of the send().

send() returns the number of bytes it was able to send. And select()
only checks that the socket’s buffer has enough room. It’s another story
when the socket’s buffer gets sent to the actual network.

--
Petri Latvala
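That partial-send behavior is exactly why a "send everything" wrapper has to loop: each call may accept only part of the buffer. A sketch in plain POSIX terms (SDLNet_TCP_Send behaves like this loop, returning only once the whole buffer has been handed to the OS or an error occurs, which is why it can block):

```c
#include <unistd.h>

/* Keep calling write() until the whole buffer is delivered to the OS,
 * or an error occurs. Returns n on success, -1 on error. */
static long write_all(int fd, const void *buf, size_t n)
{
    const char *p = buf;
    size_t left = n;

    while (left > 0) {
        ssize_t rc = write(fd, p, left);
        if (rc <= 0)
            return -1;             /* real code might retry on EINTR */
        p += rc;
        left -= (size_t)rc;
    }
    return (long)n;
}
```

A non-blocking design would instead stop at the first short write and queue the remainder, as discussed earlier in the thread.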

  1. In win32 if I run fullscreen, the windows cursor disappears,
    while windowed apps keep the cursor. Does this act the same in
    other OSes? Or should I be using SDL_ShowCursor() anyway?

Yes, this happens if you’re writing directly to video memory:
http://www.libsdl.org/faq.php?action=listentries&category=2#83

It’s okay to create a sprite and blit it where the cursor is.

See ya!
-Sam Lantinga, Software Engineer, Blizzard Entertainment