SDLNet_TCP_Send blocking

2009/3/26 Bob Pendleton :

Also, the implication that I think that Moore’s law allows you to write
inefficient code is just plain insulting. Moore’s law changes computing so
that what is efficient at one turn of the crank is no longer efficient at
the next turn of the crank. Because current chip designs have pretty well
wrung all the performance they can from superscalar processors, Moore’s law
gives us more cores at each turn of the crank instead of deeper pipelines
and hyperthreading. That means that threads are now the efficient way to
get things done.

I, for one, welcome our shared cache overlords!

--
http://pphaneuf.livejournal.com/

2009/3/26 Bob Pendleton :

When the game is over, why would you bother to join the threads? When you
are done you are done. Just die.

What can I say, I’m a neat freak sometimes.

--
http://pphaneuf.livejournal.com/

I cannot find any way around this problem. Even if I were to use threads,
this would result in a thread blocking for ages, which isn’t really a good
idea.

Why not?

When you want to quit the game, a send() is pending, and you join your
thread, that will finish when exactly? Possibly once the TCP
connection times out, after a few minutes. Eurgh.
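(A standard way out, sketched below under the assumption that you have BSD sockets underneath and can reach the raw descriptor: shutdown() the socket from the main thread, and on most platforms the blocked send() in the worker returns with an error instead of waiting out the TCP timeout, so the join completes promptly. The function name is illustrative.)

/* Sketch, assuming plain BSD sockets and access to the raw fd. */
#include <sys/socket.h>

void unblock_sender(int fd) {
    /* Blocked send()/recv() on fd in other threads now fail
       instead of waiting for the TCP connection to time out. */
    shutdown(fd, SHUT_RDWR);
}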

When the game is over, why would you bother to join the threads?
When you are done you are done. Just die.

There are other cases where you might want to join it. (Perhaps rare
cases, depending on the layout of the rest of the game/application, but
whenever I hit such a problem, it seems more like a workaround than a good
solution to just forget about that thread and create a new one.)

On 26.03.2009 at 21:17, Bob Pendleton wrote:

On Thu, Mar 26, 2009 at 2:55 PM, Pierre Phaneuf wrote:
On Thu, Mar 26, 2009 at 3:33 PM, Donny Viszneki <donny.viszneki at gmail.com> wrote:

Just last week I went through the exercise of looking up the original definition of Moore’s law and then used published
data on the number of transistors per processor chip in Intel processors covering the time period from 1971 to 2008
and I can tell you that Moore’s law is still very healthy and still cranking along. (I’m writing an article on the emergence
of the giga-age of computing.) The number of transistors on a processor chip has increased at the average rate of
sqrt(2) per year.

The fact that you think that Moore’s law has something to do with cycle times indicates to me that you do not understand
Moore’s law. That doesn’t surprise me, I had it wrong too and I have written other papers on the subject. Moore’s law is
only about the number of transistors on chip, not about the speed of those transistors.

Yes, I’m well aware of that. I was using it in the commonly-understood sense, of increasing processor speeds and RAM
availability. The stuff that the article calls “the free lunch”. The article covers exactly what you mentioned, in fact: that
transistor count has increased even though performance pretty much leveled off after 2003.

Also, the implication that I think that Moore’s law allows you to write inefficient code is just plain insulting. Moore’s law
changes computing so that what is efficient at one turn of the crank is no longer efficient at the next turn of the crank.

That’s a very odd interpretation, and one which I’ve never heard anywhere else. Possibly because it’s just plain not true
and never has been. The availability of new computing power does not take something that used to work efficiently and
make it perform poorly now; if it does, that means someone at the CPU manufacturer screwed up badly.

Unfortunately, that effect often works in reverse. Something that used to be horribly slow in the past is now faster
because there’s more hardware to throw at it, and suddenly everyone thinks they can get away with doing it that way.
Unfortunately, much like college professors who all tend to assign an entire weekend’s worth of homework to their students
when Friday comes around, there’s an unfortunate tendency for programmers to overestimate the benefit they get from
their new hardware and also to arrogantly assume that they get it all to themselves. When you’re running on a multitasking
system, (which is pretty much all of them these days), you’ll often see the OS guys do this, and the driver guys, and the
people who write background processes/services, and the ones who write the applications that the user cares about, and
before you know it, you’ve got Windows Vista.

This is the primary reason why today’s computers, despite having thousands of times more raw hardware available,
actually execute fundamental real-world tasks such as booting up and launching programs much, much slower
than computers of twenty years ago did. Problem is, people are still writing that way, even though that kind of thinking
hasn’t been valid for at least six years now. (Anyone want a nice, hot cup of Java?)

So if you were using “Moore’s Law” in the technical sense instead of the colloquial sense, that’s good. If you’re one of the
people who gets it, I’m very glad you’re here. There’s not too many people left that do these days. I’m just trying to do
what little I can to stem the tide.

From: Bob Pendleton

Subject: Re: [SDL] SDLNet_TCP_Send blocking

Donny Viszneki wrote:

There’s a patch here in April 2004 for version 1.2.6 to add nonblocking, but
it doesn’t seem to be in version 1.2.7.
http://www.devolution.com/pipermail/sdl/2004-April/061335.html

Then use that! You can use your own SDLNet libs, you don’t need to
use the official ones.

But that would require me to distribute a forked version of SDLNet with
my project, and get the compiler to statically link it. I really don’t
want to go there.

Gregory Smith wrote:

We assume SDL_Net’s internal structures haven’t changed, and go in
behind its back and make the sockets non-blocking using
platform-specific calls :C

I appear to have adopted this approach, based on some existing game code
I found on the net. It’s a horrible hack, and only works in Linux. Do
you have a less hacky, more portable version I can look at please?

I find it somewhat strange that the majority response seems to be “we
don’t use SDLNet”. Oh well.

Thanks for the replies everyone.
Simon

Gregory Smith wrote:

We assume SDL_Net’s internal structures haven’t changed, and go in
behind its back and make the sockets non-blocking using
platform-specific calls :C

I appear to have adopted this approach, based on some existing game code I
found on the net. It’s a horrible hack, and only works in Linux. Do you have
a less hacky, more portable version I can look at please?

Oh, goodness, no. I can almost guarantee it is no less hacky or more
portable than yours. Thus the emoticon.

void MakeTCPsocketNonBlocking(TCPsocket *socket) {
// SET NONBLOCKING MODE
// XXX: this depends on intimate carnal knowledge of the SDL_net struct _UDPsocket
// if it changes that structure, we are hosed.

int fd = ((int *) (*socket))[1];
#if defined(WIN32)
u_long val = 1;
ioctlsocket(fd, FIONBIO, &val);
#elif defined(MACOS)
OTSetNonBlocking((TProvider *) fd);
#else
#ifdef __MWERKS__
/* out of /usr/include/sys/fcntl.h - mwerks doesn’t have these defined */
#define F_SETFL 4
#define O_NONBLOCK 0x0004
#endif

fcntl(fd, F_SETFL, O_NONBLOCK);

#endif
}
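(For what it’s worth, a minimal usage sketch of the hack above; SDLNet_ResolveHost and SDLNet_TCP_Open are real SDL_net calls, the host and port are made up:)

#include "SDL_net.h"

/* Hypothetical usage: open a TCP connection with SDL_net, then flip
   the underlying descriptor to non-blocking with the hack above. */
void open_nonblocking_connection(void) {
    IPaddress ip;
    if (SDLNet_ResolveHost(&ip, "example.com", 9999) == 0) {
        TCPsocket sock = SDLNet_TCP_Open(&ip);
        if (sock) {
            MakeTCPsocketNonBlocking(&sock);
            /* sends on sock may now return short counts instead of
               blocking; the caller has to handle partial sends */
        }
    }
}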

I find it somewhat strange that the majority response seems to be “we don’t
use SDLNet”. Oh well.

I would like to get away from using it, myself. Alternatives are:

  • boost, the non-template parts of which are an absolute nightmare to get
    working with the mingw cross compiler

  • enet, which looks nice to the extent I’ve investigated it, which isn’t
    much (we are the client and the server, so TCP is not essential)

  • just write a non-blocking SDL_Net workalike and include it in the
    source. It’s really only one file, after all, and half of that is broken
    OpenTransport code.

Gregory

On Thu, 26 Mar 2009, Simon Williams wrote:

I was thinking that if they were using TCP, they were doing something
like downloading game maps or something so they could join the actual
game. In which case why not let a thread block while you ask the
kernel to send the entire map? Seems like the simplest way to code it
to me. But you said something earlier about possibly not being able to
kill the thread (well, you said you join the thread and then wait
forever while the TCP connection times out.) OP is concerned that
having one single thread to do ALL networking is incompatible with
blocking calls to send()/write() and he’s right, but only if he must
use a single thread for all networking.

Is reliance on push-back as a signal to back off really a viable
alternative to using UDP and just letting the network drop packets
when congestion impedes delivery? I’d love to see some literature on
push-back!

On Thu, Mar 26, 2009 at 3:55 PM, Pierre Phaneuf wrote:

On Thu, Mar 26, 2009 at 3:33 PM, Donny Viszneki <@Donny_Viszneki> wrote:

I would say sending large amounts of data is a definite exception to
the “threads are unnecessary complication to game networking” rule.

I’d say that with game networking, you often want to do proper flow
control to keep latencies low. What I mean by this is that you
probably want to use the sockets in non-blocking mode anyway, so that
you know when there’s “push back” and that the network link can’t keep
up, and write less often, so that you avoid queueing a lot (if you
consider it sent, but it’s actually sitting in a queue somewhere, it’s
not actually sent, and that’s no good for latency (lag!)). One might
actually want to shrink the kernel socket buffers, in some
situations…
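(To make the push-back idea concrete, a minimal sketch with plain BSD sockets rather than SDL_net; the function names are illustrative:)

#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Put the socket in non-blocking mode once, at setup. */
void make_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

/* Then a refused send() is the push-back signal. */
ssize_t send_some(int fd, const void *buf, size_t len) {
    ssize_t n = send(fd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0; /* link can't keep up: back off, write less often */
    return n;     /* may be a partial write; caller keeps the rest */
}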


http://codebad.com/

Heyya, I use SDL_net. It works great for what it does. I do wish it were more robust. It took me some real time to figure out how to engineer its use so that hackers won’t have my ass on the release date of my SDL product (I will yell and scream about it until the SWAT team shows up, so you’ll know about my SDL product). It should take a hacker over a day to hack me now! lol.

---- Gregory Smith wrote:
On Thu, 26 Mar 2009, Simon Williams wrote:

I find it somewhat strange that the majority response seems to be “we don’t
use SDLNet”. Oh well.

I would like to get away from using it, myself. Alternatives are:

  • boost, the non-template parts of which are an absolute nightmare to get
    working with the mingw cross compiler

  • enet, which looks nice to the extent I’ve investigated it, which isn’t
    much (we are the client and the server, so TCP is not essential)

  • just write a non-blocking SDL_Net workalike and include it in the
    source. It’s really only one file, after all, and half of that is broken
    OpenTransport code.

Gregory



Are you saying there’s a critical security vulnerability in SDL_net?

On Thu, Mar 26, 2009 at 7:23 PM, wrote:

Heyya, I use SDL_net. It works great for what it does. I do wish it
were more robust. It took me some real time to figure out how to
engineer its use so that hackers won’t have my ass on the release date
of my SDL product (I will yell and scream about it until the SWAT team
shows up, so you’ll know about my SDL product). It should take a hacker
over a day to hack me now! lol.


http://codebad.com/

Is reliance on push-back as a signal to back off really a viable
alternative to using UDP and just letting the network drop packets
when congestion impedes delivery? I’d love to see some literature on
push-back!

With UDP, it’s a bit more limited, because it doesn’t reflect what’s
happening on the network: when send() blocks, it just means that
you’re overwhelming the kernel (which probably means you’re not going
to do too well on the network, though).

With TCP, there’s ACKing and the window, so it’s a very significant
signal (it’ll let you know if anything is overwhelmed, be it the
kernel, the network, or even the program at the other end!). It’s not
an alternative to UDP, but that’s for other reasons (for example, one
of the latency-improving strategies with UDP is that if you miss a
packet you can do without, you can just keep going, while TCP will
"fix" out-of-order delivery by not giving you a newer packet until the
missed packet has been received correctly). If you want to read more,
what you’re looking for is “flow control”:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Flow_control

It’s also why I’m annoyed by the current design of the SDL event
system, where it tries to suck as many events as possible out of the
lower level, because that causes those lower levels to think that
you’re actually responsive and on the ball, while in reality the
application might be swamped, the SDL event queue full, and furiously
throwing out events just to keep up… I’m not informed about all the
different platforms, but at least with X11, this flow control does
make a difference.

(speaking of SDL event system, I’m still working on my patch, it’s
just that I’ve been loaded with work this quarter, so it’s been put on
hold for a bit)

On Thu, Mar 26, 2009 at 6:19 PM, Donny Viszneki <donny.viszneki at gmail.com> wrote:


http://pphaneuf.livejournal.com/

Just last week I went through the exercise of looking up the original
definition of Moore’s law and then used published
data on the number of transistors per processor chip in Intel processors
covering the time period from 1971 to 2008
and I can tell you that Moore’s law is still very healthy and still
cranking along. (I’m writing an article on the emergence
of the giga-age of computing.) The number of transistors on a processor
chip has increased at the average rate of
sqrt(2) per year.

The fact that you think that Moore’s law has something to do with cycle
times indicates to me that you do not understand
Moore’s law. That doesn’t surprise me, I had it wrong too and I have
written other papers on the subject. Moore’s law is
only about the number of transistors on chip, not about the speed of
those transistors.

Yes, I’m well aware of that. I was using it in the commonly-understood
sense, of increasing processor speeds and RAM
availability. The stuff that the article calls “the free lunch”. The
article covers exactly what you mentioned, in fact: that
transistor count has increased even though performance pretty much leveled
off after 2003.

Also, the implication that I think that Moore’s law allows you to write
inefficient code is just plain insulting. Moore’s law
changes computing so that what is efficient at one turn of the crank is no
longer efficient at the next turn of the crank.

That’s a very odd interpretation, and one which I’ve never heard anywhere
else. Possibly because it’s just plain not true
and never has been. The availability of new computing power does not take
something that used to work efficiently and
make it perform poorly now; if it does, that means someone at the CPU
manufacturer screwed up badly.

You and I have got to argue more often, this is fun.

Yes, it is an odd interpretation, but then I am known for having an odd
point of view. Take the example of fixed point arithmetic. It used to be the
most efficient way to do computations for 2D and 3D graphics. Integer
arithmetic used to be by far the fastest way to do any kind of numeric
computation. A couple of turns of Moore’s crank and floating point
arithmetic became very much faster than integer arithmetic, and fixed point
graphics pipelines went the way of the dinosaur. Another turn of the crank
and consumer CPUs got pipelined floating point vector operations, and what
was a few years ago a long sequence of integer and shift operations becomes
one vector instruction.

Those are a couple of examples of Moore’s law turning what was the most
efficient way to do something into the least efficient way.
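(For anyone who never wrote one: a 16.16 fixed-point multiply of the kind those pipelines were built on, sketched from memory; the names are illustrative:)

#include <stdint.h>

typedef int32_t fix16;            /* 16.16 fixed point */
#define FIX16_ONE (1 << 16)

/* Integer multiply plus a shift: this was the fast path before
   FPUs (and later SIMD) made floats the better choice. */
static inline fix16 fix16_mul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> 16);
}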

Unfortunately, that effect often works in reverse. Something that used to
be horribly slow in the past is now faster
because there’s more hardware to throw at it, and suddenly everyone thinks
they can get away with doing it that way.
Unfortunately, much like college professors

Don’t get me started on computer science college professors. I think we have
an equally low opinion of the current crop of computer science college
professors. Unfortunately I am a college instructor and I understand why
they do the stupid things they do. I don’t excuse them, but I do understand.

who all tend to assign an entire weekend’s worth of homework to their
students
when Friday comes around, there’s an unfortunate tendency for programmers
to overestimate the benefit they get from
their new hardware and also to arrogantly assume that they get it all to
themselves. When you’re running on a multitasking
system, (which is pretty much all of them these days), you’ll often see the
OS guys do this, and the driver guys, and the
people who write background processes/services, and the ones who write the
applications that the user cares about, and
before you know it, you’ve got Windows Vista.

Yep… and Windows XP before that and … and … Ever try to install Unix
on a 386?

I think you have the effect right, but I think you miss one important fact
that is the real culprit. Hardware has a life cycle measured in months to
years. Software has a life cycle measured in decades. The core assumptions
that go into a piece of software are like rails that force it down a fixed
path for a very long time. To change it you have to build a new set of
rails, i.e. rethink the whole thing. Hardware gets rethought constantly.
Hardware gets redesigned, sold, and forgotten, over very short periods of
time.

Hardware is dramatically less complex than software.

This is the primary reason why today’s computers, despite having thousands
of times more raw hardware available,
actually execute fundamental real-world tasks such as booting up and
launching programs much, much slower
than computers of twenty years ago did.

Let me restate that, OK? The transfer rate from disk to RAM has not
increased at anywhere near the rate of expansion of the size of OSes and
applications. Or to put it another way, if the speed limit is doubled but
the number of trucks has tripled, you don’t get the load through as fast as
you used to.

Yep. That has happened. But, also the boot loader does a lot more than it
used to.

Problem is, people are still writing that way, even though that kind of
thinking
hasn’t been valid for at least six years now. (Anyone want a nice, hot cup
of Java?)

Actually, Java is one of the few languages out there that has built-in
multiprocessing. Some Java applications see a real improvement in
performance going from a single processor to a dual processor.

But, I get your point.

So if you were using “Moore’s Law” in the technical sense instead of the
colloquial sense, that’s good.

Well… I was to the best of my ability. BTW, when did Moore’s law start?

If you’re one of the
people who gets it, I’m very glad you’re here. There’s not too many people
left that do these days. I’m just trying to do
what little I can to stem the tide.

That’s OK by me. I’ve been programming since the early '70s; I have some
accumulated knowledge.

Bob Pendleton

On Thu, Mar 26, 2009 at 3:46 PM, Mason Wheeler wrote:

From: Bob Pendleton <@Bob_Pendleton>
Subject: Re: [SDL] SDLNet_TCP_Send blocking



I think he’s saying that his skills with low-level networking (SDL_net, as
opposed to something that does everything for you) leave that much room for
hacking.

Jonny D

On Thu, Mar 26, 2009 at 10:08 PM, Donny Viszneki <donny.viszneki at gmail.com> wrote:

On Thu, Mar 26, 2009 at 7:23 PM, wrote:

Heyya, I use SDL_net. It works great for what it does. I do wish it
were more robust. It took me some real time to figure out how to
engineer its use so that hackers won’t have my ass on the release date
of my SDL product (I will yell and scream about it until the SWAT team
shows up, so you’ll know about my SDL product). It should take a hacker
over a day to hack me now! lol.

Are you saying there’s a critical security vulnerability in SDL_net?


http://codebad.com/



2009/3/26 Bob Pendleton

The answer is that some platforms do not support non-blocking writes to a
socket. SDL has always gone with the lowest common denominator.

Really, this isn’t completely true but rather a general rule of thumb, so
why can’t this problem be changed in order to give the poor OP renewed faith
in SDL_net? Can an option be added to enable non-blocking writes where it
is supported?
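(Something along these lines, purely hypothetical; SDLNet_TCP_SetNonBlocking is a made-up name, not an actual SDL_net function:)

#include "SDL_net.h"

/* Hypothetical opt-in API sketch. Returns 0 on success, -1 where
   the platform can't support non-blocking writes. */
int SDLNet_TCP_SetNonBlocking(TCPsocket sock, int enabled);

void example(TCPsocket sock) {
    if (SDLNet_TCP_SetNonBlocking(sock, 1) < 0) {
        /* unsupported platform: fall back to a sender thread */
    }
}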

Jonny D

2009/3/27 Jonathan Dearborn :

2009/3/26 Bob Pendleton

The answer is that some platforms do not support non-blocking writes to a
socket. SDL has always gone with the lowest common denominator.

Really, this isn’t completely true but rather a general rule of thumb, so
why can’t this problem be changed in order to give the poor OP renewed faith
in SDL_net? Can an option be added to enable non-blocking writes where it
is supported?

That can be done, but in the meantime, until distro maintainers and
end-users all upgrade their SDL_net libs, the OP will still have to do what
I suggested originally, and retain his own modified copy of SDL_net.

--
http://codebad.com/

Is reliance on push-back as a signal to back off really a viable
alternative to using UDP and just letting the network drop packets
when congestion impedes delivery? I’d love to see some literature on
push-back!

With UDP, it’s a bit more limited, because it doesn’t reflect what’s
happening on the network: when send() blocks, it just means that
you’re overwhelming the kernel (which probably means you’re not going
to do too well on the network, though).

I think you meant to say “if send() blocks, it just means you’re crazy
and sending waaaaaay too much data” :P

With TCP, there’s ACKing and the window, so it’s a very significant
signal (it’ll let you know if anything is overwhelmed, be it the
kernel, the network, or even the program at the other end!). It’s not
an alternative to UDP, but that’s for other reasons (for example, one
of the latency-improving strategies with UDP is that if you miss a
packet you can do without, you can just keep going, while TCP will
"fix" out-of-order delivery by not giving you a newer packet until the
missed packet has been received correctly). If you want to read more,
what you’re looking for is “flow control”:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Flow_control

I read this:

“When a receiver advertises a window size of 0, the sender stops
sending data and starts the persist timer. The persist timer is used
to protect TCP from a deadlock situation that could arise if the
window size update from the receiver is lost and the sender has no
more data to send while the receiver is waiting for the new window
size update”

I thought it was going to end “while the sender is waiting for the new
window size update.”

Another way TCP is probably not appropriate for many games: It would
seem that most of the care TCP takes to keep the connection healthy is
completely unnecessary for games. Either you have the throughput (both
network and processing power) to play the game, or you have less! Most
game developers I’m familiar with wouldn’t trouble themselves with
network overhead dedicated to players that are too slow to keep up! In
q3a for instance, if the server or client pegs the CPU, you just get
rubberbanding or warping.

It’s also why I’m annoyed by the current design of the SDL event
system, where it tries to suck as [...]

We cannot let that discussion escape from the confines of its
single, epic thread. If that happens, all hope for humanity is lost!

(speaking of SDL event system, I’m still working on my patch, it’s
just that I’ve been loaded with work this quarter, so it’s been put on
hold for a bit)

:)

On Thu, Mar 26, 2009 at 11:13 PM, Pierre Phaneuf wrote:

On Thu, Mar 26, 2009 at 6:19 PM, Donny Viszneki <@Donny_Viszneki> wrote:


http://codebad.com/

2009/3/27 Jonathan Dearborn

2009/3/26 Bob Pendleton <@Bob_Pendleton>

The answer is that some platforms do not support non-blocking writes to a
socket. SDL has always gone with the lowest common denominator.

Really, this isn’t completely true but rather a general rule of thumb, so
why can’t this problem be changed in order to give the poor OP renewed faith
in SDL_net? Can an option be added to enable non-blocking writes where it
is supported?

I have talked to Sam about that at length; he has always been willing to add
a patch to fix this problem so long as the patch worked on Windows, Mac, and
Linux. (OK, that is what I remember; if I am wrong, please forgive me.) I have
offered to fix it on Linux, but I don’t have a Windows development system
and I have never done much with Mac.

The last time this came up I asked for people to volunteer to create patches
for the other two platforms. Asking people to put up caused them to shut up.
Everyone was ready and willing to complain, but not one, not a single other
person on the list, was willing to contribute the code to fix the problem on
Windows or the Mac.

But, they did shut up and stop complaining.

So, put up or shut up folks. If you want SDLNet to have another option then
contribute the code.

Bob Pendleton

Jonny D



I have talked to Sam about that at length; he has always been willing to add
a patch to fix this problem so long as the patch worked on Windows, Mac, and
Linux. (OK, that is what I remember; if I am wrong, please forgive me.) I have
offered to fix it on Linux, but I don’t have a Windows development system
and I have never done much with Mac.

Mac OS X uses BSD sockets like Linux, so a fix for the latter is a fix for
the former. If you want to fix the BSD sockets, I can do the WSA
side–it’s a couple (three?) socket calls.
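(Roughly, and only as a sketch, the calls in question would be something like this; fd is the descriptor SDL_net keeps inside its socket struct:)

#ifdef WIN32
#include <winsock2.h>
void set_nonblocking(SOCKET fd) {
    u_long nonblocking = 1;
    ioctlsocket(fd, FIONBIO, &nonblocking);   /* the WSA side */
}
#else
#include <fcntl.h>
void set_nonblocking(int fd) {
    /* BSD sockets: the same call covers Linux and Mac OS X */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}
#endif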

To be honest, I wasn’t even aware SDL_net was still maintained.

Gregory

On Fri, 27 Mar 2009, Bob Pendleton wrote:

I have talked to Sam about that at length; he has always been willing to add
a patch to fix this problem so long as the patch worked on Windows, Mac, and
Linux. (OK, that is what I remember; if I am wrong, please forgive me.) I have
offered to fix it on Linux, but I don’t have a Windows development system
and I have never done much with Mac.

Mac OS X uses BSD sockets like Linux, so a fix for the latter is a fix for
the former. If you want to fix the BSD sockets, I can do the WSA side–it’s
a couple (three?) socket calls.

To be honest, I wasn’t even aware SDL_net was still maintained.

Have you taken a look at the code?

Bob Pendleton

On Fri, Mar 27, 2009 at 1:33 PM, Gregory Smith wrote:

On Fri, 27 Mar 2009, Bob Pendleton wrote:

Gregory



Have you taken a look at the code?

A long time ago, trying (unsuccessfully) to get the OpenTransport stuff to
work. As I recall, the non-OT parts were a few hundred lines at the end of
the file.

Gregory

On Fri, 27 Mar 2009, Bob Pendleton wrote:

Bob Pendleton writes on Thu, 26 Mar 2009 14:48:15 EST
>
> > > On Thu, Mar 26, 2009 at 2:16 PM, Simon Williams
> > >> 3. People using SDLNet- how do you get around this?
> > >
> > > Probably sockets.
> >
> > err… threads… probably threads
> >
> >
> Asynchronous IO from Boost.org. If you do not know about Boost.org, go
> there and be happy.
>
> Bob Pendleton

I have my doubts about boost – it seems painful to debug.
I worked with a package of 10k LOC of C++ which didn’t work and transformed it
into 1k LOC of C (which worked on MinGW, Mac and Linux).

I looked at SDL_net and concluded it wasn’t that useful…

This is what I ended up doing:

#ifdef WIN32
#include <winsock2.h>
#define write_socket(fd, buffer, size) send(fd, buffer, size, 0)
#define read_socket(fd, buffer, size) recv(fd, buffer, size, 0)
#else
#define write_socket write
#define read_socket read
#define closesocket close
#define INVALID_SOCKET -1
#endif
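(A hypothetical helper on top of those macros, to show the idea; it also deals with partial writes, which you get with sockets either way. The POSIX side additionally needs <unistd.h> for write().)

/* Loops until the whole buffer is sent, using one code path for
   both Winsock and BSD sockets via the macros above. */
int send_all(int fd, const char *buf, int len) {
    int sent = 0;
    while (sent < len) {
        int n = write_socket(fd, buf + sent, len - sent);
        if (n <= 0)
            return -1; /* error or connection closed */
        sent += n;
    }
    return sent;
}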

marty