
On Sun, Mar 09, 2014 at 11:28:09PM +0000, James Harper wrote:
I have an application under Linux that talks to a Windows machine. Previously I'd assemble each packet into a big buffer and write it out, e.g.:

    write(sock, buffer, length);

But a recent change means I now write out the header separately, e.g.:

    write(sock, header, header_length);
    write(sock, buffer, length);

The packets are sent separately, but it's TCP, and the reassembled stream at the Windows end is identical; even so, the Windows end now gives an error. I changed it to:

    send(sock, header, header_length, MSG_MORE);
    write(sock, buffer, length);
And it works again (and is better for network efficiency reasons), but it was something I didn't expect and I wasted a bit of time comparing packet traces...
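For concreteness, the working version boils down to something like the sketch below. The helper name send_framed is made up for illustration, sock is assumed to be a connected TCP socket, and MSG_MORE is a Linux-specific flag:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Illustrative helper: send header and body as two send() calls.
     * MSG_MORE tells the kernel more data is coming, so the header is
     * not pushed out as its own tiny segment.  For simplicity this
     * ignores partial sends; robust code would loop until everything
     * has been written. */
    static ssize_t send_framed(int sock,
                               const void *header, size_t header_len,
                               const void *body, size_t body_len)
    {
        if (send(sock, header, header_len, MSG_MORE) < 0)
            return -1;
        return send(sock, body, body_len, 0);  /* last send flushes */
    }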
Am I right in thinking that this is a bug in the Windows application, in that it does read() instead of recv(..., MSG_WAITALL) (or whatever the Winsock equivalent is) and doesn't handle the case where the read returns less than expected, or is there something I'm not getting?
If it matters, the Windows application is SQL Server Management Studio. My application translates between TDS and Postgres.
Thanks
James
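To illustrate the difference the question is getting at, here is a minimal sketch, assuming a blocking, connected stream socket sock and a known expected message size (the names and the wrapper function are made up for the example):

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Illustrative comparison of the two calls named in the question.
     * Returns the number of bytes actually received, or -1 on error. */
    static ssize_t recv_expected(int sock, void *buf, size_t expected)
    {
        /* A plain read() returns as soon as any data is available, so
         * it may deliver fewer than `expected` bytes:
         *
         *     ssize_t n = read(sock, buf, expected);   // may be short
         *
         * MSG_WAITALL asks the kernel to keep waiting until the full
         * amount has arrived (or the peer closes / an error occurs);
         * Winsock's recv() documents the same flag for stream sockets.
         * The return value still has to be checked, since the call can
         * return early on a signal or shutdown. */
        return recv(sock, buf, expected, MSG_WAITALL);
    }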
I would think it is almost certainly a bug in the Windows application. TCP is a streaming protocol rather than a messaging protocol like UDP. No application should rely on getting a "message" in a single "packet", though a great many do.

A reliable practice for messaging over a TCP stream is to prepend every message with a byte count. A simpler method is to use a terminating character or character sequence. Many TCP messaging applications do neither of these; they just expect to get everything they want in a single frame. MSG_MORE and TCP_CORK increase the chances that such an application will work, but they are really there to let the programmer optimise throughput.

Cheers ... Duncan.

--
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
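To make the byte-count approach concrete, a receiver built on it looks roughly like the sketch below. The 4-byte network-order length field and the helper names are assumptions chosen for the example, not anything mandated by the thread or by TDS:

    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <arpa/inet.h>

    /* Illustrative helper: keep reading until exactly `len` bytes have
     * arrived, because a TCP read may return less than was asked for. */
    static int read_full(int sock, void *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(sock, (char *)buf + got, len - got);
            if (n <= 0)
                return -1;              /* error or peer closed */
            got += (size_t)n;
        }
        return 0;
    }

    /* Receive one length-prefixed message: a 4-byte network-order byte
     * count, then that many bytes of payload. */
    static int read_message(int sock, void *payload, size_t max)
    {
        uint32_t len_net;
        if (read_full(sock, &len_net, sizeof len_net) < 0)
            return -1;
        uint32_t len = ntohl(len_net);
        if (len > max)
            return -1;                  /* message too big for buffer */
        return read_full(sock, payload, len);
    }

A delimiter-based protocol would instead read into a buffer until the terminating character or sequence is seen; either way the receiver never assumes one read corresponds to one message.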