
I have an application under Linux that talks to a Windows machine. Previously I'd assemble each packet into a big buffer and write it out in one call, e.g.:

    write(sock, buffer, length);

A recent change means I now write the header separately:

    write(sock, header, header_length);
    write(sock, buffer, length);

The two parts go out as separate TCP segments, but it's TCP, so the reassembled stream at the Windows end is byte-for-byte identical. Nevertheless, the Windows end now gives an error. I changed my code to:

    send(sock, header, header_length, MSG_MORE);
    write(sock, buffer, length);

and it works again (and is better for network efficiency anyway), but it was something I didn't expect, and I wasted a fair bit of time comparing packet traces.

Am I right in thinking this is a bug in the Windows application -- that it does a single read() instead of recv(..., MSG_WAITALL) (or whatever the Winsock equivalent is) and doesn't handle a read returning fewer bytes than expected? Or is there something I'm not getting?

If it matters, the Windows application is SQL Server Management Studio; my application translates between TDS and Postgres.

Thanks,
James