
I have an application I wrote to take a Xen core dump and build a Windows crash dump from it. With the same input file, it runs to completion on one server, but on the other it reports errors from the read and write functions...
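For reference, the failing paths are just checked read()/write() loops along these lines (an illustrative sketch, not the actual source; read_exact is a made-up name):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read exactly len bytes from fd, or report why we couldn't (sketch only). */
ssize_t read_exact(int fd, void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            fprintf(stderr, "read failed: %s\n", strerror(errno));
            return -1;
        }
        if (n == 0) {
            fprintf(stderr, "unexpected EOF at %zu of %zu bytes\n", done, len);
            return -1;
        }
        done += (size_t)n;
    }
    return (ssize_t)done;
}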
I have md5sum'd the exe and the input file, and both are identical on each server.
Actually, it turns out they aren't equal - the original file is sparse, but it loses its sparseness when copied to the other server.
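One quick way to check whether a copy kept its holes is to compare st_size against the allocation stat() reports (a minimal sketch; it assumes Linux, where st_blocks counts 512-byte units):

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    /* On Linux, st_blocks is in 512-byte units regardless of fs block size. */
    long long allocated = (long long)st.st_blocks * 512;
    printf("%s: size=%lld allocated=%lld%s\n", argv[1],
           (long long)st.st_size, allocated,
           allocated < (long long)st.st_size ? " (sparse)" : "");
    return 0;
}

Running it against the original and the copy on each server should show exactly where the allocation jumps.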
This is turning out even stranger... the original Xen core dump file is w2k3test.dump:

cp --sparse=auto w2k3test.dump w2k3test2.dump
cp w2k3test.dump w2k3test3.dump
ls -lsk *.dump
 528240 -rw------- 1 root root 528234 Feb 7 22:36 w2k3test.dump
1048576 -rw------- 1 root root 528234 Feb 8 11:12 w2k3test2.dump
 528240 -rw------- 1 root root 528234 Feb 8 11:13 w2k3test3.dump

How can the file consume 2x as many blocks on disk as its actual file size? Or is XFS mis-reporting things?

James
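P.S. To take the dump file itself out of the picture, the same cp/ls experiment can be repeated against a synthetic sparse file, e.g. one created by seeking past EOF (a sketch; sparse.test is just a throwaway name):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Create a ~1 MB file that is almost entirely a hole. */
int main(void)
{
    int fd = open("sparse.test", O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }
    /* Seeking past the last written byte leaves an unallocated hole. */
    if (lseek(fd, 1024 * 1024, SEEK_SET) == (off_t)-1) { perror("lseek"); return 1; }
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }
    close(fd);
    return 0;
}

ls -lsk sparse.test should then show allocation well below the ~1 MB file size if the filesystem and the copy are both preserving holes.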