
On Wed, 26 Oct 2011, Russell Coker wrote:
On Wed, 26 Oct 2011, Erik Christiansen <dvalin@internode.on.net> wrote:
On 26.10.11 22:32, Russell Coker wrote:
To get the space back I ended up commenting out the line in /etc/exports, running "exportfs -r", then unmounting and remounting the filesystem. I didn't check whether the space was freed after just running "exportfs -r"; it probably was.
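Concretely, that was something like the following on the server (a sketch; /srv/export stands in for the actual export path):

    # after commenting the relevant line out of /etc/exports:
    exportfs -r          # re-read /etc/exports, dropping the removed export
    umount /srv/export   # releases the unlinked-but-still-held inodes
    mount /srv/export
    df -h /srv/export    # the space now shows as free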
Wouldn't it have been enough to unmount & remount? That has long been enough to clear stale NFS file handles for me. (And so I'd expect df to then report correctly for you.)
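That is, something like this, with /mnt/data standing in for the mount point in question:

    umount /mnt/data && mount /mnt/data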
umount/mount on client?
No. Unmounting on the client didn't work, and it couldn't work unless the server kept track of which clients had previously used the files in question. It should be possible for the server to do such tracking once it has been up long enough for client caches to have expired (which would surely take less than 2 weeks). But I guess they don't do such things.
Does NFSv4 fix such things?
I have never come across this issue, and we use NFS *a lot*. Create files on one computer, delete them on a different computer. Never any issue. Are you absolutely sure no client still has the file open, causing the files to be sillyrename()d to .nfs???????? (As a sysadmin, I never use bare "ls"; I'm always interested in dotfiles. Which is why HP-UX's treatment of "-A" is a pox upon something poxy.) The files weren't hard linked somewhere else and you didn't notice? (A quick check for both possibilities is sketched below.)

Mind you, Red Hat's second-last kernel (2.6.18-274.el5) has interesting and impossible NFS behaviour. I believe the default caching time is 2 seconds for metadata and 60 seconds for data (all controlled by the client). When our webservers' NFS servers are updated by rsync, a new temporary filename is created and then renamed to overwrite the original file. New inode, new metadata. But just occasionally (a few hours after an upgrade, and then 2 months later), some of the webservers keep serving out old copies of a file, and some of them serve out new copies. At best, an old file might be expected to persist for 2 seconds until the metadata is refreshed. If rsync had somehow screwed up (I've checked its flags, and no, it doesn't overwrite files in place) and rewritten the file in place, the old data might be expected to hang around for a minute. But 10 minutes later, having to manually remove and recopy a file on the NFS server?

I've tried doing loops of creation/deletion/rename etc. but never reproduced the result, so tracking these down, or even declaring it "solved" at some future kernel upgrade, is going to be extremely difficult. I haven't even bothered reporting a bug, because where do I start? We've got 1.5 million files on those webservers, and have had 2 detected instances so far of old files being served out.
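For the record, the checks I mean are roughly these (a sketch; /srv/export stands in for the real exported path):

    # silly-renamed files left behind by clients that still hold them open
    find /srv/export -name '.nfs*' -type f -ls

    # files with more than one hard link
    find /srv/export -type f -links +1 -ls

    # on a client, the attribute-cache mount options actually in effect
    # (acregmin/acregmax, or actimeo if set explicitly)
    grep nfs /proc/mounts

--
Tim Connors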