So one day I find myself with a directory containing more than a million files that need to be cleaned out. A simple rm * is not going to work because of the sheer number of files (argument list too long). The first thing I tried:
find <directory> -name "alarm.*" -mtime +360 -exec rm {} \;
This worked, but it takes too long because find and the rm it forks for every single file run sequentially, one file at a time, while also competing with xntpd for the CPU. In fact, I noticed that xntpd was just hogging all the CPU.
Then I tried removing the -mtime test, but it still takes too long. With some other filename patterns, find just quits, maybe because there are simply too many matches.
Instead of using -exec rm, someone suggested using the -delete argument to make it safer, faster, and more efficient:
find <directory> -type f -delete
Unfortunately, my find doesn't support this argument.
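If your find is newer than mine (I haven't verified this on the box in question), the POSIX + terminator for -exec is another option: it packs as many filenames as possible into each rm invocation instead of forking one rm per file, much like xargs does:
find <directory> -name "alarm.*" -type f -exec rm -f {} +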
Another suggestion is to loop over the files in bash:
bash# for i in *; do
> rm -f "$i"
> done
While this works, you're still forking one rm per file and deleting them one at a time, so very slooooow.
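A middle ground (which I didn't try, and which assumes your xargs understands -0) is to keep the shell glob but batch the deletions; printf is a bash builtin, so the argument-list limit doesn't apply to it:
printf '%s\0' * | xargs -0 rm -f
This still has to expand the whole glob in shell memory, though.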
Another trick is to create an empty directory, then rsync it with the directory to be nuked:
rsync -a --delete empty_dir/ full_dir/
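Spelled out, the whole trick looks something like this (the directory names are just placeholders):
mkdir /tmp/empty_dir
rsync -a --delete /tmp/empty_dir/ full_dir/
rmdir /tmp/empty_dir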
Again, unfortunately, my locked-down system doesn't even have rsync, and I'm too lazy to copy it in.
The other way of doing it is to remove the directory entirely, then recreate it. One downside is that the new directory might not have the same owner, permissions, or other attributes as the one you deleted, so some applications might not work anymore.
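Had I gone that route, I would at least have noted and restored the directory's metadata, roughly like this (the owner, group, and mode below are made-up examples):
ls -ld <directory>                  # note the owner, group and mode before nuking it
rm -rf <directory>
mkdir <directory>
chown appuser:appgroup <directory>  # appuser:appgroup is just an example
chmod 755 <directory>               # 755 likewise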
What ultimately worked for me is:
ls -f | xargs rm
By disabling sorting, ls doesn't have to load the entire directory listing into memory; it simply streams its output as it reads the directory. xargs then passes as many filenames as possible to each rm invocation.
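If you only want to nuke a particular pattern (like the alarm.* files above) instead of everything, a filtered variant should work too, assuming none of the filenames contain whitespace or other characters xargs would mangle:
ls -f | grep '^alarm\.' | xargs rm -f
(The unsorted ls also prints the . and .. entries; the plain ls -f | xargs rm above just feeds them to rm, which complains about them and moves on.)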