Bernhard Voelker wrote:
On 10/30/18 5:36 PM, Per Jessen wrote:
This might be file system dependent, I'm not sure. I've been doing some tidying up and got stuck on a few directories with millions of files in them. 3+ million per directory. Doing a 'find' takes a very long time and also essentially chokes the system. I ended up writing a small utility using getdents() instead, much faster and the system remains operational.
I was just wondering if e.g. 'find' or 'ls' had some options that would limit the scope? (not mtime etc.)
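For illustration, a getdents()-based lister of the kind described above might look like the sketch below. This is an assumption of the approach, not the actual utility: it calls the raw Linux getdents64 syscall with a large buffer, prints names in directory order, and never stat()s anything. The struct layout matches the kernel's linux_dirent64.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Layout of the records getdents64 writes into the buffer. */
struct linux_dirent64 {
    unsigned long long d_ino;
    long long          d_off;
    unsigned short     d_reclen;
    unsigned char      d_type;
    char               d_name[];
};

/* Print every entry in 'path' (except . and ..) in directory order,
   with no per-entry stat(). Returns the entry count, or -1 on error. */
long list_dir_raw(const char *path)
{
    int fd = open(path, O_RDONLY | O_DIRECTORY);
    if (fd < 0)
        return -1;

    char buf[1 << 16];  /* large buffer: few syscalls even for huge dirs */
    long count = 0;
    for (;;) {
        long n = syscall(SYS_getdents64, fd, buf, sizeof buf);
        if (n <= 0) {
            close(fd);
            return n < 0 ? -1 : count;
        }
        for (long off = 0; off < n; ) {
            struct linux_dirent64 *d =
                (struct linux_dirent64 *)(buf + off);
            if (strcmp(d->d_name, ".") != 0 &&
                strcmp(d->d_name, "..") != 0) {
                puts(d->d_name);
                count++;
            }
            off += d->d_reclen;
        }
    }
}
```

The buffer size is the main tuning knob: with 64 KiB per syscall, even a directory with millions of entries needs only a few thousand getdents64 calls.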
Both find and ls use gnulib's FTS implementation instead of raw readdir and friends.
Is that relatively new? This is an elderly system, not up-to-date, running openSUSE 11. Another reason for cleaning it up.
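For reference, a traversal through the BSD-style fts(3) interface (the same API gnulib's FTS implements) might look like this sketch. FTS_NOSTAT lets fts skip the per-entry stat() where the filesystem permits, and passing NULL as the comparison function leaves entries in directory order, unsorted — the same effect 'ls -U' has.

```c
#include <fts.h>
#include <stdio.h>

/* Walk the tree under 'root', printing each path once (pre-order).
   Returns the number of entries visited, or -1 on error. */
long walk(const char *root)
{
    char *paths[] = { (char *)root, NULL };
    /* NULL comparator: no sorting; FTS_NOSTAT: avoid stat() per entry */
    FTS *f = fts_open(paths, FTS_PHYSICAL | FTS_NOSTAT, NULL);
    if (f == NULL)
        return -1;

    long count = 0;
    FTSENT *e;
    while ((e = fts_read(f)) != NULL) {
        if (e->fts_info == FTS_DP)  /* skip post-order directory visits */
            continue;
        puts(e->fts_path);
        count++;
    }
    fts_close(f);
    return count;
}
```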
You didn't show us the command line you used, but I would guess you used some options that require an additional stat(). E.g. "ls -l" needs to do an extra stat() per entry; likewise for colored output etc. Furthermore, ls sorts its output by default. Better to use the -U option to stick to "directory order". The -f option includes -U.
Yes, Anton also suggested using '-f' yesterday, but my initial test did not show any improvement. (within 5-10min).
For find, it is the same: did you run only 'find'/'find -print', or 'find -ls'? The former two should be quite fast, while the latter also requires a stat().
I almost certainly did 'find dir -mtime +365' - this would also require a stat(), but my code shows the stat() is not the problem.

-- 
Per Jessen, Zürich (4.6°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
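The per-entry work that '-mtime +365' implies is roughly the check below — a sketch, not find's actual code (find additionally rounds ages to whole 24-hour periods): one lstat() per name, then a comparison against the current time.

```c
#include <stdbool.h>
#include <sys/stat.h>
#include <time.h>

/* Approximation of find's "-mtime +N" predicate: true if 'path' was
   last modified more than 'days' * 24h ago. One lstat() per call. */
bool older_than_days(const char *path, int days)
{
    struct stat st;
    if (lstat(path, &st) != 0)
        return false;               /* unreadable entries don't match */
    return (time(NULL) - st.st_mtime) > (time_t)days * 24 * 60 * 60;
}
```

With 3+ million entries that is 3+ million extra lstat() calls on top of the directory reads — measurable, though as noted above the stat() itself need not be the bottleneck.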