On 10/30/18 5:36 PM, Per Jessen wrote:
> This might be file system dependent, I'm not sure. I've been doing
> some tidying up and got stuck on a few directories with millions of
> files in them, 3+ million per directory. Doing a 'find' takes a very
> long time and also essentially chokes the system. I ended up writing
> a small utility using getdents() instead; it is much faster and the
> system remains operational.
>
> I was just wondering if e.g. 'find' or 'ls' had some options that
> would limit the scope? (not mtime etc.)
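For anyone curious, a minimal sketch of such a getdents64()-based lister might look like the following. It is Linux-specific, calls the syscall directly (older glibc has no getdents64() wrapper), and the 1 MiB buffer size and minimal error handling are illustrative choices, not a claim about the poster's actual utility:

#define _GNU_SOURCE
#include <dirent.h>        /* struct dirent64 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define BUF_SIZE (1024 * 1024)

int main(int argc, char *argv[])
{
    int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
    if (fd < 0) { perror("open"); return 1; }

    static char buf[BUF_SIZE];
    for (;;) {
        /* Read a large batch of directory entries in one syscall;
         * no per-entry stat() is ever issued. */
        long nread = syscall(SYS_getdents64, fd, buf, BUF_SIZE);
        if (nread < 0) { perror("getdents64"); return 1; }
        if (nread == 0)
            break;                      /* end of directory */

        for (long pos = 0; pos < nread; ) {
            struct dirent64 *d = (struct dirent64 *)(buf + pos);
            puts(d->d_name);
            pos += d->d_reclen;         /* entries are variable-length */
        }
    }
    close(fd);
    return 0;
}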
GNU find walks directories with gnulib's FTS implementation rather than raw readdir() and friends; ls reads them with readdir() directly. You didn't show us the command line you used, but my guess is that you used options which require an additional stat() per entry. "ls -l" needs that extra stat(), and so does colored output etc.

Furthermore, ls sorts its output by default. Better to use the -U option to stick to directory order; the -f option implies -U.

It is the same for find: did you run plain 'find'/'find -print', or 'find -ls'? The former two should be quite fast, while the latter also requires a stat() per entry.

Finally, it depends on the file system type: NFS, for example, is known to become really nasty with that many files in a single directory.

Have a nice day,
Berny
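To make that per-entry stat() cost concrete, here is a rough readdir()-based sketch; the --stat flag and output format are hypothetical, but the extra fstatat() it triggers is essentially the difference between 'ls -U' and 'ls -lU', or between 'find -print' and 'find -ls':

#include <dirent.h>
#include <fcntl.h>         /* AT_SYMLINK_NOFOLLOW */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    const char *dir = argc > 1 ? argv[1] : ".";
    int want_stat = argc > 2 && strcmp(argv[2], "--stat") == 0;

    DIR *dp = opendir(dir);
    if (!dp) { perror("opendir"); return 1; }

    struct dirent *d;
    while ((d = readdir(dp)) != NULL) {
        if (want_stat) {
            /* One extra syscall per entry: this is what "ls -l",
             * coloring, and "find -ls" add on top of reading the
             * directory itself. */
            struct stat st;
            if (fstatat(dirfd(dp), d->d_name, &st,
                        AT_SYMLINK_NOFOLLOW) == 0)
                printf("%8lld %s\n", (long long)st.st_size, d->d_name);
        } else {
            /* Roughly what "ls -U" / "find -print" do: names only. */
            puts(d->d_name);
        }
    }
    closedir(dp);
    return 0;
}

With millions of entries, the --stat variant issues millions of additional syscalls (and, on NFS, round-trips), which is where the slowdown comes from.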