Dave Howorth wrote:
On Tue, 30 Oct 2018 17:36:09 +0100 Per Jessen wrote:
This might be file system dependent, I'm not sure. I've been doing some tidying up and got stuck on a few directories with millions of files in them, 3+ million per directory. Doing a 'find' takes a very long time and also essentially chokes the system. I ended up writing a small utility using getdents() instead; it's much faster and the system remains operational.
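For reference, here is a minimal sketch of the getdents() approach, modelled on the man-page example Per points to further down the thread. The 1 MiB buffer, the file name dirlist.c and the optional per-entry fstatat() call are my own assumptions, not Per's actual utility:

/* dirlist.c - stream a huge directory with raw getdents64(), printing
 * entries as they arrive instead of collecting and sorting them. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

struct linux_dirent64 {
    ino64_t        d_ino;    /* 64-bit inode number */
    off64_t        d_off;    /* offset to next entry */
    unsigned short d_reclen; /* size of this record */
    unsigned char  d_type;   /* file type */
    char           d_name[]; /* null-terminated name */
};

#define BUF_SIZE (1024 * 1024)

int main(int argc, char *argv[])
{
    const char *dir = argc > 1 ? argv[1] : ".";
    int fd = open(dir, O_RDONLY | O_DIRECTORY);
    if (fd == -1) {
        perror("open");
        exit(EXIT_FAILURE);
    }

    char *buf = malloc(BUF_SIZE);
    if (buf == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }

    for (;;) {
        /* Fetch as many directory entries as fit in the buffer. */
        long nread = syscall(SYS_getdents64, fd, buf, BUF_SIZE);
        if (nread == -1) {
            perror("getdents64");
            exit(EXIT_FAILURE);
        }
        if (nread == 0)         /* end of directory */
            break;

        for (long bpos = 0; bpos < nread; ) {
            struct linux_dirent64 *d = (struct linux_dirent64 *)(buf + bpos);

            /* Optional per-entry stat(), as mentioned later in the thread;
             * drop this if you only need the names. */
            struct stat st;
            if (fstatat(fd, d->d_name, &st, AT_SYMLINK_NOFOLLOW) == 0)
                printf("%10lld %s\n", (long long)st.st_size, d->d_name);
            else
                printf("%10s %s\n", "?", d->d_name);

            bpos += d->d_reclen;
        }
    }

    free(buf);
    close(fd);
    exit(EXIT_SUCCESS);
}

Compile with e.g. 'gcc -O2 dirlist.c -o dirlist' and point it at the big directory; it prints entries in on-disk order without building a sorted list first.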
I was just wondering if e.g. 'find' or 'ls' had some options that would limit the scope? (not mtime etc.)
I don't know the answer to your question, but I'm interested in it, since I used to have a lot of directories like that and just learned not to do an ls on them :(
I was planning to run a 'find' this Friday after 1800, but then I got annoyed and started looking for a real solution.
I believe ls uses readdir() rather than getdents(). Did you try that and/or does your faster program work with it instead?
afaict, both find and ls use readdir() and that is the problem.
I'd be interested to try to track down what it is that makes ls unusably slow in these circumstances. Maybe it's calling stat() or building in-memory structures for sorting the names or somesuch that causes the slowdown.
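For comparison, the readdir()-based loop that ls and find build on is only a few lines, roughly what 'ls -U' boils down to when it neither sorts nor stats anything. Timing it against the getdents64() sketch above would help narrow down whether the raw directory read or the per-file stat()/sorting work is what chokes the system. Again, this is my own illustration, not code from the thread:

/* readdir-based listing: names in directory order, no stat(), no sorting. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    DIR *dp = opendir(argc > 1 ? argv[1] : ".");
    if (dp == NULL) {
        perror("opendir");
        exit(EXIT_FAILURE);
    }

    struct dirent *de;
    while ((de = readdir(dp)) != NULL)
        puts(de->d_name);

    closedir(dp);
    exit(EXIT_SUCCESS);
}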
My code calls stat() as it goes along; it's still perfectly usable.
If you're willing to post the source of your utility or email it, I'll have a play.
I borrowed it from here: http://man7.org/linux/man-pages/man2/getdents.2.html

--
Per Jessen, Zürich (5.6°C)
http://www.dns24.ch/ - your free DNS host, made in Switzerland.