On Tue, 30 Oct 2018 17:36:09 +0100
Per Jessen
This might be file system dependent; I'm not sure. I've been doing some tidying up and got stuck on a few directories with millions of files in them, 3+ million per directory. Doing a 'find' takes a very long time and essentially chokes the system. I ended up writing a small utility using getdents() instead; it is much faster and the system remains operational.
I was just wondering if e.g. 'find' or 'ls' had some options that would limit the scope? (not mtime etc.)
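For reference, here is a minimal sketch of the getdents() approach (just the general idea, not my actual utility): open the directory, pull batches of raw dirent records into a big buffer, and walk them, with no stat() and no sorting.

/*
 * Minimal sketch: list one directory via the raw getdents64()
 * syscall. No stat(), no sorting, one big buffer to keep the
 * number of syscalls low.
 */
#define _GNU_SOURCE
#include <dirent.h>      /* struct dirent64 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#define BUF_SIZE (1024 * 1024)

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_DIRECTORY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char *buf = malloc(BUF_SIZE);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    for (;;) {
        long nread = syscall(SYS_getdents64, fd, buf, BUF_SIZE);
        if (nread < 0) {
            perror("getdents64");
            return 1;
        }
        if (nread == 0)         /* end of directory */
            break;

        /* Walk the variable-length records in this batch. */
        for (long pos = 0; pos < nread;) {
            struct dirent64 *d = (struct dirent64 *)(buf + pos);
            puts(d->d_name);
            pos += d->d_reclen;
        }
    }

    free(buf);
    close(fd);
    return 0;
}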
I don't know the answer to your question, but I'm interested in it, since I used to have a lot of directories like that and just learned not to do an ls on them :( I never tried find because the names were systemised and I had an index of them.

Oh, and yes, it is filesystem dependent, but still bad news in all the ones I tried.

I believe ls uses readdir() rather than getdents(). Did you try that, and/or does your faster program work with it instead? I'd be interested to try to track down what it is that makes ls unusably slow in these circumstances. Maybe it's calling stat() or building in-memory structures for sorting the names or some such that causes the slowdown.

If you're willing to post the source of your utility or email it, I'll have a play.
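In the meantime, the comparison I have in mind is something like the sketch below: a bare opendir()/readdir() loop with no stat() calls and no sorting (roughly what 'ls -f' does). As far as I know, glibc's readdir() is itself built on getdents() underneath, so if this is also fast on your directories, the culprit in plain ls is probably the per-file stat() or the sorting rather than the raw directory reading.

/*
 * Sketch for comparison: plain opendir()/readdir() listing,
 * no stat(), no sorting, entries printed in directory order.
 */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    DIR *dir = opendir(argv[1]);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }

    struct dirent *d;
    while ((d = readdir(dir)) != NULL)
        puts(d->d_name);

    closedir(dir);
    return 0;
}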