On 30/10/2018 19.30, Per Jessen wrote:
Anton Aylward wrote:
On 30/10/18 12:36 PM, Per Jessen wrote:
I was just wondering if e.g. 'find' or 'ls' had some options that would limit the scope ? (not mtime etc).
Well, I'd start with the "-f" (do not sort) option. Sorting means slurping *everything* into memory first, which involves a lot of virtual-memory work and probably paging to get there.
It's not memory that is the problem - the box has 64 GB. I tried 'ls -f'; it made no difference, although I did not let it finish - the last 'find' ran for 14 hours before I had to stop it.
What you really want is "read one/print one" or "read one/process one"
Yep, and that is more or less what the code does - do one getdents(), process it, rinse, repeat. With that I can list 3+ million files in minutes. It just seems to me 'ls' and 'find' ought to be able to do the same.
Have you tried a filebrowser? I'm just curious. Maybe 'mc'. Another thing to try is magnetic vs SSD disk, to see if it is I/O-bound. Huh, no, you said a different library call is faster.
--
Cheers / Saludos,
Carlos E. R. (from 42.3 x86_64 "Malachite" at Telcontar)