Bernhard Voelker wrote:
On 10/30/18 9:18 PM, Per Jessen wrote:
Per Jessen wrote:
Anton Aylward wrote:
What you really want is "read one/print one" or "read one/process one"
Yep, and that is more or less what the code does: do one getdents(), process it, rinse, repeat. With that I can list 3+ million files in minutes.
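[Editor's note: Per's actual code is not shown in the thread. As an illustrative sketch only, the same "read one batch / process one batch" idea can be shown with Python's os.scandir(), which wraps the same readdir()/getdents() machinery and returns entries as an iterator rather than building the whole list in memory:]

```python
import os
import tempfile

def stream_names(path):
    # os.scandir() yields entries lazily, backed by kernel-sized
    # getdents() batches, so memory use stays flat no matter how
    # many millions of files the directory holds.
    with os.scandir(path) as it:
        for entry in it:
            yield entry.name  # process each entry immediately

# demo on a throwaway directory
with tempfile.TemporaryDirectory() as d:
    for i in range(5):
        open(os.path.join(d, "file%d" % i), "w").close()
    print(sorted(stream_names(d)))  # -> ['file0', 'file1', 'file2', 'file3', 'file4']
```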
That is exactly what find does by using gnulib's FTS: it reads a certain number of entries, processes them, and then reads the next ones until all are done (or find terminates otherwise).
Hi Berny, that sounds like what I am asking for - how do I make 'find' do that?
Okay, I was perhaps a little enthusiastic here - "minutes", yes, but more like 120+ of them. I'm at 3291912 files and counting, currently around July 2015. Still, the system remains responsive, processing continues, the database is running - much, much better than with 'find'.
I guess that you added an option like -mtime that requires an additional stat() per file, or the action does (like -ls). You didn't write which file system type you are using, did you?
The file system is JFS - it's likely I used -mtime on the attempt that hung up the entire system, yes. Using my own bit of code, I went through a directory with 10715698 files in 629 minutes, including a stat() call per file. The time is really not important; the key thing is that the system kept running and remained responsive.
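[Editor's note: to make the cost concrete - an -mtime style test forces one extra stat() on top of the getdents() batches for every entry, roughly doubling the syscalls per file. A hedged sketch of such a filter, again using os.scandir() as an analogy rather than Per's actual code:]

```python
import os
import time
import tempfile

def older_than(path, days):
    # entry.stat() below is the extra per-file stat() that a test
    # like find's -mtime requires; a bare listing needs only the
    # getdents() batches.
    cutoff = time.time() - days * 86400
    with os.scandir(path) as it:
        for entry in it:
            if entry.stat(follow_symlinks=False).st_mtime < cutoff:
                yield entry.name

# demo: one deliberately aged file is selected, a fresh one is not
with tempfile.TemporaryDirectory() as d:
    old, new = os.path.join(d, "old"), os.path.join(d, "new")
    for p in (old, new):
        open(p, "w").close()
    ten_days_ago = time.time() - 10 * 86400
    os.utime(old, (ten_days_ago, ten_days_ago))
    print(list(older_than(d, 7)))  # -> ['old']
```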
After all, this seems to be a very special action on that directory. What will you do with that list? I mean, you'll most probably need yet another round to move or delete the files ...
Yes, most of them will be deleted - being careful and doing them one by one will maybe take a couple of days, but again, the system will keep running. I'm currently deleting 3037712 files; that has been running for about 16 hours now.
--
Per Jessen, Zürich (4.1°C)
http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
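[Editor's note: the careful one-by-one deletion Per describes could be sketched as below - an assumed illustration, not his code. Streaming the directory and unlinking entries as they arrive, with an optional pause between unlinks, trades total runtime for system responsiveness. This is the same pattern Python's own shutil.rmtree uses:]

```python
import os
import time
import tempfile

def careful_delete(path, pause=0.0):
    # Stream the directory and unlink regular files one at a time;
    # an optional sleep between unlinks leaves I/O bandwidth for
    # the rest of the system instead of monopolizing it.
    deleted = 0
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_file(follow_symlinks=False):
                os.unlink(entry.path)
                deleted += 1
                if pause:
                    time.sleep(pause)
    return deleted

# demo
with tempfile.TemporaryDirectory() as d:
    for i in range(4):
        open(os.path.join(d, "junk%d" % i), "w").close()
    print(careful_delete(d), os.listdir(d))  # -> 4 []
```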