On 10/31/18 8:10 AM, Per Jessen wrote:
Bernhard Voelker wrote:
That is exactly what find does by using gnulib's FTS: it reads a certain number of entries, processes them, and then reads the next ones until all are done (or find terminates otherwise).
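This streaming behaviour is easy to observe from the shell (a sketch; /tmp/fts-demo is an assumed sandbox directory): find starts printing matches as it walks the tree rather than reading the whole directory first, so piping into head returns immediately even on a huge tree.

```shell
# Create a small sandbox (assumed path) and show that find streams
# its results: head exits after the first line, long before a full
# listing would be needed on a large tree.
mkdir -p /tmp/fts-demo
touch /tmp/fts-demo/one /tmp/fts-demo/two /tmp/fts-demo/three
find /tmp/fts-demo -type f | head -n 1
```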
Hi Berny
That sounds like what I am asking for - how do I make 'find' do that?
You need the FTS-based 'ftsfind' binary for that; starting with findutils 4.3 it is installed as the default 'find'.
I guess you added a test like -mtime that requires an additional stat(), or an action that does (like -ls). You didn't mention which file system type you are using, did you?
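For context, a test like -mtime forces find to stat(2) every entry, while a pure name listing can often be satisfied from the directory entries alone. A rough sketch of the difference (paths are assumed placeholders, not from the thread):

```shell
# Sandbox directory (assumed path).
mkdir -p /tmp/stat-demo && touch /tmp/stat-demo/f1 /tmp/stat-demo/f2

# Name-only listing: find can usually answer this from readdir() data.
find /tmp/stat-demo -type f -printf '%f\n'

# Adding -mtime (or printing timestamps, or an action like -ls)
# needs a stat() per entry, which is what makes a pass over a
# multi-million-entry directory so much heavier.
find /tmp/stat-demo -type f -mtime -1 -printf '%f %TY-%Tm-%Td\n'
```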
File system is JFS - it's likely I used -mtime on the attempt that hung up the entire system, yes.
I think JFS should be fine - I just wanted to rule out networking or ancient file systems etc. Well, you couldn't have done much about it anyway.
Using my own bit of code, I went through a directory with 10715698 files in 629 minutes, including a stat() call per file. The time is really not important; the key thing is that the system kept running and remained responsive.
Re. responsiveness: I'd guess that depends on the implementation of the file system in the kernel. Of course, there is 'ionice', which can tell the system to avoid being overly eager.
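As a concrete sketch of the ionice suggestion (the sandbox path stands in for the real directory; it is an assumption, not from the thread): ionice from util-linux can put the whole traversal into the "idle" I/O scheduling class, so the scan only gets disk time when nothing else is asking for it.

```shell
# Sandbox stands in for the real multi-million-entry directory (assumed).
mkdir -p /tmp/ionice-demo && touch /tmp/ionice-demo/a /tmp/ionice-demo/b

# -c 3 selects the "idle" I/O scheduling class: the scan yields disk
# bandwidth to any other process; nice -n 19 does the same for CPU.
ionice -c 3 nice -n 19 find /tmp/ionice-demo -type f -print
```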
After all, this seems to be a very special action on that directory. What will you do with that list? I mean you'll most probably need yet another round to move or delete the files ...
Yes, most of them will be deleted - being careful and doing them one by one.
What are the criteria? Can you decide by filename? Then you could generate a list of commands like shown in the other email with 'find ... -printf'.

Last-minute idea: another option would be to move out the "good" files you want to keep, and then delete the whole directory. ;-)

Have a nice day,
Berny

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org