On 2014-04-27 21:12, Anton Aylward wrote:
> On 04/27/2014 02:54 PM, Carlos E. R. wrote:
>> a) The test for "regular file" is needed, because find, despite using
>> "find "$DONDE" -type f ...", finds some directories, and thus I get
>> some errors later:
>
> Possibly. Possibly not.
>
> The thing is that find produces a stream.
> So if you have a file with the path
>
>    $HOME/Long Directory name/even longer file name dot text
>
> then, once the shell or xargs splits that stream on whitespace, you
> will get the following "path names":
>
>    $HOME/Long
>    Directory
>    name/even
>    longer
>    file
>    name
>    dot
>    text
>
> What you need is to use "-print0" and "xargs -0".

No, that's not a problem. The output of find is saved to a text file,
which thus contains one entry per line. For your sample above, I would
get this line:

   /home/cer/Long Directory name/even longer file name dot text

Then I use:

   while IFS= read -r FILES ; do
      echo "$FILES"    # or whatever.
   done < text_file_containing_list

Whitespace is not a problem this way.

That's the reason I'm not using pipes: it sometimes makes life easier,
and I can examine the intermediate steps with less, or even edit the
list, or reprocess that step with a different command without running
the entire thing again.

The original code was this one-liner:

   find / -type f -print0 | xargs -0 file \
       | awk '/Bourne-Again/{print $1}' | tr -d ':' \
       | xargs -r grep -D skip SEARCHSTRING | less -S

The problem is that it is terribly slow: several hours. One reason is
that it explores absolutely all paths, like "/proc". That could be
avoided with the "-prune" syntax (see the sketch below). But another is
that "file" often examines the full file, and some files are huge. Even
if it were a million files of 1 megabyte each, that means reading one
terabyte. Which is why I'm trying the "head -c 1000" trick (also
sketched below).
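For instance, the find stage could skip the pseudo-filesystems (a
minimal sketch; pruning /proc and /sys is just an example, the actual
list of directories to skip is up to you):

   # -prune stops find from descending into the matched directories;
   # everything else falls through to the -type f test.
   find / \( -path /proc -o -path /sys \) -prune -o -type f -print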
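And the "head -c 1000" trick could look something like this (again only
a sketch, assuming the candidate list is in text_file_containing_list
as above; "file -" classifies whatever it reads from stdin):

   while IFS= read -r FILE ; do
      # "file" only sees the first 1000 bytes, not the whole file.
      if head -c 1000 "$FILE" | file - | grep -q 'Bourne-Again' ; then
         echo "$FILE"    # a shell script; grep it in a later step.
      fi
   done < text_file_containing_list

--
Cheers / Saludos,
        Carlos E. R.
        (from 13.1 x86_64 "Bottle" at Telcontar)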