On Wednesday 2005-08-31 at 11:46 -0400, Jerry Feldman wrote:
> Remember that Alfred, the OP, already said that tar failed:
> |> I tried to put it all in a tarball - after 5 days I stopped it.
> |> The tarball was 2 GB at that time.
He is talking of more than a million files (10^6) in a single directory, each file about 20 KB - some 20 GB in all, so at 2 GB in five days tar would have needed weeks at that rate.
I might possibly suggest trying to use rsync (but I'm afraid that it might also fall over).
mc (Midnight Commander) doesn't fail: it runs like a turtle, but it doesn't fail; so rsync "might" work.
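If it does cope, a minimal sketch would be something like this (assuming the same /usr/local/foo and /mnt mount points as in the find example below):

    rsync -a /usr/local/foo/ /mnt/

The trailing slash on the source copies the contents of the directory rather than the directory itself. Mind that rsync builds the whole file list in memory before transferring anything, and with 10^6 files that is exactly where it might fall over.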
Another (although horrible) suggestion might be to use find.
Assume that the source directory is mounted as /usr/local/foo and the target directory is mounted on /mnt:

    cd /usr/local/foo
    find . -type f -exec cp -p {} /mnt \;

This is assuming that /usr/local/foo does not have subdirectories; you can set appropriate flags in find to prevent recursion (with GNU find, -maxdepth 1). Another possibility is to partition the output into a number of subdirectories. Assume that you have directories /mnt/a, /mnt/b, ...:

    find . -name '[aA]*' -exec cp -p {} /mnt/a \;
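Taken a bit further, that partitioning could be scripted in one go - just a sketch, and it assumes GNU find (for -maxdepth and -iname) and bash 3 (for the {a..z} expansion):

    cd /usr/local/foo
    for c in {a..z}; do
        mkdir -p "/mnt/$c"
        # -maxdepth 1 prevents recursion; -iname matches upper and lower case
        find . -maxdepth 1 -type f -iname "${c}*" -exec cp -p {} "/mnt/$c" \;
    done

Files whose names don't start with a letter would still need a pass of their own.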
The problem is that the moment the shell tries to expand the directory listing to give it to the command line, it fails, because the command line becomes huge and exceeds the limit the kernel puts on the size of the argument list passed to exec (ARG_MAX). It is possible to work on a single file if the name is known beforehand - delete, move, whatever. And perhaps on a bunch of them if the bunch is sizeable, I don't know. I'm tempted to try and create a big directory, but... :-?
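The limit is easy to see (a sketch; the exact figure varies from system to system):

    $ cd /usr/local/foo
    $ cp -p * /mnt
    bash: /bin/cp: Argument list too long
    $ getconf ARG_MAX
    131072

The usual workaround is to pass the names through a pipe instead of the command line, for instance:

    find . -maxdepth 1 -type f -print0 | xargs -0 cp -p --target-directory=/mnt

xargs chops the list into chunks that fit under the limit, and -print0/-0 keeps filenames with odd characters safe (--target-directory is a GNU cp option).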
In any case, as I mentioned, my solutions are not very good.
> Note that a 64-bit system might handle this better.
I don't think so: it is a limit on the size of the argument list the kernel accepts from the shell, not an architecture limit.

--
Cheers,
       Carlos Robinson