Question: If I were to build a NAS box using Linux, what would be the preferred protocol for file transfer? Would rsync be fast enough, or should I consider scripting ftp or something else? I know smb is too slow for large amounts of data. I want to do disk-to-disk backups of about 300GB. Thoughts?
On Thursday, 2 September 2004 at 15:38, Joe Polk wrote the question quoted above. In reply: NFS is the most widely used protocol on NAS devices, and the most useful choice when the clients are UNIX or Linux.
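If NFS is the way to go, the setup amounts to exporting a directory from the NAS and mounting it on each client. A minimal sketch, assuming a /export/backup directory on a host called nas and a 192.168.1.0/24 LAN (all placeholders, not details from the thread):

    # /etc/exports entry on the NAS:
    /export/backup  192.168.1.0/24(rw,sync,no_subtree_check)

    # Re-read the exports table after editing:
    exportfs -ra

    # On the client, mount the share and copy the backup data:
    mount -t nfs nas:/export/backup /mnt/backup
    cp -a /data /mnt/backup/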
Joe wrote regarding '[SLE] Homebrew NAS device' on Thu, Sep 02 at 08:37.
I use rsync for nightly snapshots of 6 machines comprising about 260GB of data - most of which is html pages and individual email messages (using Maildir). The whole process consists of making a hardlinked copy of the target directory, then updating the target dir from the server root using rsync over ssh. Between the perl script that does the linking and the rsync step, the process takes about 3-4 hours most nights. Most of that delay seems to come from rsync building an in-memory list of several million files. I know I could transfer that amount of data in a tarball quite a bit quicker.

So, the answer somewhat depends on the data. If the data consists of several large files, then rsync will be great - though you probably want to initiate from the client side and run an rsync server on the NAS. If you need to move a whole lot of small files, you may see less-than-stellar performance from rsync. If I didn't need rsync for other reasons, I'd probably just open a netcat session on each side of the link and dump tar into / out of the pipe (that's amusing read out of context).

As the other person said, NFS-mounting the NAS and copying the files might work well, too, but there again it depends on your situation. If you're just doing a one-shot thing, a full-blown network filesystem probably isn't the right choice. If you want to update several times through the day, then NFS may be what you need. I dislike NFS for performance needs, though, so take that as you may. :)

--Danny
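A rough sketch of the hardlink-plus-rsync snapshot process Danny describes, with placeholder paths (/backups/daily.*), a placeholder account (backupuser), and his perl linking script reduced to a plain cp -al:

    # Rotate: hard-link yesterday's snapshot so unchanged files share inodes.
    cp -al /backups/daily.0 /backups/daily.1

    # Pull changes from the server root over ssh; rsync rewrites only the
    # files that changed, which breaks their hard links in daily.0.
    rsync -a --delete -e ssh backupuser@server:/ /backups/daily.0/

rsync's --link-dest option can create the hard links itself while it transfers, which removes the need for the separate copy step.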
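For the client-initiated variant with an rsync server on the NAS, a minimal sketch; the module name backup, the export path, and the host name nas are assumptions, not details from the thread:

    # /etc/rsyncd.conf on the NAS:
    [backup]
        path = /export/backup
        read only = no

    # Start the daemon (or have xinetd launch it on demand):
    rsync --daemon

    # From the client, push the data to the NAS over the rsync protocol:
    rsync -a /data/ nas::backup/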
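And the one-shot tar-through-netcat pipe might look like this; the port number and paths are arbitrary, and netcat's listen flags differ between variants (traditional netcat shown):

    # On the NAS, listen and unpack the incoming tar stream:
    nc -l -p 9000 | tar xpf - -C /export/backup

    # On the machine being backed up, stream the tree into the pipe:
    tar cpf - /data | nc nas 9000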
Participants (3):
- Danny Sauer
- Francisco Javier Lopez
- Joe Polk