We have been using JFS for nearly 10 years without any problems, though not on our file server (that one runs GFS). Personally I would prefer it over ext-X, XFS, and all the others: it's stable and fast. It would be interesting to see how JFS behaves when you repair a 30TB partition. By the way, does it make sense to keep this amount of data on a single server? Why not look at GFS or other cluster file systems?

Bye, Peer

Martin Jungowski wrote:
Use XFS. The limit there is 8 EiB and I would think it offers you all the features you are currently expecting from EXT4 (and more).
-Daniel
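(For reference, creating and inspecting such a filesystem really is a matter of seconds; a minimal sketch, with placeholder device and mount point names, not values from this thread:

  # make an XFS filesystem on a large RAID device and mount it
  mkfs.xfs /dev/md0
  mount /dev/md0 /srv/data
  # print the resulting geometry: block size, AG count, etc.
  xfs_info /srv/data

The interesting part, as the next mail shows, is not creating it but repairing it.)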
Daniel, thanks for your suggestion. I've tried XFS before, and we ruled it out for one simple reason: while creating a 27TB slice worked perfectly fine and finished within seconds, it proved impossible to repair an XFS filesystem this large. xfs_check crashes immediately (out of memory), and xfs_repair hogs all available memory and swap, bringing the machine to a screeching halt. We gave up after 72 hours and had to use the reset button for the first time in years.
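(For anyone hitting the same wall: recent xfsprogs releases let you rein in xfs_repair's memory appetite. A minimal sketch, assuming a reasonably new xfsprogs; the device name and the memory figure are placeholders, not values from this thread:

  # dry run: report problems only, modify nothing
  xfs_repair -n /dev/md0
  # real run: disable block prefetching (-P) and cap memory use
  # at roughly 16 GiB (-m takes megabytes)
  xfs_repair -P -m 16384 /dev/md0

Whether that is enough for a 27TB filesystem with many inodes is another question, but it should at least keep the box from drowning in swap.)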
Since the tools still don't seem to be fixed (I wonder what "high priority" could possibly mean if that statement was issued more than six months ago and nothing has happened since), and since it thus seems impossible to create an Ext4 filesystem larger than 16TB with a block size of 4096, my next question would be: has anyone ever tried an 8192 block size? Do all the filesystem repair/check tools still work? Is there anything that can possibly go wrong?
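(For what it's worth, the 16TB ceiling is simply the 32-bit block counter in the current tools multiplied by the block size, so a larger block size raises it on paper:

  # 2^32 blocks is the hard limit of a 32-bit block counter
  #   2^32 * 4096 bytes = 16 TiB
  #   2^32 * 8192 bytes = 32 TiB
  # hypothetical invocation; /dev/md0 is a placeholder device
  mkfs.ext4 -b 8192 /dev/md0

mke2fs only accepts 8192-byte blocks with a warning that they are not usable on most systems, though, and as far as I know the kernel refuses to mount an ext filesystem whose block size exceeds the CPU page size, which is 4 KiB on x86, so treat this as a thought experiment rather than a tested configuration.)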
Martin
--
With kind regards,
Peer-Joachim Koch
_________________________________________________________
Max-Planck-Institut fuer Biogeochemie
Dr. Peer-Joachim Koch
Hans-Knöll Str. 10      Telephone: +49 3641 57-6705
D-07745 Jena            Fax:       +49 3641 57-7705