Per Jessen wrote:
> No, that's true, that's not an issue here either - I was out looking for reasons why directory access seems to have slowed down over time, and just happened on this apparent anomaly.
I don't suppose you had a look at the fragmentation level, as I last asked:

> Does JFS have any utilities to tell you how fragmented the files, directories and free space are?

13% free is getting close to space exhaustion -- 10-15% free is usually the lower limit, though in practice I find it's best to keep drives below 75% usage for continued fast performance (not that I always do that, mind you, but when possible). If it is a mostly "read-only" drive, then lower amounts of free space are usually fine.
Older file systems usually limited the amount of user-usable space to 85-90% of capacity, because various algorithms -- like finding free space, or trying to map space contiguously -- slow down when you get below 10-15% free.
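As a quick sanity check (nothing JFS-specific, just standard coreutils), it's worth watching both block and inode usage on that mount:

    # block usage / free space on the /var filesystem
    df -h /var
    # inode usage - millions of small mail files can run out of
    # inodes long before the blocks do
    df -i /var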
> I have a 600GB filesystem mounted on /var - key directories under /var are:
>
> /var/spool/postfix-in/{incoming,active} with 2 levels of hashing. On a busy day or after delays, each subdir might easily reach 10,000 files, 99% of them small files of less than 100KB (emails).
>
> /var/spool/elsewhere/dir{0,1,2,3...1000}/maildirs - each such maildir might have 100,000 files.
So you are creating/destroying what - 10k files/day? 100k files/day? If you've been running that file system at 85-90% full for some time, with lots of file creates and destroys, most old file systems aren't good about consolidating space (like never shrinking directories, even when that would free up 3MB of space). You could measure how big those subdirs actually get - see the sketch at the end of this mail.

Few file systems have any sort of defragment utility, and for what we are talking about here (defragmenting free space), I don't know of any Linux file systems that have one -- but most could be helped by dumping the disk contents to another disk. Since some file systems aren't good about cleaning up and consolidating free space (leaving random bits of allocated space lying around), it might be most prudent to re-mkfs your file system before copying back.

rsync isn't the best tool to copy off the files -- you might try 'cp -au', for example, which will only copy files that are missing or have newer timestamps (unless your target machine is far away over a slow link).

If you really need to slow rsync down:

1) nice -19 ionice -c3 -- run it at the lowest CPU priority and in the idle I/O scheduling class.

2) Limit rsync to only run on one of the CPUs -- normally, since it has a client and a sender, they'll more often than not fall onto separate cores. So: sudo nice -19 ionice -c3 taskset -c 1 rsync ...

3) There's also the "--bwlimit" switch to rsync, but I doubt that bandwidth is your problem -- more likely rsync is having to do a lot of seeking and comparing of metadata.

A combined example is below. Beyond that, it gets hairier to control things. Doable, but hairier.
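Putting 1) through 3) together, something like this (the destination here is just a placeholder, --bwlimit is in KB/s, and 'nice -n 19' is the modern spelling of 'nice -19'):

    # lowest CPU priority, idle I/O class, pinned to CPU 1,
    # transfer capped at ~10MB/s
    sudo nice -n 19 ionice -c3 taskset -c 1 \
        rsync -a --bwlimit=10240 /var/spool/ backuphost:/backup/spool/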
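And for the file counts: a rough sketch, using the paths from your layout above -- the */*/ glob assumes your two hash levels under incoming/, 'ls -f' skips sorting (which matters in directories this big), and the counts include '.' and '..':

    # count entries in each hashed subdir of the incoming queue
    # and show the ten largest
    for d in /var/spool/postfix-in/incoming/*/*/; do
        printf '%8d %s\n' "$(ls -f "$d" | wc -l)" "$d"
    done | sort -rn | head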