Comment #7 on bug 1199970
From the tracing data, it is clear that with mb_optimize_scan=1 jbd2 does
considerably more IO:

mb_optimize_scan=0:
Stats for process [jbd2/sdb1-8] (36694)
Queued writes 657489 (2629956 KB)

mb_optimize_scan=1:
Stats for process [jbd2/sdb1-8] (35727)
Queued writes 745845 (2983380 KB)
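
(For reference, per-process queued-write totals like the above can be
aggregated with a small script. The sketch below assumes blkparse's default
output layout and its 'Q' queue events; that is an assumption about the
tracing setup, not necessarily the tool that produced the numbers above.)

#!/usr/bin/env python3
# Sketch: sum queued write requests and KB per process from blkparse output.
# Assumed default field layout (an assumption about the trace setup):
#   8,16  3  11  0.009507758  36694  Q  WS  2383141952 + 8 [jbd2/sdb1-8]
#   dev  cpu seq  time         pid  act rwbs  sector   + nsect  [comm]
import sys
from collections import defaultdict

queued = defaultdict(lambda: [0, 0])   # comm -> [requests, KB]

for line in sys.stdin:
    f = line.split()
    # keep only queue ('Q') events whose rwbs contains a write ('W')
    if len(f) < 11 or f[5] != 'Q' or 'W' not in f[6]:
        continue
    try:
        nsect = int(f[9])          # request size in 512-byte sectors
    except ValueError:
        continue                   # skip summary/other lines
    comm = f[10].strip('[]')       # e.g. jbd2/sdb1-8
    queued[comm][0] += 1
    queued[comm][1] += nsect // 2  # sectors -> KB

for comm, (reqs, kb) in sorted(queued.items(), key=lambda i: -i[1][1]):
    print(f"Stats for process [{comm}]")
    print(f"Queued writes {reqs} ({kb} KB)")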

The number of commits is actually somewhat lower with mb_optimize_scan=1:

commits mb_optimize_scan=0: 26367
commits mb_optimize_scan=1: 25582
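
(The commit counts and per-commit block averages can also be cross-checked
against the journal statistics the kernel exports. A sketch, assuming the
/proc/fs/jbd2/<journal>/info format; note these counters are cumulative since
mount, so take a snapshot before and after each run and diff them.)

#!/usr/bin/env python3
# Sketch: extract transaction count and per-transaction averages from the
# jbd2 /proc statistics.  Path and line wording are assumptions based on
# the jbd2 stats code; values are cumulative since mount.
import re
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "/proc/fs/jbd2/sdb1-8/info"
with open(path) as f:
    text = f.read()

m = re.search(r"(\d+) transactions", text)
print("transactions (commits):", m.group(1) if m else "unknown")

for label in ("handles per transaction",
              "blocks per transaction",
              "logged blocks per transaction"):
    m = re.search(rf"(\d+) {label}", text)
    if m:
        print(f"{label}: {m.group(1)}")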

So commits are considerably larger with mb_optimize_scan=1. The load dirties
only inodes, block & inode bitmaps. So likely with mb_optimize_scan=1 we spread
processes across more block groups, which results in dirtying more of these
metadata blocks. In theory each process can dirty up to 6 blocks per commit
(unlink + create can each dirty one block bitmap, one inode bitmap, and one
inode table block). Given we have 16 processes, this can result in commits up
to 96 blocks large. The mb_optimize_scan=0 average is 23.9 blocks per commit,
the mb_optimize_scan=1 average is 28.1 blocks per commit. And counting the
groups touched in each commit confirms that with mb_optimize_scan=1 we are
indeed touching more groups per commit.
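
(To make the arithmetic above explicit, a short sketch follows; the closing
comment about the ~1 block/commit gap versus the quoted averages is a guess
about the accounting difference, not something verified from the traces.)

#!/usr/bin/env python3
# Worked numbers behind the per-commit estimates above.

procs = 16
blocks_per_proc = 6   # unlink + create: each can dirty one block bitmap,
                      # one inode bitmap and one inode table block
print("theoretical max blocks per commit:", procs * blocks_per_proc)  # 96

runs = {
    "mb_optimize_scan=0": (657489, 26367),   # queued 4k writes, commits
    "mb_optimize_scan=1": (745845, 25582),
}
for name, (blocks, commits) in runs.items():
    print(f"{name}: {blocks / commits:.1f} queued blocks per commit")
# Prints ~24.9 and ~29.2.  The quoted 23.9 / 28.1 averages are about one
# block per commit lower, presumably because they count only the dirtied
# metadata blocks and not per-transaction journal overhead (descriptor /
# commit blocks) -- an assumption, not verified here.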

Now I have to think whether this wider spreading of processes is a desirable
thing or not...

