Thomas Blume changed bug 911347
What Removed Added
CC   tchvatal@suse.com
Flags   needinfo?(bpesavento@infinito.it), needinfo?(tchvatal@suse.com)

Comment # 12 on bug 911347 from
(In reply to Bruno Pesavento from comment #11)
>
> I still think that limiting SystemMaxFileSize= well under
> DEFAULT_MAX_SIZE_UPPER (I would set it to 32 MiB on my test system) prevents
> most collateral damage on slow systems and has no adverse effects, to my
> understanding.
> Then the defaults will take care of small systems, trying to leave some free
> space on /var anyway.

Hm, https://bugzilla.redhat.com/show_bug.cgi?id=1006386 comment #84 indicates
that a small SystemMaxFileSize will hurt journal efficiency.
However, it doesn't seem that we would lose any logs because of it.
And 32 MiB is three times more than suggested in the RH bug, so the negative
effects may not be too bad.
Does SystemMaxFileSize=32M also help if you have SystemMaxUse unset?
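For reference, this is the kind of drop-in I have in mind (the file name and
the commented-out SystemMaxUse value are just suggestions, not tested defaults):

```ini
# /etc/systemd/journald.conf.d/size.conf (hypothetical drop-in)
[Journal]
# Cap each journal file well below DEFAULT_MAX_SIZE_UPPER
SystemMaxFileSize=32M
# Total disk usage cap; leave unset to test the question above
#SystemMaxUse=512M
```

A restart of systemd-journald is needed afterwards for the setting to take effect.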

> Fragmentation is almost nonexistent in my "productive" laptop so far
> (Tumbleweed, SSD, EXT4, 30GB root/).
> So, waiting for the designers to solve the root problem, I'm not
> complaining: the test disk is going to be wiped by the next RC anyway.
> But even halving the times I'm witnessing, I think that many laptops more
> than 5 years old are going to hit troubles with current defaults.

Sure, we should continue pursuing this, independently of the SystemMaxFileSize
setting.
Can you confirm that you see the fragmentation and the long journal flushes
only on btrfs?
If so, I should probably open a separate btrfs bug.

> Feel free to ask for more testing if needed.

Thanks, it would be good if you could also test the other fix, e.g. edit
/etc/sysctl.d/99-sysctl.conf and add:

net.unix.max_dgram_qlen=1000

there.
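To check and apply the value without rebooting, something like this should work
(a sketch; 1000 is just the test value suggested above):

```shell
# Check the current value (typically 10, or 512 where systemd raises it at boot)
cat /proc/sys/net/unix/max_dgram_qlen

# Apply the new value immediately, without a reboot (requires root):
#   sysctl -w net.unix.max_dgram_qlen=1000
# The entry in /etc/sysctl.d/99-sysctl.conf makes it persistent across reboots.
```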
Does this have any further influence on the journal flush time?

Tomas, could you also test all of the above?
I would really need testing on more hardware here.
