On Fri, Aug 28, 2015 at 11:03 AM, Michal Kubecek <mkubecek@suse.cz> wrote:
On Friday 28 of August 2015 10:34:48 Navin Parakkal wrote:
On Fri, Aug 28, 2015 at 9:20 AM, Jeff Mahoney <jeffm@suse.com> wrote:
1) Inertia
I'll skip this one.
Well, you shouldn't. To change things, one should have good reason. The more intrusive the change, the stronger the arguments for it should be. Unless there is a substantial gain, the change is not worth the effort and the risk.
2) Our testing hasn't shown any clear winner under all workloads between the two allocators and our mm experts have many years of experience working with the slab code.
https://oss.oracle.com/projects/codefragments/src/trunk/bufferheads
Once you allocate around 200M+ objects using this buffer.ko, you then insmod https://oss.oracle.com/projects/codefragments/src/trunk/fragment-slab/ and trigger fragmentation with 8192 > /proc/test (roughly the sequence sketched below).
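To make that concrete, the reproduction boils down to something like the following. This is only a sketch: the module file names and the /proc/test write are inferred from the description above, so check the projects' sources for the exact names and parameters.

    # Load the buffer-head allocator module from the bufferheads project
    # and let it build up 200M+ objects (this takes a while).
    insmod buffer.ko

    # Load the fragmentation module, then ask it to fragment the slabs
    # by writing the size to its proc file.
    insmod fragment-slab.ko
    echo 8192 > /proc/test

Per the follow-up below, CONFIG_SLAB kernels reportedly hit the soft lockup around the 200M object mark, while CONFIG_SLUB takes considerably more (800M+).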
I would rather appreciate a real-life scenario showing SLUB performing (significantly) better, not an artificially crafted extreme test case.
Well, I have to admit and agree here that after 800+ million buffer head objects, RHEL 7.1 with CONFIG_SLUB=y on the same hardware configuration (i.e. 32 CPUs and 128 GB) also gets into a soft lockup. But CONFIG_SLAB hits this faster, at around the 200+ million object range.

As for a real-life scenario: if you have access to SUSE customer bug reports, you can look up the one below; there are some details. Michal has provided the details.

[Bug 936077] L3: Watchdog detected hard LOCKUP

What worries me is the comment that you shouldn't touch /proc/slabinfo; slabtop certainly does (a quick way to watch it yourself is sketched below).

It would be good if someone could tell me where SLAB performs better than SLUB and vice versa, and what is common between them that could be causing this. Maybe some bit of learning.

Regards,
Navin
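On the /proc/slabinfo point: slabtop refreshes by re-reading /proc/slabinfo, so any warning against touching that file applies to slabtop as well. A minimal sketch for watching just the buffer_head cache without slabtop (needs root; the columns follow the header printed at the top of the file):

    # Print the buffer_head line every 5 seconds. Columns are:
    # name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> ...
    while true; do
        grep '^buffer_head ' /proc/slabinfo
        sleep 5
    done

Even this single read walks the slab lists under their locks, which is presumably why it degrades once object counts reach the hundreds of millions.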