On Fri, Aug 28, 2015 at 9:20 AM, Jeff Mahoney <jeffm@suse.com> wrote:
On 8/27/15 10:04 PM, Navin Parakkal wrote:
Hi, I would like to know why SUSE, through SLES and openSUSE, still uses CONFIG_SLAB as the allocator instead of CONFIG_SLUB.
I've seen worst-case scenarios where buffer_head allocations lock up when there is fragmentation and the number of slabs runs into the millions.
Is there any particular case where SLAB performs better than SLUB? I'm not investigating low-end systems, where SLOB is another option.
Mel can probably comment further, but there are two answers.
1) Inertia
I'll skip this one.
2) Our testing hasn't shown a clear winner between the two allocators across all workloads, and our mm experts have many years of experience working with the slab code.
https://oss.oracle.com/projects/codefragments/src/trunk/bufferheads

Once you allocate around 200M+ objects using this buffer.ko, you insmod
https://oss.oracle.com/projects/codefragments/src/trunk/fragment-slab/
and fragment with 8192 > /proc/test.

I found that on RHEL 6.6, which uses CONFIG_SLAB=y, CPUs got stuck on a box with 128 GB of physical RAM and 32 CPUs. But on CentOS 7.1, which uses CONFIG_SLUB=y, I didn't notice the problem of CPUs getting stuck, or that message in dmesg. I did insmod buffer.ko, then 987654321 > /proc/test, and it allocated around 400 million+ buffer_head objects as per /proc/slabinfo.

I might have done something wrong, or there could be other things that I'm missing. It would be of much help and good learning if you can help.

Regards,
Navin
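
P.S. For anyone who wants to follow along without digging through the repo, here is a minimal sketch of what a buffer.ko-style module might look like. This is my own reconstruction, not the actual source (which lives at the bufferheads URL above): the /proc/test name matches the thread, but the handler names and the choice of b_assoc_buffers as the chaining field are assumptions, and it uses the file_operations-based proc API of kernels before 5.6, which covers the RHEL 6.6 and CentOS 7.1 kernels discussed here.

/*
 * Hypothetical sketch of the bufferheads test module described in this
 * thread (the real source is at the oss.oracle.com URL above). Writing a
 * number to /proc/test allocates that many buffer_head objects from the
 * buffer_head slab cache and chains them so they can be freed on unload.
 */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/buffer_head.h>
#include <linux/list.h>
#include <linux/uaccess.h>

static LIST_HEAD(bh_list);
static unsigned long bh_count;

static ssize_t test_write(struct file *file, const char __user *ubuf,
			  size_t len, loff_t *ppos)
{
	char kbuf[32];
	unsigned long n, i;

	if (len >= sizeof(kbuf))
		return -EINVAL;
	if (copy_from_user(kbuf, ubuf, len))
		return -EFAULT;
	kbuf[len] = '\0';
	if (kstrtoul(kbuf, 10, &n))
		return -EINVAL;

	for (i = 0; i < n; i++) {
		struct buffer_head *bh = alloc_buffer_head(GFP_KERNEL);

		if (!bh)
			break;
		/* reuse the b_assoc_buffers list_head to chain test objects */
		list_add(&bh->b_assoc_buffers, &bh_list);
		bh_count++;
		if (!(i % 65536))
			cond_resched();	/* don't cause soft lockups ourselves */
	}
	pr_info("bufferheads: %lu objects allocated in total\n", bh_count);
	return len;
}

static const struct file_operations test_fops = {
	.owner = THIS_MODULE,
	.write = test_write,
};

static int __init bufferheads_init(void)
{
	if (!proc_create("test", 0200, NULL, &test_fops))
		return -ENOMEM;
	return 0;
}

static void __exit bufferheads_exit(void)
{
	struct buffer_head *bh, *tmp;

	remove_proc_entry("test", NULL);
	list_for_each_entry_safe(bh, tmp, &bh_list, b_assoc_buffers) {
		list_del(&bh->b_assoc_buffers);
		free_buffer_head(bh);
		cond_resched();
	}
}

module_init(bufferheads_init);
module_exit(bufferheads_exit);
MODULE_LICENSE("GPL");

With something like this loaded, 987654321 > /proc/test drives the buffer_head line in /proc/slabinfo into the hundreds of millions of objects, which is the state in which the fragment-slab module is then loaded to fragment the cache.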