(Another repeat response for some due to using the wrong email address again) On Fri, Aug 28, 2015 at 12:01:11PM -0700, Linda A. Walsh wrote:
Michal Kubecek wrote:
On Friday 28 of August 2015 10:34:48 Navin Parakkal wrote:
On Fri, Aug 28, 2015 at 9:20 AM, Jeff Mahoney <jeffm@suse.com> wrote:
1) Inertia: I'll skip this one.
Well, you shouldn't. To change things, one should have good reason. The more intrusive the change, the stronger the arguments for it should be. Unless there is a substantial gain, the change is not worth the effort and the risk.
==== SUSE already changed this. The onus of justifying the use of the outdated tech is on SUSE, as they have not followed the upstream default since 2.6.23.
That assumes that the upstream default was changed for a compelling reason -- it wasn't. The commit that enabled it by default simply read "There are some reports that 2.6.22 has SLUB as the default. Not true! This will make SLUB the default for 2.6.23." It was asserted at the time, by the author of SLUB, that slab had inherent unfixable flaws and that it needed to go away but there never was a universal consensus on this. Around the same time, SLUB was found by multiple independent tests to be slower than SLAB. There were attempts to address some limitations in the SLUB implementation (https://lwn.net/Articles/311502/) but they were never finished as the author moved on.
From stackoverflow:
Slab is the original, based on Bonwick's seminal paper and available since Linux kernel version 2.2. It is a faithful implementation of Bonwick's proposal, augmented by the multiprocessor changes described in Bonwick's follow-up paper[2].
Slub is the next-generation replacement memory allocator, which has been the default in the Linux kernel since 2.6.23. It continues to employ the basic "slab" model, but fixes several deficiencies in Slab's design, particularly around systems with large numbers of processors. Slub is simpler than Slab.
What should you use? Slub, unless you are building a kernel for an embedded device with limited memory. In that case, I would benchmark Slub versus SLOB and see what works best for your workload. There is no reason to use Slab; it will likely be removed from future Linux kernel releases.
==============
I.e. if SUSE is aiming at small embedded devices, or optimizing for toasters and microwave ovens, SLAB is recommended.
A stack overflow post supported by no data is not the basis for making a decision on what small object allocator to use in the kernel. SLOB is an allocator that may be designed for a toaster or a microwave oven. SLAB has been and currently is used on machines with >= 1024 CPUs. It is extremely rare to identify a workload where SLAB is the limiting factor except in the case where the debugging interfaces are used aggressively. I say extremely rare because I'm not aware of one. The only case where I heard that SLUB mattered was on specialised applications that heavily relied on the performance of the SLUB fast path and *only* the SLUB fast path. That scenario does not apply to many workloads.
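For context, SLAB, SLUB and SLOB all implement the same kmem_cache interface, so which one a kernel uses is a build-time Kconfig choice (CONFIG_SLAB, CONFIG_SLUB or CONFIG_SLOB) rather than something callers can see. Roughly, a kernel module exercising that common API looks like the sketch below; the cache name and object type are made up for illustration, and identical calls work against any of the three allocators.

#include <linux/module.h>
#include <linux/slab.h>

/* Illustrative object type; any fixed-size structure is handled the same way. */
struct foo_obj {
        int id;
        char payload[60];
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
        struct foo_obj *obj;

        /*
         * Same call whether the kernel was built with CONFIG_SLAB,
         * CONFIG_SLUB or CONFIG_SLOB; only the allocator behind it differs.
         */
        foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo_obj),
                                      0, SLAB_HWCACHE_ALIGN, NULL);
        if (!foo_cache)
                return -ENOMEM;

        obj = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (!obj) {
                kmem_cache_destroy(foo_cache);
                return -ENOMEM;
        }

        obj->id = 1;
        kmem_cache_free(foo_cache, obj);
        return 0;
}

static void __exit foo_exit(void)
{
        kmem_cache_destroy(foo_cache);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");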
But if SUSE wants to support computers with *more processors* (which seems to be the trend for most customers), then SLUB is recommended.
A recommendation on Stack Overflow is rarely an authoritative source. The author of SLUB used to consistently assert that SLUB was the only way that more processors could be effectively supported, but this was rarely supported by independent tests. For example, SLUB became the default in 2007. In 2010, tests by a Google engineer indicated that SLUB was 11.5% to 20.9% slower than SLAB (https://lkml.org/lkml/2010/7/14/448). Five years later, the same engineer states that SLUB is still problematic for workloads they care about (https://lkml.org/lkml/2015/8/27/559). I know that at the time I briefly looked at SLUB performance and also found it to be much slower, but did not do anything with the information as I was not working on distribution kernels at the time.
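To give a sense of what the fast-path comparisons being argued about look like, a rough sketch of a microbenchmark module is below: it times a tight kmem_cache_alloc()/kmem_cache_free() loop on whatever CPU it happens to run on. The cache name, object size and iteration count are arbitrary, and a meaningful comparison would need pinned CPUs, repeated runs and both CONFIG_SLAB and CONFIG_SLUB builds of the same kernel.

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/ktime.h>

#define BENCH_ITERATIONS 1000000UL
#define BENCH_OBJ_SIZE   256

static int __init slab_bench_init(void)
{
        struct kmem_cache *cache;
        ktime_t start, end;
        unsigned long i;
        void *obj;

        cache = kmem_cache_create("bench_cache", BENCH_OBJ_SIZE, 0, 0, NULL);
        if (!cache)
                return -ENOMEM;

        start = ktime_get();
        for (i = 0; i < BENCH_ITERATIONS; i++) {
                /* Exercises only the allocator's alloc/free fast path. */
                obj = kmem_cache_alloc(cache, GFP_KERNEL);
                if (!obj)
                        break;
                kmem_cache_free(cache, obj);
        }
        end = ktime_get();

        pr_info("slab_bench: %lu alloc/free pairs in %lld ns\n",
                i, ktime_to_ns(ktime_sub(end, start)));

        kmem_cache_destroy(cache);

        /* Fail the load on purpose so the module does not stay resident. */
        return -EAGAIN;
}

module_init(slab_bench_init);
MODULE_LICENSE("GPL");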
So to turn the inertia question around, what GOOD reasons does SUSE have for staying with a memory allocator designed for single-processor embedded devices vs. larger multi-core systems?
I do not speak for SUSE so do not consider this an official response, but here is my take on it. In the 8 years since SLUB became the default, there have been a significant number of commits aimed at improving the performance of SLUB. According to my own research, those 8 years of effort have brought the performance of SLUB *in line* with SLAB, and SLUB rarely exceeds SLAB significantly except in microbenchmarks. I know that vendors of very large machines have conducted their own performance evaluations of the kernel and, to the best of my knowledge, SLAB has not been identified as a limiting factor in that time. I'm not aware of a compelling reason to pick one over the other at the moment, but the fact that SLUB was known to have severe performance regressions 3 years after it became the default means that openSUSE staying on SLAB was probably the correct choice.

-- 
Mel Gorman
SUSE Labs