Comment #3 on bug 1107617
(In reply to Franck Bui from comment #1)

Neither of us was at SUSE when the factor C=64 was introduced. I thought what
you were afraid of was breaking small systems, which is what I believe this
factor was introduced for (bug 944812?). The risk of regressions on small
systems is smaller with K=32; see the table above for numbers.

BUT: if you prefer to take the upstream formula 1:1, that's fine with me. I
just wanted to point out a possible compromise.
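
To make the comparison concrete, here is a rough sketch (Python). The formula
shape is my assumption: I model children-max as base + factor * nr_cpus, i.e.
the same shape as the upstream heuristic with a different CPU multiplier; the
real SUSE formula may have a different base term and a memory cap, so take the
numbers as indicative only.

# Rough comparison of the per-CPU factor on small systems.
# ASSUMPTION: children-max ~= BASE + factor * nr_cpus; the actual SUSE
# formula may differ in its base term and memory cap.
BASE = 8  # assumed base, borrowed from the upstream heuristic

def estimate(nr_cpus, factor):
    return BASE + factor * nr_cpus

for nr_cpus in (1, 2, 4, 8):
    print(f"{nr_cpus:2d} CPUs:  factor 2 (upstream): {estimate(nr_cpus, 2):4d}"
          f"   K=32: {estimate(nr_cpus, 32):4d}   C=64: {estimate(nr_cpus, 64):4d}")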

(Wrt the short-lived tasks: a few ms or even µs is not short-lived enough if
the kernel generates uevents very quickly, which it does during coldplug.
Therefore I think the limit does matter.)
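
To put a (made-up) number on it: the number of workers alive at any moment is
roughly the uevent rate times the per-event runtime, so during a coldplug burst
even millisecond tasks add up.

# Back-of-the-envelope: concurrent workers ~= uevent rate * per-event runtime.
# Both numbers below are hypothetical, just to illustrate the effect.
uevents_per_second = 50_000  # hypothetical coldplug burst rate
for runtime_ms in (0.1, 1.0, 5.0):
    workers = uevents_per_second * runtime_ms / 1000
    print(f"{runtime_ms:4.1f} ms per event -> ~{workers:.0f} workers in flight")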

> > *HOWEVER*, we have indeed seen systems with >10000 workers,
> BTW what kind of HW can generate so many events ?

See bug 1103094, comment 121: there are >3000 memory devices (DIMMs) alone. For
the system in question, c = 1536 (CPUs) and m = 32 TiB (RAM). The resulting
children-max limit is ~99000 with the SUSE formula, and still 3088 with the
upstream formula. Extrapolate a few years from here, and you'll see systems
that reach children-max > 10000 even with the upstream estimate, and that will
probably also have that many sysfs device nodes.
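
To illustrate the extrapolation (the 2*c scaling is just my reading of the
upstream heuristic; the exact constants in the version discussed here may
differ slightly, which is why 1536 CPUs comes out near, not exactly at, 3088):

# Project the upstream-style estimate over a few hardware generations,
# assuming CPU counts keep doubling and children-max grows as ~8 + 2 * nr_cpus.
# The memory cap is ignored: with 32 TiB it is nowhere near limiting.
for generation, nr_cpus in enumerate((1536, 3072, 6144)):
    print(f"gen +{generation}: {nr_cpus:5d} CPUs -> children-max ~{8 + 2 * nr_cpus}")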

So while we are at it, we should also consider (and perhaps discuss with
upstream) an absolute upper limit. Something "in the order of a few thousand"
seems appropriate. I've asked the partner in bug 1103094 to check whether his
system is stable with udev.children-max=3088; that information may be a
starting point.
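
For completeness, a sketch of what such an absolute ceiling would look like;
the value 3072 is only a placeholder for "in the order of a few thousand", and
the test with udev.children-max=3088 should tell us whether a cap in that range
is realistic.

# Clamp whatever the formula yields to an absolute ceiling.
# ABS_MAX is a hypothetical placeholder, not a proposed value.
ABS_MAX = 3072

def capped_children_max(estimate):
    return min(estimate, ABS_MAX)

print(capped_children_max(99000))  # huge system: clamped to 3072
print(capped_children_max(64))     # small system: unchanged, 64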

