http://bugzilla.suse.com/show_bug.cgi?id=1165351
http://bugzilla.suse.com/show_bug.cgi?id=1165351#c20

--- Comment #20 from Thorsten Kukuk <kukuk@suse.com> ---
(In reply to Luis Chamberlain from comment #19)
> (In reply to Thorsten Kukuk from comment #17)
> > As far as I understand the code, no, this will not work. For speed,
> > they do something that is fast but also very memory hungry: they keep
> > a table with one slot for every possible file descriptor and look the
> > data up by index, instead of walking a list of the descriptors that
> > actually exist.
>
> Let us think proactively as well: if this was done, was it inspired by
> another codebase? Who else might be following a similar practice?
Of course, this was common best practice 25 years ago if you needed performance.
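To make that concrete, here is a minimal sketch of the general pattern (this is not the actual code of the package in question, just an illustration): size a flat table from the file-descriptor limit at startup, then index it directly by fd.

#include <stdlib.h>
#include <sys/resource.h>

struct fd_info {
    const char *path;           /* whatever per-fd state gets cached */
};

static struct fd_info *fd_table;
static rlim_t fd_table_size;

static int fd_table_init(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;

    /* 25 years ago rl.rlim_max was effectively FD_SETSIZE (1024),
       so one slot per possible descriptor was cheap.  With today's
       limits this single calloc() can be enormous, or fail outright
       if rlim_max is RLIM_INFINITY. */
    fd_table_size = rl.rlim_max;
    fd_table = calloc(fd_table_size, sizeof(*fd_table));
    return fd_table ? 0 : -1;
}

static struct fd_info *fd_table_lookup(int fd)
{
    /* Without this bounds check, any fd beyond the size the table
       was allocated with is an out-of-bounds array access. */
    if (fd < 0 || (rlim_t)fd >= fd_table_size)
        return NULL;
    return &fd_table[fd];
}

The lookup is O(1), which is why this was attractive; the cost is that the allocation scales with the largest *possible* descriptor, not with the number of descriptors in use.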
If the table is sized smaller than the descriptors that are actually possible, you can get an out-of-bounds array access. The code was written at a time when 1024 file handles was the maximum possible, not for numbers as big as today's:
chivo:~/:[0]# cat /proc/sys/fs/file-max
9223372036854775807
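For scale (assuming, hypothetically, just one 8-byte pointer of state per possible descriptor): that limit is 2^63 - 1, so the table alone would need roughly 2^63 * 8 bytes = 2^66 bytes = 64 EiB. No sizing tweak rescues the pattern at limits like that.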
That's why this is no longer good practice today, and the code needs to be rewritten.

--
You are receiving this mail because:
You are on the CC list for the bug.