http://bugzilla.suse.com/show_bug.cgi?id=1165351
http://bugzilla.suse.com/show_bug.cgi?id=1165351#c28

--- Comment #28 from Thomas Blume <thomas.blume@suse.com> ---
(In reply to Thomas Blume from comment #26)
While looking for some more background, I found this:
https://www.shrubbery.net/solaris9ab/SUNWdev/ONCDG/p27.html
--> The __rpc_dtbsize() function calls the getrlimit() function to determine the maximum value that the system can assign to a newly created file descriptor. The result is cached for efficiency. --<
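For reference, __rpc_dtbsize() in libtirpc (src/rpc_generic.c) looks roughly like the following; this is paraphrased from memory, so treat it as a sketch rather than the exact current source:

  #include <sys/resource.h>   /* getrlimit(), RLIMIT_NOFILE */

  int
  __rpc_dtbsize(void)
  {
          static int tbsize;   /* cached result */
          struct rlimit rl;

          if (tbsize)
                  return (tbsize);
          if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
                  return (tbsize = (int)rl.rlim_max);
          /* getrlimit() failed: fall back to a small, conservative default */
          return (32);
  }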
If that's the purpose, I would assume that:
return (tbsize = (int)rl.rlim_max);
is correct. So maybe we just need to subtract something from dtbsize in case it has an insane value?
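A minimal sketch of that safeguard, using a hard cap rather than a subtraction (RPC_MAX_FDS is a made-up bound, only to illustrate clamping an insane rlimit instead of failing):

  #define RPC_MAX_FDS (1024 * 1024)   /* hypothetical upper bound, for discussion */

          if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
                  if (rl.rlim_max > RPC_MAX_FDS)
                          rl.rlim_max = RPC_MAX_FDS;   /* clamp insane values */
                  return (tbsize = (int)rl.rlim_max);
          }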
There is already a patch to address an integer overflow in dtbsize, see:
https://bugzilla.redhat.com/show_bug.cgi?id=1600284
(libtirpc git commit 56b780e61ed4bae8b728a600fc5ac8052d0d3582).

--> + if ( (size_t) dtbsize > SIZE_MAX/sizeof(cond_t)) {
+     mutex_unlock(&clnt_fd_lock);
+     thr_sigsetmask(SIG_SETMASK, &(mask), NULL);
+     ce->cf_stat = RPC_SYSTEMERROR;
+     ce->cf_error.re_errno = EOVERFLOW;
+     goto err;
+ } --<

Maybe we could add a variable for the available memory to this calculation (a rough sketch follows at the end of this comment)? I guess an out-of-array access becomes irrelevant when the system runs out of memory.

Thorsten, I'm just collecting ideas before discussing this upstream; it would be good if you could give your opinion.
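P.S. To make the "available memory" idea a bit more concrete, here is a rough sketch. It is purely illustrative: it assumes glibc's sysconf(_SC_AVPHYS_PAGES)/_SC_PAGESIZE, and where exactly it would sit next to the overflow check in clnt_vc.c would need discussion:

  #include <unistd.h>   /* sysconf() */

          /* Cap dtbsize so the per-fd cond_t array cannot outgrow the
           * memory that is currently available; only a sketch. */
          long avail_pages = sysconf(_SC_AVPHYS_PAGES);
          long page_size   = sysconf(_SC_PAGESIZE);

          if (avail_pages > 0 && page_size > 0) {
                  size_t mem_cap = ((size_t)avail_pages * (size_t)page_size)
                                   / sizeof(cond_t);
                  if ((size_t)dtbsize > mem_cap)
                          dtbsize = (int)mem_cap;
          }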