Re: [suse-security] Reasons for a system to freeze?
Hello, again... I think that it all has to do with the Linux kernel's policy of "overcommitment" of VM. There have been *many* long discussions of this on the linux-kernel mailing list -- if you want to get *all* of the gory details plus *a lot* more, go do a search for "linux overcommit". The short answer is that the kernel *intentionally* will give applications more virtual memory than really exists. The argument goes that in a great many cases, the application never uses most of that memory.

Doesn't make sense? Try this example: "BigApplication" (e.g. emacs or netscape) has allocated 100M of RAM. The user does something that causes it to run a sub-process "/bin/ls". BigApplication calls "fork()" and then the child process calls "exec()". If the kernel *didn't* overcommit, and you didn't have another 100M of VM, the "fork()" would fail, even though several lines of code later the child process exec's "ls", which needs *much less* VM than BigApplication. There are other scenarios, as well, where overcommitment is a *good thing*, but it does have its downsides, too...

The 2.4 kernel has a built-in "out of memory" killer which attempts to kill the process(es) that are sucking the life out of the system. I think that in 2.4.3 it's getting pretty good.

I think that the behaviour of Solaris and other kernels is somewhat different -- they don't overcommit and handle VM in a very different way, but I don't remember the details. Many on the l-k list contend that the Linux way provides better performance, etc., BTW.

Later -Nick
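(For what it's worth, here is a minimal C sketch of the fork()/exec() scenario described above -- the 100M figure and /bin/ls are just the numbers from the example, nothing measured:)

/*
 * Sketch of the fork()/exec() pattern that motivates overcommit.
 * "BigApplication" is simulated by a process that has allocated and
 * touched ~100M; the child immediately exec's /bin/ls, which needs far
 * less memory than the parent.  Without overcommit, the fork() could
 * fail unless the system had another ~100M of VM to spare, even though
 * that memory would never actually be used.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    /* Pretend to be "BigApplication": grab ~100M and touch it. */
    size_t big = 100UL * 1024 * 1024;
    char *mem = malloc(big);
    if (mem == NULL) {
        perror("malloc");
        return 1;
    }
    memset(mem, 1, big);

    pid_t pid = fork();    /* in principle needs another 100M of VM      */
    if (pid < 0) {
        perror("fork");    /* this is what could fail without overcommit */
        return 1;
    }
    if (pid == 0) {
        /* Child: the 100M is never written to here, so with copy-on-write
         * plus overcommit it is never really needed. */
        execl("/bin/ls", "ls", (char *)NULL);
        perror("execl");
        _exit(1);
    }
    waitpid(pid, NULL, 0);
    free(mem);
    return 0;
}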
On Sun, 22 Apr 2001, Ashley wrote:
I would like to see your script. It sounds useful. Perhaps even more than $0.02 worth.
On Fri, Apr 20, 2001 at 07:42:32PM +0600, Nick LeRoy wrote:
Hello...
Just thought I'd add my $0.02 worth in...
I *always* run netscape from a script which uses ulimit to set the amount of memory it can get. If I don't, it sometimes sucks the life out of the machine in similar ways by using all memory. With it ulimited, it dies after a while (when it can't get any more memory), but my system still lives on. I've been doing this for several years.
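(Not the actual script, but the same idea can be sketched in C: set an address-space limit with setrlimit() -- which is what the shell's ulimit builtin uses underneath -- and then exec the browser. The 64M cap and the netscape path below are placeholders, not the values from the real script.)

/*
 * Sketch of a "run netscape with a memory cap" wrapper, roughly what a
 * shell script doing `ulimit`-then-exec amounts to.  The 64M cap and
 * the binary path are placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char *argv[])
{
    (void)argc;

    struct rlimit rl;
    rl.rlim_cur = 64UL * 1024 * 1024;   /* soft limit: 64M of address space */
    rl.rlim_max = 64UL * 1024 * 1024;   /* hard limit */

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* With the limit in place, allocations inside netscape start failing
     * instead of the browser eating the whole machine. */
    execv("/usr/bin/netscape", argv);
    perror("execv");
    return 1;
}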
Hello, I would like to know if this is Linux specific, or am I able to crash (or freeze) any Unix (Solaris, for example) just by eating a lot of memory? Where does this problem come from? When a program calls malloc() (or something like that), I think the system should return an error if there is not enough memory. It seems that Linux does *not* return an error (or only too late). Am I right or wrong? Regards, Peter
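(To make the question concrete, a test along these lines is the kind of thing being described -- on Linux with its default overcommit behaviour the malloc() calls typically keep succeeding, and the trouble only starts when the pages are actually written to. The chunk size and count are arbitrary.)

/*
 * Keep allocating memory and see whether malloc() ever reports failure.
 * On Linux with overcommit enabled, the allocations usually all
 * "succeed"; the system only gets into trouble when the pages are
 * actually written (the memset), at which point the OOM killer may
 * step in.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 64UL * 1024 * 1024;  /* 64M per allocation */
    for (int i = 0; i < 1024; i++) {          /* up to 64G requested */
        char *p = malloc(chunk);
        if (p == NULL) {
            /* This is the error return one might expect... */
            printf("malloc failed after %d chunks\n", i);
            return 0;
        }
        /* ...but it often never triggers; writing to the pages is what
         * really consumes memory. */
        memset(p, 1, chunk);
        printf("chunk %d allocated and touched\n", i);
    }
    return 0;
}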
-- Peter Münster http://notrix.net/pm-vcard
--------------------------------------------------------------------- To unsubscribe, e-mail: suse-security-unsubscribe@suse.com For additional commands, e-mail: suse-security-help@suse.com