: On Sun, 22 Apr 2001 18:39:23 +0200 (CEST), Peter Münster wrote:
> I would like to know if this is Linux-specific, or whether I can crash (or freeze) any Unix (Solaris, for example) just by eating a lot of memory.
"Unprotected" systems will at least suffer from DOS attacks.
> Where does this problem come from?
Lack of ulimit'ing.
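To make that concrete, here is a minimal sketch using the standard setrlimit() interface (ulimit is just the shell front end for the same mechanism). The 32 MB cap is an arbitrary figure chosen for the demonstration; with it in place, an oversized malloc() fails cleanly instead of letting one process eat the machine:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap this process's address space at 32 MB; the figure is
           arbitrary, chosen just for the demonstration. */
        struct rlimit rl = { 32 * 1024 * 1024, 32 * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* With the limit in place, a 64 MB request cannot succeed:
           malloc() returns NULL up front, instead of the kernel
           having to hunt for memory later. */
        void *p = malloc(64 * 1024 * 1024);
        printf("malloc(64 MB) %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }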
> When a program calls malloc() (or something like that), I think the system should return an error if there is not enough memory.
When is "not enough memory" really "not enough memory"? If you fork an application with a code size of 5 MB a hundred times - do you need 5 MB of "memory", or do you need 500 MB? Well, Linux thinks that in the above scenario 5 MB of memory are perfectly sufficient, even if your system only provides a total amount of virtual memory of 100 MB. IOW, Linux overcommits on the amount of memory it actually has - it will give you an infinite amount of memory (relatively speaking :->) while it only has a fixed amount of backing virtual memory at its disposal.
> It seems that Linux does *not* return an error (or does so only when it is already too late).
Current released kernels (AFAIK) will always overcommit. If a situation arises where more virtual memory is required than the system can deliver, a process is killed (the "OOM killer"). I believe an experimental patch has recently been published that allows overcommitting to be turned off. I, for one, would turn overcommitting off immediately if it finds its way into the kernel.
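I can't say what knob the experimental patch actually exposes. Purely as an illustration, here is a sketch that assumes a /proc toggle of the form /proc/sys/vm/overcommit_memory - the file name and the meaning of "2" (strict accounting, i.e. no overcommit) are borrowed from what mainline kernels later shipped, so treat both as assumptions about the patch. Writing the file needs root:

    #include <stdio.h>

    int main(void)
    {
        /* Assumption: the overcommit patch is controlled through
           /proc/sys/vm/overcommit_memory, and writing "2" selects
           strict accounting (no overcommit), as on later mainline
           kernels. Must be run as root. */
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
        if (f == NULL) {
            perror("fopen /proc/sys/vm/overcommit_memory");
            return 1;
        }
        fputs("2\n", f);
        fclose(f);
        puts("strict overcommit accounting requested");
        return 0;
    }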