Pardon, the first reply was signed :-(

---------------

Hi, pardon me for jumping in ;-), but I have had my own experience with exhausted memory under kernel 2.4.3. I hope this is not OT; it is not about access security, but about reliability.

My private box runs SuSE 7.1 with kernel 2.4.3, 128 MB RAM and 64 MB swap (this may be odd, but normally it works). I ran a recent version of a WAV editor (The Art of Noise) to edit a 70 MB WAV file. The first instance ran smoothly, but when I opened a second instance it consumed all RAM while loading the WAV, then all swap space as well. Once the overcommitment took effect and all VM was exhausted, the box was completely unusable: constantly swapping and paging, nearly frozen, with heavy disk activity. I had to switch it off. There was no sign of the OOM killer. I do not use any ulimits (maybe I should establish them).

In my opinion this is not a reliable allocation scheme for an OS used to run a server; it is only acceptable on a workstation, where such behaviour can be tolerated more easily. Only my personal opinion, no public flamewars please ;-) My 2 cents.

Stefan Hoffmeister wrote:
: On Sun, 22 Apr 2001 18:39:23 +0200 (CEST), Peter Münster wrote:
: > I would like to know if this is Linux-specific, or whether I can crash (or freeze) any Unix (Solaris, for example) just by eating a lot of memory?

: "Unprotected" systems will at least suffer from DoS attacks.

: > Where does this problem come from?

: Lack of ulimit'ing.
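For what it's worth, below is a small C sketch of what such a limit looks like from inside a program - roughly the same effect as running "ulimit -v" in the shell before starting the process. The 64 MB figure is only an example value, and the whole thing is just an illustration, not a recommendation:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      /* Example value only: cap the process address space at 64 MB. */
      rl.rlim_cur = 64UL * 1024 * 1024;
      rl.rlim_max = 64UL * 1024 * 1024;
      if (setrlimit(RLIMIT_AS, &rl) != 0) {
          perror("setrlimit");
          return 1;
      }

      /* With the limit in place, an oversized request fails cleanly
         instead of eating all VM on the box. */
      if (malloc(128UL * 1024 * 1024) == NULL)
          printf("large malloc refused, as intended\n");

      return 0;
  }

With a per-process limit like that, my WAV editor above would presumably have failed to load the second file instead of taking the whole box down.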
: > When a program calls malloc (or something like that), I think the system should return an error if there is not enough memory.

: When is "not enough memory" really "not enough memory"? If you fork an application with a code size of 5 MB a hundred times - do you need 5 MB of "memory", or do you need 500 MB?

: Well, Linux thinks that in the above scenario 5 MB of memory is perfectly sufficient, even if your system only provides a total of 100 MB of virtual memory.

: IOW, Linux overcommits on the amount of memory it actually has - it will give you an infinite amount of memory (relatively speaking :->) while it only has a fixed amount of backing virtual memory at its disposal.
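To make the fork scenario concrete, here is a little sketch (the sizes and names are mine, purely illustrative): the parent touches a 5 MB buffer and then forks children that never write to it. Thanks to copy-on-write the children keep sharing the parent's pages, so the kernel does not need 5 MB per child up front - which is exactly what makes overcommitting so tempting:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/wait.h>

  #define BUF_SIZE  (5UL * 1024 * 1024)   /* 5 MB, as in the example above */
  #define NCHILDREN 10

  int main(void)
  {
      int i;
      char *buf = malloc(BUF_SIZE);

      if (buf == NULL) {
          perror("malloc");
          return 1;
      }
      memset(buf, 'x', BUF_SIZE);   /* make sure the pages are really backed */

      for (i = 0; i < NCHILDREN; i++) {
          if (fork() == 0) {
              /* Child: never writes to buf, so it keeps sharing the
                 parent's pages.  Watch RSS in top/ps while it sleeps. */
              sleep(30);
              _exit(0);
          }
      }
      for (i = 0; i < NCHILDREN; i++)
          wait(NULL);
      return 0;
  }

Only when a child starts writing to its copy of the buffer do the shared pages get duplicated, and only then does the real demand creep towards 10 x 5 MB.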
: > It seems that Linux does *not* return an error (or only does so too late).

: Current released kernels (AFAIK) will always overcommit. If a situation arises where more virtual memory is required than the system can deliver, a process gets killed (the "OOM killer").
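That matches what I saw on my box. A crude test program along these lines (run it on a test machine only, it is designed to exhaust memory!) shows the behaviour: malloc() keeps succeeding well past the point where RAM plus swap could back the allocations, and the failure only appears once the pages are actually touched - at which point the OOM killer (or, as in my case, endless thrashing) takes over:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      size_t chunk = 16UL * 1024 * 1024;   /* 16 MB per request */
      unsigned long total_mb = 0;
      char *p;

      for (;;) {
          p = malloc(chunk);
          if (p == NULL) {
              /* The "polite" failure one might expect. */
              printf("malloc failed after %lu MB\n", total_mb);
              return 0;
          }
          memset(p, 1, chunk);   /* force the kernel to back the pages */
          total_mb += 16;
          printf("committed %lu MB\n", total_mb);
      }
  }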
: I believe an experimental patch has recently been published that allows overcommitting to be turned off. I, for one, would turn overcommitting off immediately if it finds its way into the kernel.
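For completeness - and this is from memory, so please correct me if I am wrong - stock 2.4 kernels already expose /proc/sys/vm/overcommit_memory, but as far as I know that knob only switches between a heuristic check (0) and always-overcommit (1); the strict "never overcommit" accounting is what the experimental patch would add. A trivial snippet to check the current setting:

  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
      int mode;

      if (f == NULL || fscanf(f, "%d", &mode) != 1) {
          perror("overcommit_memory");
          return 1;
      }
      fclose(f);
      printf("vm.overcommit_memory = %d\n", mode);
      return 0;
  }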