Danny,

On Saturday 26 February 2005 09:15, Danny Sauer wrote:
> On Saturday 26 February 2005 10:33 am, Randall R Schulz wrote:
>> Danny,
>>
>> On Saturday 26 February 2005 08:33, Danny Sauer wrote:
>>> On Friday 25 February 2005 04:16 pm, Randall R Schulz wrote:
>>> [...]
>>>> (*) The one weakness I've experienced more than any other on my SuSE Linux system is its vulnerability to a rogue process consuming so much memory that everything else gets swapped out and it becomes impossible even to kill the errant process.
>>>
>>> Clearly, you need more memory. :) Most modern systems will accept 2 GB, if not 4 or more. You should have time to kill acroread before it fills up 2 GB of physical memory.
>>
>> I have 1 GB. Brute force cannot be the right way to address this problem.
>
> Maybe you have too much memory, then. The only machine I've ever had that problem with has 128 MB physical and 512 MB swap (and a particularly leaky server daemon, though I've yet to identify precisely which one - the machine's running SuSE 5.2 and really should just be updated, so I'm not investing time in fixing its problems). :) Well, my 1.5 GB machine hasn't had that problem, either. It must be you. :)

The upshot is that this is a genuine vulnerability that cannot be solved by throwing memory at the system.

> Well, if you're gonna make this a serious response, how about implementing per-process memory limits?

I'm not the one who signs every message with a cute slogan. Of course I'm serious. And I have considered using limits, and I know of the ulimit built-in for BASH. But that's really neither here nor there, because only rarely are these programs started via a command submitted to a shell. To be genuinely helpful, I need something with a wider scope than a limit set in a shell.
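
For illustration, here is roughly what the shell-scoped approach amounts to - a sketch assuming bash, an arbitrary 512 MB address-space cap, and acroread opening a made-up file name:

    # Cap the address space (in KB) for this subshell, then launch the hog.
    # Only the subshell and whatever it starts inherit the limit; anything
    # launched from the desktop, cron, or another shell never sees it.
    ( ulimit -v 524288; acroread big-file.pdf )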

> man bash, search for ulimit - presuming you're using bash. ...
>
> --Danny, who doesn't set limits, largely because of laziness

Randall Schulz