On Saturday 26 February 2005 11:14 am, Randall R Schulz wrote:
Danny,
On Saturday 26 February 2005 09:15, Danny Sauer wrote:
On Saturday 26 February 2005 10:33 am, Randall R Schulz wrote:
Danny,
On Saturday 26 February 2005 08:33, Danny Sauer wrote:
On Friday 25 February 2005 04:16 pm, Randall R Schulz wrote: [...]
(*) The one weakness I've experienced more than any other on my SuSE Linux system is its vulnerability to a rogue process consuming so much memory that everything else gets swapped out and it becomes impossible to even kill the errant process.
Clearly, you need more memory. :) Most modern systems will accept 2GB, if not 4 or more. You should have time to kill acroread before it fills up 2GB of physical memory.
I have 1 GB. Brute force cannot be the right way to address this problem.
Maybe you have too much memory, then. The only machine I've ever had that problem with is one with 128MB physical and 512MB swap (and a particularly leaky server daemon, though I've yet to identify precisely which one - the machine's running SuSE 5.2 and really should just be updated, so I'm not investing time in fixing its problems). :) Well, my 1.5GB machine hasn't had that problem either. It must be you. :)
The upshot is that this is a genuine vulnerability that cannot be solved by throwing memory at the system.
Well, if you're gonna make this a serious response, how about implementing per-process memory limits?
I'm not the one who signs every message with a cute slogan. Of course I'm serious.
I'm glad you think they're cute. :)
I have considered using limits, and I know of bash's ulimit built-in. But that's really neither here nor there, because these programs are only rarely started via a command submitted to a shell. To be genuinely helpful, I need something with a wider scope than a limit set in one shell.
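(For reference, this is all a bash-level limit amounts to; the 512MB figure and the acroread invocation are purely illustrative:)

    # Cap the address space of this shell and everything it launches at
    # 512 MB (ulimit -v takes kilobytes). A runaway process then gets
    # failed allocations instead of dragging the whole box into swap.
    ulimit -v 524288

    # Children inherit the limit, so a leaky viewer started from this
    # shell is contained:
    acroread huge-document.pdf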
Really? How do you log in? On my system, the console is a shell, SSH logins are a shell, and common X sessions are all launched from shell scripts (startkde, gnome-session, .xinitrc, etc.). Ergo, all of the processes in question are launched from within a shell at some level. Make that shell script set your limits, and you're set. Hint - most of those scripts don't use the --noprofile option to sh.

Theo's suggestion of pam_limits is probably easier to implement (and gives the opportunity to play with PAM's real power, which is cool in and of itself), but sticking a ulimit line or two in /etc/profile will have a very similar effect. Sticking it in ~/.profile should work as well, since "most" of these sessions are run as the user.

--Danny, who's sometimes serious even when one of these is present :)
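(For illustration, either route comes down to a line or two. A minimal sketch, where the 1 GB soft / 2 GB hard figures are arbitrary examples:)

    # In /etc/profile (or ~/.profile) -- picked up by login shells and,
    # on most setups, by the X session scripts mentioned above:
    ulimit -S -v 1048576          # soft cap: 1 GB of address space, in KB

    # The pam_limits route: entries in /etc/security/limits.conf, applied
    # to any service whose PAM session stack loads pam_limits.so:
    #
    # <domain>  <type>  <item>  <value>
    *             soft    as      1048576
    *             hard    as      2097152

A soft limit can still be raised from any shell, up to the hard limit, so a deliberately large job isn't locked out; the hard limit is the real ceiling.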