Hi,

I just wrote a shell script which looks like this:

    while true
    do
      $0
    done

I executed it as a normal user, and then the following happened: as you can imagine, very many shells were started (I wasn't able to count them because the system wasn't responding any more). And then the system started killing system processes like X and smbd. I got the following output on console 10:

    Apr 23 09:11:54 AlBundy kernel: VM: killing process kmail
    Apr 23 09:12:52 AlBundy kernel: VM: killing process smbd
    Apr 23 09:13:03 AlBundy kernel: VM: killing process smbd
    Apr 23 09:13:05 AlBundy kernel: VM: killing process xconsole
    Apr 23 09:13:13 AlBundy kernel: VM: killing process X

The system recovered itself by killing X. That worked because I started the script from a shell in KDE, but if the script were started within a telnet session, it could be more dangerous.

I don't know if this is a security hole, but it might be.

My system:
SuSE 7.0 (kernel 2.2.18), lots of updates and patches installed
PII 350 MHz, 320 MB RAM

Peer-Christoph Mettelem
BezRegMS (NRW, Germany)
Software developer (trainee)

PS: This is my first mail to the mailing list. Sorry if it's OT or something...
Let me guess: you did this as root. Oh my god, surprise surprise. Learn about imposing limits via PAM (hint: www.sysadminmag.com).

Kurt Seifried, seifried@securityportal.com
Securityportal - your focal point for security on the 'net
No, I was a normal user. That's why I was so surprised.
On Mon, 23 Apr 2001, Peer-Christoph Mettelem wrote:

Hi, this is rather old. I posted a similar program to Bugtraq years ago, and the common thought among kernel developers was "you can bring every unix to its knees when you have shell access". You can try to set limits, but there are a lot of resources which can be exhausted. To effectively prevent such 'attacks', use the "userdel" program, which was written for such purposes.

Sebastian
--
~
~ perl self.pl
~ $_='print"\$_=\47$_\47;eval"';eval
~ krahmer@suse.de - SuSE Security Team
~
Yo,
To effectively prevent such 'attacks', use the "userdel" program which was written for such purposes.
Yeah, disconnecting the power is just as useful.
From the man page:
CAVEATS
    userdel will not allow you to remove an account if the user is currently
    logged in. You must kill any running processes which belong to an account
    that you are deleting.

I think this one is too easy, and something should be done, especially if "this one is rather old". I've not heard one single argument why this couldn't and shouldn't be fixed. If the kernel is killing processes, it might just as well try to locate the offending PID and kill that tree (children included). I wouldn't care if the kernel temporarily halted all user processes for a few seconds while it sat down and thought about something effective. A procedure could be:

1) Detect resource depletion
2) Prevent any user-process resource consumption
3) Count that resource for all PIDs
4) Add the resource count of all children to the parent (and again, all the way up to the root)
5) Walk the parent -> child tree and look for one PID that suddenly has more than 50% of the resource
6) Kill that process tree
7) Resume normal operation

Or something...

CIAO, Peter
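(For illustration only -- a rough userspace approximation of steps 3) to 6), assuming a procps-style ps; the 50% threshold and the single-level child count are simplifications of what a kernel-side version would have to do atomically:)

    #!/bin/sh
    # Sketch: find the parent whose children dominate the process table
    # and kill that tree. Not the kernel fix proposed above, just the idea.

    total=$(ps ax -o pid= | wc -l)

    # 3)+4) count children per parent PID, 5) pick the biggest offender
    hog=$(ps ax -o ppid= | sort -n | uniq -c | sort -rn | head -1 | awk '{print $2}')
    nkids=$(ps ax -o ppid= | awk -v p="$hog" '$1 == p' | wc -l)

    # 6) if that subtree holds more than half of all processes, kill it
    if [ $((nkids * 2)) -gt "$total" ] ; then
        echo "Killing process tree rooted at PID $hog"
        kill -STOP "$hog"      # freeze the root first so it cannot re-fork
        for kid in $(ps ax -o pid= -o ppid= | awk -v p="$hog" '$2 == p {print $1}') ; do
            kill -KILL "$kid"  # a real version would recurse into grandchildren
        done
        kill -KILL "$hog"
    fi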
Hi!
To effectively prevent such 'attacks', use the "userdel" program which was written for such purposes.
Yeah,
Disconnecting the power is just as useful.
From the man page:
CAVEATS
    userdel will not allow you to remove an account if the user is currently
    logged in. You must kill any running processes which belong to an account
    that you are deleting.
Use the script "slay" (usage: slay user) for that purpose. I attached it to this mail. Note that I did not write this script; someone named Chris Ausbrooks did.

Cheers,
Yuri.
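(The attachment does not survive in this archive. For illustration, a minimal slay-style script -- emphatically not Chris Ausbrooks' original -- might look like this, run as root:)

    #!/bin/sh
    # slay: kill every process belonging to a user, so userdel can proceed
    user="$1"
    [ -n "$user" ] || { echo "usage: $0 user" >&2 ; exit 1 ; }

    pids=$(ps aux | awk -v u="$user" '$1 == u {print $2}')
    if [ -n "$pids" ] ; then
        kill -STOP $pids    # freeze everything first so nothing re-forks
        kill -KILL $pids
    fi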
On Wed, 25 Apr 2001, Peter van den Heuvel wrote:
This is untested, but my reading of the manpages leads me to believe that this trivial fork bomb would be stopped dead by a simple inclusion of `ulimit -Hu 100` in /etc/profile.

--
Rick Green
"I have the heart of a little child, and the brain of a genius.
... and I keep them in a jar under my bed"
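(For the record, an entry along these lines would do it -- the threshold of 100 is arbitrary, and root is deliberately exempted, see Roman's warning further down the thread:)

    # in /etc/profile (illustrative):
    if [ "`id -u`" != "0" ] ; then
        ulimit -Hu 100   # hard limit: at most 100 processes for this user
    fi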
On Wed, 25 Apr 2001, Rick Green wrote:
This is untested, but my reading of the manpages leads me to believe that this trivial fork bomb would be stopped dead by a simple inclusion of `ulimit -Hu 100` in /etc/profile .
À propos /etc/profile: is it *always* certain that limits set in this file will be applied, or can a user avoid its execution by other means (changing the default shell, at- or cron-jobs, ...)?

Peter
--
Peter Münster http://notrix.net/pm-vcard
It is _not_ ensured.
For cron and at you'd need to run the respective daemons with the limits
already in place; they pass their limits on to their children.
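(Illustrative sketch of that approach -- the path and the value are assumptions; the point is that the limit is set before the daemon starts, so every job inherits it:)

    # e.g. in the init script that starts cron:
    ulimit -Hu 200      # hard per-user process limit
    /usr/sbin/cron      # all cron jobs now inherit the limit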
Also, if you run a command like this:
ssh remotehost -l remoteuser do_nasty_things
then /etc/profile isn't read either. So the whole thing is a bit tricky
to get right. As Sebastian pointed out already: userdel is your friend. If
that doesn't work with your business model, use very restrictive methods
to get rid of such people. Real BOFHs know some methods...
Thanks,
Roman.
--
Roman Drahtmüller
What about /etc/security/limits.conf ?
Bjoern Engels
Exactly. This is why I pointed people at my article in Sysadmin on PAM. Also, I have updated the LASG a bit (finally!): http://www.securityportal.com/lasg/users/

Kurt Seifried, seifried@securityportal.com
Securityportal - your focal point for security on the 'net
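(For illustration, entries along these lines in /etc/security/limits.conf would do it, provided pam_limits is enabled in the relevant PAM service files; the group name and values are assumptions, and using a group domain instead of "*" keeps root out of harm's way:)

    # /etc/security/limits.conf
    # <domain>  <type>  <item>  <value>
    @users      hard    nproc   100      # max number of processes
    @users      hard    rss     65536    # max resident set size (KB)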
I have updated the LASG a bit (finally!): http://www.securityportal.com/lasg/users/
Is there a single-file download? I'd like to print that stuff.
Bjoern Engels
No. Why? I don't want people printing it. Why? Dead trees suck. Printed docs go out of date. The online version will ALWAYS be available at www.seifried.org/lasg/ so there is no worry about it going bye-bye. If you wanna print, you gotta do it the hard way =)
Kurt Seifried, seifried@securityportal.com
Securityportal - your focal point for security on the 'net
Hello, isn't there a limit on the number of processes and the size of memory that you can set in pam.conf? I actually limited it to around 5 or something for a user (aka me!) and found I could not log in :)

regards,
omicron
--
****** omicron
Mail: omicron@omicron.dyndns.org (Sridhar N)
www: omicron.symonds.net
pubkeys: omicron.symonds.net/pubkeys
C O G I T O   E R G O   S U M
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sense and reason is the limit. 5 might not be enough to run
through /etc/profile.
If you configure such limits, always make sure that root is not affected
by these limits. Otherwise you have to reboot to get rid of the
constraints, since the limits are inherited.
Roman.
--
Roman Drahtmüller
On Fri, 27 Apr 2001, Roman Drahtmueller wrote:
Sense and reason is the limit. 5 might not be enough to run through /etc/profile.

Yeah... I learnt it, and I won't forget it. But then no amount of docs would have made me learn it. I only know one way of learning -- SCREW UP!
If you configure such limits, always make sure that root is not affected by these limits. Otherwise you have to reboot to get rid of the constraints since the limits get inherited.

I'm not on a production machine, I use my home PC. As long as I remember the LILO password, I can get out of most of these situations... well, I confess, I *do* play as root. Screwed up the fs a couple of times, reinstalled and played again :-) What do you think of snort, iptables, lids and sudo on a single computer? Either I'm mad, or I'm paranoid, or I don't know any of them. All of them are partly right. No other way of learning for me.
cheers,
omicron
--
****** omicron
Mail: omicron@omicron.dyndns.org (Sridhar N)
www: omicron.symonds.net
pubkeys: omicron.symonds.net/pubkeys
C O G I T O   E R G O   S U M
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Yo,
A little something that might help dealing with situations like these... I'm doing admin for mostly remote machines (cipe tunnels, very restrictive firewall rules), and playing with config items like the ones under discussion always makes me very nervous. Recovering from a lock-myself-out situation might easily force me to fly to Canada (living in the Netherlands).

So I started using a little procedure: copy the config to /tmp, edit, run, move back if OK. If I get locked out, I can always call for someone to play with the power switch. That procedure got a little more formalized, and I added a "sleep", followed by a "shutdown -r", to the end of the file. If everything was OK, I would be able to hit ^C, remove the sleep-reboot and move the file back. Because I once forgot to remove these (luckily at home), I made a little improvement, and now the end of my /etc/rc.d/rc.firewall (one of the most often changed config items) reads:

    # This is completely free software to anyone, except for security agencies who must
    # first mail me the phone number where they can be reached between 22:00 and 08:00.
    if [ `basename $0` != "rc.firewall" ]
    then
      echo "Script started as $0."
      echo "You have one minute to show you are still effectively connected by hitting ^C."
      echo "If you do not kill this script within one minute the machine will reboot."
      for Loop in 60 50 40 30 20 10
      do
        echo
        echo -n "Hit ^C now! Reboot in ${Loop} seconds!"
        sleep 10
      done
      echo
      echo
      echo "Rebooting machine..."
      shutdown -r 0
    fi

I always copy this code and paste in the config's filename, so there's minimal chance of mistakes. Now I can safely copy to test.firewall, add "iptables -I INPUT -i ${MyOnlyWayIn} -j DROP" to the top of the INPUT chain, run it, and then (some two minutes later) decide against it and toss the added security.

CIAO, Peter
* Peter van den Heuvel wrote on Sat, Apr 28, 2001 at 10:07 +0200:
echo "You have one minute to show you are still effectively connected by hitting ^C." echo "If you do not kill this script within one minute the machine will reboot."
I don't think that this is reliable. If the shell dies for some reason, which might be forced by SSH (sometimes a bad network packet may close an SSH connection), the script gets interrupted. To schedule a reboot, you may use "shutdown -r +1"; after restarting the firewall, use "shutdown -c". This may cause confusion on non-dedicated firewalls where users are logged in.

Another thing would be to improve the shell script a little. I have played with this idea quite a while, but I haven't found time to implement it. The idea: the first action of the firewall script is to launch a subprocess. This should run "nohup" in its own session, like:

    ( nohup setsid $WATCHER_SCRIPT 2>&1 | logger $LOGPARA ) &

After that, the firewall script starts. On error, the firewall script dies, which needs to be detected by the WATCHER_SCRIPT. In this case the watcher inserts an "allow-SSH" rule with highest precedence into ipchains (ipchains -I $SSH_RULE) or similar. On success, the firewall script ends, too. The watcher needs to detect this as well. In this case, an SSH access test may be performed (ipchains -C $SSH_SOURCE -D any:22 -i ...). If that fails, an SSH_RULE gets inserted.

A better way is to automate the firewall restart more completely by remote, like:

    set +e
    ssh $HOST $FIREWALL_START
    ssh $HOST $FIREWALL_WATCHERKILL

In this case the firewall launches a WATCHER_SCRIPT like in the first example. The watcher waits 2 minutes and inserts an SSH_RULE under (nearly :)) all circumstances. The second script (or even just a command line) can get executed only if FIREWALL_START succeeded and a second SSH connect (still) works. The WATCHER_KILL sends a signal to the watcher, i.e.

    killall -INT $WATCHER_SCRIPT

The WATCHER_SCRIPT catches this signal. Then it looks for running instances of FIREWALL_START to avoid race conditions. If FIREWALL_START timed out (maybe 1 minute), it gets a TERM and 10 secs later a KILL. In this case an SSH_RULE gets inserted and an error gets reported in some way. If WATCHER_SCRIPT sees that there are no running instances of FIREWALL_START (the normal case), it has nothing to do: SSH is working, otherwise it wouldn't even have started. WATCHER_SCRIPT does exit 0.

I would like to hear some comments about this. I think this is not difficult to implement, and it seems to be reliable even across breaking network connections (like "network unreachable", which might stop a script that waits for CTRL-C or so). Did I miss some conditions? Is anyone interested in doing such a thing together with me (as open source)?

oki,
Steffen
--
This message was generated by machine and therefore bears neither signature nor seal.
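(Sketch of the scheduled-fallback idea mentioned above; the delay is arbitrary, and nohup guards against the SSH session itself dying:)

    nohup shutdown -r +2 &      # schedule a fallback reboot in two minutes
    /etc/rc.d/rc.firewall       # apply the new rules
    # ... reconnect via SSH to prove we are not locked out, then:
    shutdown -c                 # cancel the pending reboot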
* Steffen Dettmer wrote on Sat, Apr 28, 2001 at 16:44 +0200:
The idea:
ssh $HOST $FIREWALL_START ssh $HOST $FIREWALL_WATCHERKILL
I implemented such a thing. It seems to be reliable and usable.
First tests looked good. To share the results, here are some notes and
details (some parts [...] cut):
1. launching the watcher, in bash shell code:
function launch_watcher()
{
#make sure we are the only instance:
[...]
#launch watcher ($0 watcher)
{
export caller=$0;
export WATCH_ME=$$;
export TIMEOUT;
nohup setsid $0 watcher 2>&1 | $LOGGER &
last=$!
if [ "$PIPESTATUS" != "0" ] ; then
echo "error launching watcher process!";
echo "error launching watcher process!" | $LOGGER;
exit 1;
fi
} &
sleep 2;
#now we need to check if the watcher is still running (it
# exits immediately on error)
ps ax | egrep "\? .* $0" | egrep -v "(grep|$$)" > /dev/null
if [ "$?" != "0" ] ; then
echo "Watcher already DIED!"
echo "check syslog!"
test -r /var/log/messages && tail /var/log/messages
exit 1;
else
echo "Watcher session is running."
fi
}
2. The watcher itself:
function watcher()
{
function signal_handler()
{
echo "Watcher: Signal \"OK\" caught --> Exiting."
exit 0;
}
#called correctly?
[...]
trap "signal_handler" SIGUSR1
#give firewall some time to set up rules
[...]
#check if firewall is still running and kill in this case
if kill -0 $WATCH_ME 2>/dev/null ; then
echo "Watcher: $WATCH_ME alive...TIMED OUT. KILLING IT NOW."
kill -TERM $WATCH_ME 2>/dev/null
[...]
sleep 1
force_ssh_open_direct;
echo "Watcher: exiting."
exit 1
fi
echo "Watcher: waiting for OK..."
#give admin some time to call "firewall ok" which kills
# this process. After that time we open SSH
for (( n=0 ; n < TIMEOUT ; n++ )) ; do sleep 1 ; done
#no "OK" arrived in time --> open SSH as a fallback
force_ssh_open_direct;
}
* Steffen Dettmer wrote on Sat, Apr 28, 2001 at 21:01 +0200:
* Steffen Dettmer wrote on Sat, Apr 28, 2001 at 16:44 +0200:
The idea:
ssh $HOST $FIREWALL_START ssh $HOST $FIREWALL_WATCHERKILL
I implemented such a thing. It seems to be reliable and usable. First tests looked good. To share the results, here are some notes and details (some parts [...] cut):
Some recommendations:
- first, if the same script is used to start firewalling at
bootup via rcinit (rc2.d/S04firewall or whatever), it's not a
good idea to launch the watcher.
- second, I found an easy improvement: if the firewall
"start" is finished, another signal is sent to the watcher. If
the watcher detects that the script is finished (or died)
before receiving this signal "DONE", it opens SSH immediately
(SIGUSR1 is used to signal "DONE", SIGUSR2 is used to signal "OK".
It's surprising what modern scripting is able to do. Anyway, it would
be nicer to use perl: faster and more robust and so on...)
updated code excerpts follow:
1. check if called by rcinit (I know it's trivial, but it requires some
testing, so here is my version:)
BASENAME=${0##*/} #longest match ARGV[0]
RCLINK=${BASENAME%%[SK][0-9][0-9]*} #longest match "S04*" from end
#RCLINK is empty if this matched
if [ -z "$RCLINK" ] ; then
RCLINK="yes" #called as i.e. S04firewall via init
else
RCLINK="no" #called by other name
fi
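(A worked example of the two expansions above, for readability:)

    # BASENAME=S04firewall : "[SK][0-9][0-9]*" matches the whole name as a
    #                        suffix, so RCLINK="" --> "yes" (started by init)
    # BASENAME=firewall    : no suffix matches the pattern, so
    #                        RCLINK="firewall" --> "no" (started by hand)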
2. modified launch_watcher, with the following condition check inserted:
function launch_watcher()
{
[...]
#check if called via rc.d/[SK]xxfirewall (by init)
# in this case no watcher is needed
if [ "$RCLINK" = "yes" ] ; then
echo "[called by init --> no watcher session needed]"
return
fi
[...old code...]
}
3. new function watcher_done (which is like kill_watcher, but sends
SIGUSR1 and will not kill the watcher process with TERM):
function watcher_done()
{
[...get PIDs via ps|awk or whatever...]
#send USR1 ("DONE")
for PID in $PIDS ; do
kill -USR1 $PID
done;
}
4. kill_watcher renamed to watcher_ok; modified:
function watcher_ok()
{
#make sure we are the only instance except watchers:
[...]
#search for other instances (watchers)
[...ps|awk or whatever...]
#send USR2 ("OK")
for PID in $PIDS ; do
kill -USR2 $PID
done;
[...ps|awk or kill -0 PID or whatever...]
#send KILL if still alive
[...]
}
5. The improved watcher:
function watcher()
{
DONE="no"
function signal_handler_USR1()
{
echo "Watcher: Signal \"DONE\" caught."
DONE="yes" #remeber this state
}
function signal_handler_USR2()
{
echo "Watcher: Signal \"OK\" caught --> Exiting."
#maybe we got "OK" before "DONE"
if [ "$DONE" != "yes" ] ; then
echo "Warning, we are not DONE!"
fi
exit 0;
}
[...]
trap "signal_handler_USR1" SIGUSR1
trap "signal_handler_USR2" SIGUSR2
[...]
#give firewall some time to set up rules
for (( n=0 ; n < TIMEOUT ; n++ )) ; do
[...]
done
[...]
}
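(To round it off, remote usage in the spirit of the original two-ssh idea might look like this -- the host name, script path and subcommands are assumptions:)

    # from the admin workstation:
    ssh fw-host /etc/rc.d/rc.firewall start   # launches watcher, applies rules
    # if this second connection still works, we are not locked out:
    ssh fw-host /etc/rc.d/rc.firewall ok      # sends "OK" (SIGUSR2) to the watcher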
On Wed, 25 Apr 2001, Peter van den Heuvel wrote:
Yo,
To effectively prevent such 'attacks', use the "userdel" program which was written for such purposes.
Yeah,
Disconnecting the power is just as useful.

No, it would require physical access :) I still prefer userdel.
I think this one is too easy and something should be done, especially if "this one is rather old". I've not heard one single argument why this couldn't and shouldn't be fixed.
Yes, although it's not easy. This issue comes up once or twice each year. Usually you give trusted people access to your machine; if they start fork-bombing or the like, it's reason enough to remove them. I don't know if Linux's tracking of who is using how much of what suffices now or should be extended. Just decreasing or disallowing things isn't a solution, because some security breaches have even come up due to 'just decrease' behavior (as seen with the old capability bug, for example).

bye,
Sebastian
--
~
~ perl self.pl
~ $_='print"\$_=\47$_\47;eval"';eval
~ krahmer@suse.de - SuSE Security Team
~
Dear Peer-Christoph,

Compaq's Tru64 Unix guards itself against this by setting a limit on the number of processes that can be run simultaneously under a given uid. This limit (max-proc-per-user) is a kernel run-time configuration option (root is exempt). I'm sure Convex-OS had something similar when I used it back in the early 90s; it is something I would expect in a mature operating system.

It is true that you can never completely guard against problems of this nature, but I believe one should at least try. This particular problem is nasty because (a) it happens very quickly and (b) it is easy to do by mistake.

I administer a system used by hundreds of undergraduates and will be moving from Tru64 to Linux in the summer; I think it would be a very good thing if Linux protected itself against this kind of thing. My users don't deliberately try to kill the system (they would be lynched if they did), but they certainly make mistakes.

Bob
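(From memory, the Tru64 knob is queried and changed roughly like this; the exact attribute spelling should be checked against the sys_attrs_proc man page:)

    # query the current per-uid process limit
    sysconfig -q proc max_proc_per_user
    # change it at run time (root only)
    sysconfig -r proc max_proc_per_user=256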
==============================================================
Bob Vickers                          R.Vickers@cs.rhul.ac.uk
Dept of Computer Science, Royal Holloway, University of London
WWW: http://www.cs.rhul.ac.uk/home/bobv
Phone: +44 1784 443691
participants (12)

- Bjoern Engels
- Bob Vickers
- Kurt Seifried
- omicron
- Peer-Christoph Mettelem
- Peter Münster
- Peter van den Heuvel
- Rick Green
- Roman Drahtmueller
- Sebastian Krahmer
- Steffen Dettmer
- Yuri Robbers