maximum nproc value
What is the maximum nproc value built into the Linux kernel? I want to
configure nproc in /etc/security/limits.conf as 20% of that maximum. What
is the maximum allowable value?

--
Warm regards,
Michael Green
On 2/2/06, Michael Green <mishagreen@gmail.com> wrote:
What is maximum nproc value built into linux kernel?
I want to configure nproc in /etc/security/limits.conf as 20% of max. What is the maximum allowable value?
Michael,

Technically, Linux 2.6 can be configured for a few million PIDs (see
/proc/sys/kernel/pid_max; the default is 32768), but since you're asking
about a per-user setting, use the following code:

  #include <stdio.h>
  #include <stdlib.h>
  #include <errno.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      long value;

      errno = 0;
      if ((value = sysconf(_SC_CHILD_MAX)) < 0) {
          if (errno != 0) {
              perror("sysconf error");
              exit(1);
          }
          fputs("_SC_CHILD_MAX not defined.\n", stdout);
      } else {
          printf("CHILD_MAX = %ld\n", value);
      }

      return 0;
  }

Compile with

  % cc nproc.c -o nproc

and invoke it with ./nproc. I have no access to a Linux box right now, but
it should work.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
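For comparison, here is a minimal sketch, assuming a Linux/POSIX system
with getrlimit() available, that queries the per-user limit the kernel
actually enforces, RLIMIT_NPROC; this is the value that
/etc/security/limits.conf (nproc) and 'ulimit -u' control, and it can
differ considerably from the _SC_CHILD_MAX recommendation:

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      /* RLIMIT_NPROC is the per-user process limit the kernel enforces;
         it is what limits.conf (nproc) and 'ulimit -u' manipulate. */
      if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
          perror("getrlimit");
          return 1;
      }

      if (rl.rlim_cur == RLIM_INFINITY)
          puts("soft NPROC limit: unlimited");
      else
          printf("soft NPROC limit: %ld\n", (long)rl.rlim_cur);

      if (rl.rlim_max == RLIM_INFINITY)
          puts("hard NPROC limit: unlimited");
      else
          printf("hard NPROC limit: %ld\n", (long)rl.rlim_max);

      return 0;
  }

It compiles and runs the same way as nproc.c above.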
On 2/2/06, Steve Graegert <graegerts@gmail.com> wrote:

Wow, such a quick reply...

  gene1:~ # vi nproc.c
  gene1:~ # cc nproc.c -o nproc
  gene1:~ # ./nproc
  CHILD_MAX = 999

That means that a user cannot fork more than 999 processes? Now suppose I
run a fork bomb - the machine will die right away. Why? Can't a modern
dual-CPU computer handle a little more than 999 processes?

--
Warm regards,
Michael Green
On 2/2/06, Michael Green <mishagreen@gmail.com> wrote:
On 2/2/06, Steve Graegert <graegerts@gmail.com> wrote:
Wow, such a quick reply...
gene1:~ # vi nproc.c
gene1:~ # cc nproc.c -o nproc
gene1:~ # ./nproc
CHILD_MAX = 999
that means that a user cannot fork more than 999 processes?
Now suppose I run a fork bomb, the machine will die right away. Why? Cannot modern dual cpu computer handle a little more than 999 processes?
That's quite a lot. A user should never, ever need so many processes. A
modern UNIX (and Linux) system can handle far more than 999 processes quite
reliably. But consider a multi-user system with, say, 50+ users, every one
of them creating around 1000 processes. It's a nightmare.

A fork bomb eats up all the process table entries and, due to its recursive
calls to fork(), creates a huge amount of overhead that can bring a system
down. It's not so much the raw number of processes that is difficult to
handle: a fork bomb creates children of its children of its children...,
which means that memory is eaten up, and even unwinding everything
afterwards to clean up is difficult. Limits like the one you asked about
help to prevent fork bombs.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
On Thursday 02 February 2006 10:28, Steve Graegert wrote:
Limits like the one you asked for help to prevent fork bombs.
\Steve
Could you be a little more specific, Steve? (Just kidding!) :-)

How common are these "fork bombs", and is this type of threat one that a
typical SUSE desktop user should worry about? How transportable is your
solution, Steve? Can it be implemented the same way in recent SUSE releases
(i.e. 9.2, 9.3 and 10.0)?

And thanks to both of you for raising and answering an extremely
interesting thread!

regards,
- Carl
On 2/2/06, Carl Hartung <suselinux@cehartung.com> wrote:
On Thursday 02 February 2006 10:28, Steve Graegert wrote:
Limits like the one you asked for help to prevent fork bombs.
\Steve
Could you be a little more specific, Steve? (Just kidding!) :-)
How common are these "fork bombs" and is this type of threat one that a typical SUSE desktop user should worry about? How transportable is your solution, Steve? Can it be implemented the same way in recent SUSE releases? (i.e. 9.2, 9.3 and 10.0?)
We have to distinguish between two settings that directly affect the
maximum number of processes a user can create. First of all there is the
shell: calling 'ulimit -u' in my bash returns 6143, which means bash won't
prevent a user from creating up to 6143 processes. And there is a
system-imposed limit (sysconf). The latter can be overridden.

I just ran a fork bomb on my system and have not been able to lock it up
(it ran for about 10 minutes). The number of processes did not exceed 6077.
SuSE does not set any limits by default, at least not on my 9.3 (2.6.13.x).

A fork bomb is usually of no concern to admins, since it does very little
harm, and most modern systems, notably Tru64 Unix, Solaris and others (I
suppose Linux among them), are immune to these kinds of attacks. If a fork
bomb locks such a system up, it is very likely the result of a bug in a
kernel subsystem, e.g. memory management. In 2005 Fedora systems were
affected by such a bug, and Debian, for example, has not yet shown
weaknesses related to local DoS attacks. Sometimes you can read articles
about someone having written a program or script that brought a system
down, blaming the developers for bad coding and the like. Most of the time
they are not even able to name the resource that was exhausted.

A fork bomb like this:

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      while (1)
          fork();
      return 0;
  }

will do no more than slow down a system; it will not lock it up.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
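As an aside, a safer way to probe the effective limit than an unbounded
fork loop is a bounded test that stops at the first fork() failure and then
cleans up after itself. A minimal sketch, assuming an nproc limit is
actually configured for the test user (without one, this too will simply
fill the process table):

  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <signal.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/wait.h>

  int main(void)
  {
      long count = 0;
      pid_t pid;

      for (;;) {
          pid = fork();
          if (pid == 0) {      /* child: just block until terminated */
              pause();
              _exit(0);
          }
          if (pid < 0) {       /* typically EAGAIN once the limit is hit */
              printf("fork() failed after %ld children: %s\n",
                     count, strerror(errno));
              break;
          }
          count++;
      }

      /* Clean up: ignore SIGTERM in the parent, then signal the whole
         process group; the children still have the default disposition
         and terminate, and the parent reaps them. */
      signal(SIGTERM, SIG_IGN);
      kill(0, SIGTERM);
      while (wait(NULL) > 0)
          ;
      return 0;
  }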
Steve Graegert wrote:
A fork bomb like this:
#include <stdio.h>
int main(void) { while (1) fork(); return (0); }
will do no more than slow down a system; it will not lock it up.
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console. /Per Jessen, Zürich
On 2/2/06, Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
A fork bomb like this:
#include <stdio.h>
int main(void) { while (1) fork(); return (0); }
will do no more than slow down a system; it will not lock it up.
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry to hear that. I've used this code for years in trainings on a couple
of platforms. Never tried it on an Intel box running Linux prior to 2.6.
Can hardly believe that 2.4 can be compromised that easily :-)

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
Steve Graegert wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry, to hear that. I've used this code for years in trainings on a couple of platforms. Never tried that on an Intel box running Linux prior to 2.6. Can hardly believe that 2.4 can be compromised that easily :-
What's interesting is - it reported CHILD_MAX = 999, yet your bit of code was allowed to start 7000+ processes? (see my other posting) This is not an area I've ever looked into - do I need to enable something or other in order to have a cap on the number of processes? /Per Jessen, Zürich
On 2/3/06, Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry, to hear that. I've used this code for years in trainings on a couple of platforms. Never tried that on an Intel box running Linux prior to 2.6. Can hardly believe that 2.4 can be compromised that easily :-
What's interesting is - it reported CHILD_MAX = 999, yet your bit of code was allowed to start 7000+ processes? (see my other posting) This is not an area I've ever looked into - do I need to enable something or other in order to have a cap on the number of processes?
There are two settings which affect the maximum number of procs per real
user id:

1. /etc/security/limits.conf tells the kernel what and how many resources a
user/group can use on a particular system. It can be seen as a quota.

2. The shell's ulimit (man bash [on my 9.3]) determines the upper limit for
certain resources like memory, processes and the like. Some shells (like
sh) do not support configuring all resources. 'ulimit -u' gives 6143 on my
system, thus allowing 6143 processes per user. When I disable any limits on
this resource in limits.conf (meaning unlimited), I am able to create this
number of procs (see my previous posts). I usually limit this resource to
64 for regular users, and bash was then not able to create more than 64
processes. ulimit does not override limits.conf.

The sysconf(_SC_CHILD_MAX) value is a POSIX limit that can be overridden by
the kernel and by the application writer (it's intended for developers, to
encourage them to write portable code). As written earlier, from the purely
numerical point of view the PID space allows far more processes than a
system can realistically handle. If someone wants to find a reasonable max
value for the number of procs per real user id, one should query sysconf
and use the returned value as the maximum setting. 'Maximum' is by no means
optimal, it's just an upper limit.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
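To make the relationship between the two settings concrete, here is a
minimal sketch, assuming Linux/POSIX getrlimit()/setrlimit(), that lowers
the soft per-user process limit for the current process and its children,
much as 'ulimit -Su' does; the soft value can be moved freely below the
hard value coming from limits.conf, but never above it:

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
          perror("getrlimit");
          return 1;
      }
      printf("soft = %ld, hard = %ld\n",
             (long)rl.rlim_cur, (long)rl.rlim_max);

      /* Lower the soft limit to 64 processes (like 'ulimit -Su 64').
         Raising it beyond rl.rlim_max would fail for an unprivileged
         process, which is why ulimit cannot override limits.conf. */
      rl.rlim_cur = 64;
      if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
          perror("setrlimit");
          return 1;
      }
      puts("soft NPROC limit is now 64 for this process and its children");
      return 0;
  }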
Steve Graegert wrote:
1. /etc/security/limits.conf tells the kernel what and how much resources a user/group can use on a particular system. It can be seen as a quota
I have a couple of questions on this. The /etc/security/limits.conf file
and ulimit seem to only limit the number of processes per user. Can you
also limit the number of processes that the system itself is allowed to run
concurrently? I realize that in most cases this would cause undesirable
effects, but I am thinking about this from a security standpoint. Say you
benchmark a web server and determine the maximum number of processes
needed. You could then impose a limit to help contain remote code execution
or buffer overflow exploits, because new processes would not be allowed to
start. (OT: or is there a way to create a whitelist of allowed processes?)

Also, are changes to the limits.conf file immediate, or does a service need
to be restarted for any changes to take effect? You could create a script
that oversees requests for processes, checks each request against a
whitelist, then updates the limits.conf file to allow an additional
process. Is this a good idea, or is my logic flawed?

- James W.
On 2/3/06, James Wright <jwright01@tds.net> wrote:
Steve Graegert wrote:
1. /etc/security/limits.conf tells the kernel what and how much resources a user/group can use on a particular system. It can be seen as a quota
I have a couple of questions on this. The /etc/security/limits.conf file and the ulimit seem to only limit the amount of processes per user. Can you also limit the amount of processes that this system itself is allowed to concurrently run?
No, I don't think so. /etc/security/limits.conf only allows per-user (or
per-group) settings; there is no system-wide total. You can, however, apply
the same per-user limit to all users with a wildcard. Simply add

  *    hard    nproc    128

to limits.conf. It won't allow any single user to create more than 128
processes.
(OT, or is there a way to create a white list of allowed processes?).
No.
Also, are changes to the limits.conf file immediate, or does a service need to be restarted for any changes to take effect?
They are not immediate, but no reboot or service restart is needed:
limits.conf is read by PAM (pam_limits) when a session is opened, so new
values take effect at the user's next login.
You could create a script that oversees requests for processes, check the request against a white list, then update the limits.conf file to allow an additional process. Is this a good idea, or is my logic flawed?
It won't work, since limits.conf is only read when a session starts; a
running session keeps the limits it was started with. See previous answer.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
On Friday, February 03, 2006 @ 4:11 AM, Steve Graegert wrote:
On 2/3/06, Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry, to hear that. I've used this code for years in trainings on a couple of platforms. Never tried that on an Intel box running Linux prior to 2.6. Can hardly believe that 2.4 can be compromised that easily :-
What's interesting is - it reported CHILD_MAX = 999, yet your bit of code was allowed to start 7000+ processes? (see my other posting) This is not an area I've ever looked into - do I need to enable something or other in order to have a cap on the number of processes?
There are two settings which affect the maximum number of procs per real user id:
1. /etc/security/limits.conf tells the kernel what and how much resources a user/group can use on a particular system. It can be seen as a quota.
2. The shells ulimit (man bash [on my 9.3]) determines the upper limit for certain resources like memory, processes and the like. Some shells do not support for configuration of all resources (like sh). 'ulimit -u' tells gives 6143 on my system, thus allowing 6143 processes per user. When I disable any limits on this resource in limits.conf (meaning unlimited) I am able to create this number of procs (see my previous posts). I usually limit this resource to 64 for regular users and bash was not able to create more than 64 processes. ulimit does not override limits.conf
The sysconf(_SC_CHILD_MAX) thing is a POSIX limit that can be overridden by the kernel and application writer (it's intended for developers to encourage them to write portable code). As written earlier, from the mathematical POV, systems support billions of processes, which does not mean that they are able to handle them. If someone wants to find a reasonable max value for the number of procs per real user id, one should query sysconf and use the returned value as the maximum setting. 'Maximum' is by no means optimal, it's just an upper limit.
\Steve
Wow! I had wanted to increase the max number of files and max processes on
my two user accounts in the past and, from reading this list, had gotten
the impression that that had to be done by changing kernel parameters and
recompiling the kernel. This was back under 8.1 Pro. I needed certain
processes to start with higher limits than the defaults, so I ended up with
a work-around where I used su -c to do a quick switch to root and start
those processes there, where I could raise the limits. I just tested
setting those limits in limits.conf and it appears to have worked like a
champ! Has this capability only come along with more recent versions of
SuSE, or was it around back in 8.1 and I just didn't dig it out?

Thanks,
Greg Wallace
On 2/3/06, Greg Wallace <gregwallace@fastmail.fm> wrote:
On Friday, February 03, 2006 @ 4:11 AM, Steve Graegert wrote:
On 2/3/06, Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry, to hear that. I've used this code for years in trainings on a couple of platforms. Never tried that on an Intel box running Linux prior to 2.6. Can hardly believe that 2.4 can be compromised that easily :-
What's interesting is - it reported CHILD_MAX = 999, yet your bit of code was allowed to start 7000+ processes? (see my other posting) This is not an area I've ever looked into - do I need to enable something or other in order to have a cap on the number of processes?
There are two settings which affect the maximum number of procs per real user id:
1. /etc/security/limits.conf tells the kernel what and how much resources a user/group can use on a particular system. It can be seen as a quota.
2. The shells ulimit (man bash [on my 9.3]) determines the upper limit for certain resources like memory, processes and the like. Some shells do not support for configuration of all resources (like sh). 'ulimit -u' tells gives 6143 on my system, thus allowing 6143 processes per user. When I disable any limits on this resource in limits.conf (meaning unlimited) I am able to create this number of procs (see my previous posts). I usually limit this resource to 64 for regular users and bash was not able to create more than 64 processes. ulimit does not override limits.conf
The sysconf(_SC_CHILD_MAX) thing is a POSIX limit that can be overridden by the kernel and application writer (it's intended for developers to encourage them to write portable code). As written earlier, from the mathematical POV, systems support billions of processes, which does not mean that they are able to handle them. If someone wants to find a reasonable max value for the number of procs per real user id, one should query sysconf and use the returned value as the maximum setting. 'Maximum' is by no means optimal, it's just an upper limit.
\Steve
Wow! I had wanted to increase the max number of files and max processes on my 2 user accounts in the past and, from reading on this list, had gotten the impression that that had to be done by changing kernel parameters and recompiling it. This was back under 8.1 Pro. I needed certain processes to start with higher limits than the defaults, so I ended up with a work-around where I actually used a su -C to do a quick switch to root to start those processes there, where I could up the limits. I just tested setting those limits in limits.conf and it appears to have worked like a champ! Has this capability only come along with more recent versions of SuSE, or was it around back in 8.1 and I just didn't dig it out?
This functionality is relatively old (I first saw it in 2002, bundled in a
PAM package - limits.conf is the configuration file of the pam_limits
module). I don't know its exact origin.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
On Fri, 03 Feb 2006 10:47:35 +0100 Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry, to hear that. I've used this code for years in trainings on a couple of platforms. Never tried that on an Intel box running Linux prior to 2.6. Can hardly believe that 2.4 can be compromised that easily :-
What's interesting is - it reported CHILD_MAX = 999, yet your bit of code was allowed to start 7000+ processes? (see my other posting) This is not an area I've ever looked into - do I need to enable something or other in order to have a cap on the number of processes?
Hard to know from this distance, but there are hard limits and soft limits
(though these can be the same), and the soft limit can't be above the hard
limit. Try "ulimit -u", "ulimit -Su" and "ulimit -Hu".

ken
--
"This world ain't big enough for the both of us,"
said the big noema to the little noema.
On 2/3/06, Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Sorry, to hear that. I've used this code for years in trainings on a couple of platforms. Never tried that on an Intel box running Linux prior to 2.6. Can hardly believe that 2.4 can be compromised that easily :-
What's interesting is - it reported CHILD_MAX = 999, yet your bit of code was allowed to start 7000+ processes? (see my other posting)
As stated earlier, the return value of _SC_CHILD_MAX is a recommendation
for software developers; it is the portable upper bound POSIX offers, since
there is no portable way to obtain the limit actually enforced per real
user id. It is a reasonable maximum for applications running on Linux (or
other POSIX-compliant) systems, not a hard limit.
This is not an area I've ever looked into - do I need to enable something or other in order to have a cap on the number of processes?
Yes, edit /etc/security/limits.conf and add the following line:

  # <domain>    <type>    <item>    <value>
  <username>    hard      nproc     128

with <domain> being a user name, a group (@groupname) or a wildcard (*).
This will prevent <username> from creating more than 128 processes. Be
aware that some environments (KDE, Gnome) can exceed this limit quite
easily.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
Steve Graegert wrote:
with username as a group, username or wildcard. This will prevent <username> to create more than 128 processes. Be aware that some environments (KDE, Gnome) can exceed this limit quite easily.
Thanks for the update - I'll have to look into that. For workstations I wouldn't bother with a process-limit, but it might be a safe precaution on some servers. /Per Jessen, Zürich -- http://www.spamchek.com/ - managed anti-spam and anti-virus solution. Let us analyse your spam- and virus-threat - up to 2 months for free.
Per Jessen wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Well, the local console was still responding, but after entering a single keystroke, I had to wait for about 5 mins before the next was accepted. I ended up rebooting that machine. It was not locked up as such, merely unusable and the fastest recovery was to reboot. It took about 1, maybe 2mins for it to accept the Ctrl-Alt-Del combination. Of course, this was still on 2.4 - it's quite possible 2.6 with proper pre-emption would have behaved much better. /Per Jessen, Zürich
On 2/2/06, Per Jessen <per@computer.org> wrote:
Per Jessen wrote:
I just ran the above via a remote konsole, and my 2-way machine certainly _appears_ to be locked up. I'll have to make my way to the computer-room to check out the local console.
Well, the local console was still responding, but after entering a single keystroke, I had to wait for about 5 mins before the next was accepted. I ended up rebooting that machine. It was not locked up as such, merely unusable and the fastest recovery was to reboot. It took about 1, maybe 2mins for it to accept the Ctrl-Alt-Del combination. Of course, this was still on 2.4 - it's quite possible 2.6 with proper pre-emption would have behaved much better.
Yes, true. I ran the code I posted an hour ago and it didn't do any harm. I
suppose the new O(1) scheduler has proven to scale well. Nevertheless, the
fork bomb is as old as the system call itself and should never cause a
system to lock up. Solaris 2.6+ and Tru64 Unix 4.1+ have never shown any
anomalies.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
Per Jessen wrote:
Well, the local console was still responding, but after entering a single keystroke, I had to wait for about 5 mins before the next was accepted. I ended up rebooting that machine. It was not locked up as such, merely unusable and the fastest recovery was to reboot. It took about 1, maybe 2mins for it to accept the Ctrl-Alt-Del combination.
Addendum: I've just now managed to reboot that box. I noticed 7206 active processes, and whilst the box may not have been _actually_ locked up, apart from allowing me to change virtual console once, it was completely unresponsive. /Per Jessen, Zürich
Michael Green wrote:
Now suppose I run a fork bomb, the machine will die right away. Why? Cannot modern dual cpu computer handle a little more than 999 processes?
Depends on the config. I have a 4-way IBM Netfinity with 2 Gb RAM - this
will do at least 700 processes whilst compiling the Linux kernel.

999 processes seem like a lot for a 2-way machine - depending on what those
processes do. Does your machine really die right away? Have you got enough
swap-space and memory?

Mind you, even when an IBM 3090-400J was a very big iron (1989/90), rumour
had it that 400 TSO users hitting Enter at the same time would bring it to
its knees.

/Per Jessen, Zürich
On 2/2/06, Per Jessen <per@computer.org> wrote:
Depends on the config. I have a 4-way IBM Netfinity with 2 Gb RAM - this will do at least 700 processes whilst compiling the Linux kernel.
We are talking about concurrent processes only. I cannot believe compiling the kernel requires ~700 processes running in parallel.
999 processes seem like a lot for a 2-way machine - depending on what those processes do. Does your machine really die right away?
Yes it does, and I'm sure yours will die too unless you have an nproc cap
in place, either at the shell level or via PAM.

--
Warm regards,
Michael Green
On 2/2/06, Michael Green <mishagreen@gmail.com> wrote:
On 2/2/06, Per Jessen <per@computer.org> wrote:
Depends on the config. I have a 4-way IBM Netfinity with 2 Gb RAM - this will do at least 700 processes whilst compiling the Linux kernel.
We are talking about concurrent processes only. I cannot believe compiling the kernel requires ~700 processes running in parallel.
Indeed, it only needs a couple of threads, but not multiple processes, at least not simultaneously.
999 processes seem like a lot for a 2-way machine - depending on what those processes do. Does your machine really die right away?
Yes it does, I'm sure yours will die too unless you have nproc cap either at the level of shell or PAM.
What are your kernel version, the shell you're using, and your current
ulimit settings? And do you have the code of the fork bomb you ran? I'd
really like to take a look at it. Thanks.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
Steve Graegert wrote:
We are talking about concurrent processes only. I cannot believe compiling the kernel requires ~700 processes running in parallel.
Indeed, it only needs a couple of threads, but not multiple processes, at least not simultaneously.
In kernel 2.4 a thread is a process. And if you compile the kernel with 'make -j bzImage', it's not about what make _needs_, but about how much work it can load the box with. /Per Jessen, Zürich
On 2/2/06, Per Jessen <per@computer.org> wrote:
Steve Graegert wrote:
We are talking about concurrent processes only. I cannot believe compiling the kernel requires ~700 processes running in parallel.
Indeed, it only needs a couple of threads, but not multiple processes, at least not simultaneously.
In kernel 2.4 a thread is a process. And if you compile the kernel with 'make -j bzImage', it's not about what make _needs_, but about how much work it can load the box with.
It was a major design flaw, which has been corrected with 2.6. Mapping each
thread onto a process just reduces complexity in the kernel. In 2.6, with
NPTL, threads are scheduled 1:1 onto kernel tasks, but with far less
overhead per thread. BTW: are we still talking about the fork bomb DoS
thing?

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
On 2/2/06, Steve Graegert <graegerts@gmail.com> wrote:
What's your kernel version, shell you're using, current ulimit settings. And, do you have the code of the fork bomb you ran? I'd really like to take a look at it. Thanks.
Mine is SLES9 SP1 (sorry for not posting this to the SLES list), kernel
2.6.5-7.139-smp; the hardware is an IBM eSeries blade (HS20) equipped with
dual Xeon 2.8, no hyperthreading, 2 GB RAM + 4 GB swap. I've just disabled
job dispatching to one of the blades and will try fork bombing it as soon
as the users' jobs drain from there. This may take a while, though.

--
Warm regards,
Michael Green
On 2/2/06, Steve Graegert <graegerts@gmail.com> wrote:
Yes it does, I'm sure yours will die too unless you have nproc cap either at the level of shell or PAM.
What's your kernel version, shell you're using, current ulimit settings. And, do you have the code of the fork bomb you ran? I'd really like to take a look at it. Thanks.
I just tested fork bombing the machine I've described (above) again by
running this simple bash fork bomb:

  g8:~ # :(){ :|:& };:

Within a couple of seconds it came to a grinding halt. No limits were
configured.

--
Warm regards,
Michael Green
On 2/5/06, Michael Green <mishagreen@gmail.com> wrote:
On 2/2/06, Steve Graegert <graegerts@gmail.com> wrote:
Yes it does, I'm sure yours will die too unless you have nproc cap either at the level of shell or PAM.
What's your kernel version, shell you're using, current ulimit settings. And, do you have the code of the fork bomb you ran? I'd really like to take a look at it. Thanks.
Just tested again fork bombing the machine I've described (above) by running this simple bash shell forkbomb g8:~ # :(){ :|:& };: and within a couple of seconds it immediately came to a grinding halt. No limits were configured.
If no limits are configured, it's no surprise that the system locks up. I
can't remember whether SuSE default installations have any limits
configured. Most other systems I've worked with have quite strict limits
imposed by default.

The kernel itself does not prevent users from creating as many processes as
they think they need. There is no built-in per-user limit, although the
total number of PIDs is bounded by /proc/sys/kernel/pid_max (32768 by
default on 2.6; it can be raised, up to roughly 4 million on 64-bit
systems).

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
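For reference, a minimal sketch, assuming a 2.6 kernel with /proc mounted,
that reads the system-wide PID ceiling mentioned above:

  #include <stdio.h>

  int main(void)
  {
      /* pid_max bounds the number of PIDs (and thus processes/threads)
         the whole system can have at once; it is tunable via sysctl. */
      FILE *fp = fopen("/proc/sys/kernel/pid_max", "r");
      long pid_max;

      if (fp == NULL) {
          perror("fopen /proc/sys/kernel/pid_max");
          return 1;
      }
      if (fscanf(fp, "%ld", &pid_max) == 1)
          printf("pid_max = %ld\n", pid_max);
      fclose(fp);
      return 0;
  }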
Michael Green wrote:
On 2/2/06, Per Jessen <per@computer.org> wrote:
Depends on the config. I have a 4-way IBM Netfinity with 2 Gb RAM - this will do at least 700 processes whilst compiling the Linux kernel.
We are talking about concurrent processes only.
Of course.
I cannot believe compiling the kernel requires ~700 processes running in parallel.
Hmm, I'm not sure now, maybe it was compiling gcc. I know I saw the number showing at least 750.
999 processes seem like a lot for a 2-way machine - depending on what those processes do. Does your machine really die right away?
Yes it does, I'm sure yours will die too unless you have nproc cap either at the level of shell or PAM.
What do your processes do? I'm tempted to try this out just to see what
happens.

/Per Jessen, Zürich
On 2/2/06, Per Jessen <per@computer.org> wrote:
Michael Green wrote:
On 2/2/06, Per Jessen <per@computer.org> wrote:
Depends on the config. I have a 4-way IBM Netfinity with 2 Gb RAM - this will do at least 700 processes whilst compiling the Linux kernel.
We are talking about concurrent processes only.
Of course.
I cannot believe compiling the kernel requires ~700 processes running in parallel.
Hmm, I'm not sure now, maybe it was compiling gcc. I know I saw the number showing at least 750.
When I compile my kernels, it does not use more than about 8 processes,
plus or minus 2, neither on Linux nor on other systems.

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
Michael Green wrote:
I cannot believe compiling the kernel requires ~700 processes running in parallel.
Compiling the kernel _requires_ just a single process, but if you build the
kernel to stress a machine, one way is to build it with 'make -j'.

I've just compiled 2.4.32 with 'make -j bzImage' on a dual PIII 500MHz
machine with 1.5Gb RAM - the number of processes shown in 'top' crept
slowly towards 700; the highest number I saw was 767.

/Per Jessen, Zürich
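If you want to watch that number without keeping top open, here is a
minimal sketch, assuming a Linux system with /proc mounted, that counts the
purely numeric entries in /proc, i.e. the processes currently in the
process table:

  #include <stdio.h>
  #include <ctype.h>
  #include <dirent.h>

  int main(void)
  {
      DIR *dir = opendir("/proc");
      struct dirent *de;
      long count = 0;

      if (dir == NULL) {
          perror("opendir /proc");
          return 1;
      }

      /* Each running process appears in /proc as a directory named
         after its PID, so counting all-digit entries counts processes. */
      while ((de = readdir(dir)) != NULL) {
          const char *p = de->d_name;
          int numeric = (*p != '\0');

          for (; *p != '\0'; p++) {
              if (!isdigit((unsigned char)*p)) {
                  numeric = 0;
                  break;
              }
          }
          if (numeric)
              count++;
      }
      closedir(dir);

      printf("processes: %ld\n", count);
      return 0;
  }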
On 2/2/06, Per Jessen <per@computer.org> wrote:
Michael Green wrote:
I cannot believe compiling the kernel requires ~700 processes running in parallel.
Compiling the kernel _requires_ just a single process, but if you build the kernel to stress a machine, one way is to build it with 'make -j'
I've just compiled 2.4.32 with 'make -j bzImage' on a dual PIII 500MHz machine with 1.5Gb RAM - the number of processes shown in 'top' crept slowly towards 700, the highest number I saw was 767.
I just started a 2.6 compilation with the -j switch, causing make to invoke
more than 2700 processes and still increasing. It affects the
responsiveness of my system (Athlon XP 1600, 1 GB) significantly.

I was talking about the defaults, initially; by default make is a nice
citizen. Giving the -j switch without a number (i.e. an unlimited number of
parallel jobs) is a bad idea, at least on multi-user systems :-)

\Steve
--
Steve Graegert <graegerts@gmail.com>
Software Consultant {C/C++ && Java && .NET}
Office: +49 9131 7123988  Mobile: +49 1520 9289212
participants (7)
- Carl Hartung
- Greg Wallace
- James Wright
- ken
- Michael Green
- Per Jessen
- Steve Graegert