----- Original Message -----
From: "Alleman, Lowell"
It looks like you posted this a few days back. If you're still having this problem, here are some things to try:
I believe that you can modify the maximum number of open files by looking at files in the /proc file system. (I didn't look at any docs, I'm just going from memory here... I did try this on my system and it appears to work the way I expected.)
To see the maximum number of files type in:
# cat /proc/sys/fs/file-max
8192
My system reports 8192. I think you can change this number by logging in as root and doing something like this:
# echo 16384 > /proc/sys/fs/file-max
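A minimal sketch of the steps above, assuming a Linux system (reading is safe for any user; writing needs root). Note that a value echoed into /proc is lost on reboot, so a persistent setting has to be re-applied at boot, e.g. from a boot script or, on sysctl-based setups, /etc/sysctl.conf:

```shell
#!/bin/sh
# Read the kernel-wide limit on open files (safe for any user):
cat /proc/sys/fs/file-max

# Raise it for the running kernel (root only), e.g.:
# echo 16384 > /proc/sys/fs/file-max

# A change made via /proc does not survive a reboot; on sysctl-based
# setups a persistent setting can go in /etc/sysctl.conf, e.g.:
# fs.file-max = 16384
```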
I checked to see if my entry was accepted by re-running the first command. This time I got back '16384', so it looks like it worked. I don't have an easy way to test this limit on my system... I can't imagine that you would ever really need that many files open at once, but I'm no expert...
You can see what files are open by using the 'lsof' (LiSt Open Files) command:
# lsof
You may need to install it yourself. (rpm -ivh lsof-x.xx.rpm)
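If lsof isn't installed, a rough substitute using the /proc filesystem can show which processes hold the most file descriptors. This is only a sketch (not part of lsof), and without root it can only read your own processes' fd directories:

```shell
#!/bin/sh
# Count open file descriptors per process by listing /proc/<pid>/fd.
# Prints "<count> <pid>", largest first.
for p in /proc/[0-9]*; do
    n=$(ls "$p/fd" 2>/dev/null | wc -l)
    echo "$n ${p##*/}"
done | sort -rn | head
```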
If this doesn't help, you may want to check your installation. I'm suggesting this because what you have described sounds kinda weird, and it never hurts to verify the integrity of your system. You can easily check the installed packages by running "rpm -Va" (this will (V)erify (a)ll of the packages on your system). Pull up the rpm man page for help on interpreting the output.
-----Original Message-----
From: Morten Christensen [SMTP:suse-sle@indbakke.dk]
Sent: Saturday, March 16, 2002 9:41 AM
To: suse-linux-e@suse.com
Subject: [SLE] Too many open files
SuSE 7.3 Pro. ReiserFS on the hard drive. Enough free space on the partitions.
I cannot mount my CD drive. On the first attempt I get the message:
/bin/bash: error while loading shared libraries: libhistory.so.4: cannot open shared object file: Error 23
On the second attempt I get:
/root/bin/mount: /root/bin/mount: bad interpreter: Too many open files in system
I think I got a lot of open files when unpacking a rar archive. I manually deleted about 19,000 files from /tmp.
Any ideas?
Regards, Morten Christensen from Denmark
-- To unsubscribe send e-mail to suse-linux-e-unsubscribe@suse.com For additional commands send e-mail to suse-linux-e-help@suse.com
Also check the FAQ at http://www.suse.com/support/faq and the archives at http://lists.suse.com
You might want to look at ulimit:
ulimit -a -- to see some limits.
You can also adjust them. See bash(1), the section on ulimit.
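For example (ulimit is a shell builtin; the numbers shown are typical defaults, not guaranteed). Note that ulimit -n is the per-process limit, which is separate from the kernel-wide /proc/sys/fs/file-max cap:

```shell
#!/bin/sh
# Show the per-process open-file limit for this shell (often 1024):
ulimit -n

# Show all resource limits for this shell:
ulimit -a

# A user can lower the soft limit, or raise it up to the hard limit;
# raising the hard limit itself requires root, e.g.:
# ulimit -n 4096
```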
Regards
Dan
----- Original Message -----
From: "Morten Christensen"
----- Original Message -----
From: "Alleman, Lowell"
To: "'Morten Christensen'"
Sent: Friday, March 22, 2002 7:53 PM
Subject: RE: [SLE] Too many open files

Thanks for your answer.
It looks like you posted this a few days back. If you're still having this problem, here are some things to try:
I believe that you can modify the maximum number of open files by looking at files in the /proc file system. (I didn't look at any docs, I'm just going from memory here... I did try this on my system and it appears to work the way I expected.)
To see the maximum number of files type in:
# cat /proc/sys/fs/file-max
8192
My system reports 8192. I think you can change this number by logging in as root and doing something like this:
# echo 16384 > /proc/sys/fs/file-max
I tried this advice.
Now a mount attempt gives this answer:
/root/bin/mount: fork: Resource temporarily unavailable
And the new file-max disappears after a reboot.
A really nasty "blue screen of death", just in a black console :-[
Regards, Morten Christensen
----- Original Message -----
From: "dan.am"
You might want to look at ulimit:
ulimit -a -- to see some limits.
You can also adjust them. See bash(1), the section on ulimit.
It says: open files 1024, far lower than /proc/sys/fs/file-max's 8192. That part is above my learning skills :-[
Regards, Morten Christensen, still missing a way to avoid a complete reinstall
participants (2):
- dan.am
- Morten Christensen