Hi All,

I have an interesting problem which I have worked around, but here it is in a nutshell. I have a Java-to-MS-SQL application which uses JDBC to communicate. There is no problem with port access, but every so often the Java process will spit out "Too many open files" and croak.

Now, according to several sources, I should be able to increase the maximum number of open files per process with 'ulimit -n <nnnn>', where nnnn is the number of files I want. However, when I attempt to do this, I get a message from bash:

  billp@nermal:~> ulimit -n 4096
  bash: ulimit: cannot modify limit: Operation not permitted

Here is the output from 'ulimit -a':

  core file size (blocks)     0
  data seg size (kbytes)      unlimited
  file size (blocks)          unlimited
  max locked memory (kbytes)  unlimited
  max memory size (kbytes)    unlimited
  open files                  1024
  pipe size (512 bytes)       8
  stack size (kbytes)         unlimited
  cpu time (seconds)          unlimited
  max user processes          8191
  virtual memory (kbytes)     unlimited

Now, if I do this as root, it doesn't complain at all (though the change holds for that session only). I have tried modifying /etc/security/limits.conf and /etc/pam.d/login to compensate, but with no success.

To work around the problem, I had to recompile the Linux kernel and change the value of INR_OPEN in /usr/src/linux/include/linux/fs.h from 1024 to 4096, which gives me more files per process. On a machine running the new kernel, this is what 'ulimit -a' now produces:

  core file size (blocks)     0
  data seg size (kbytes)      unlimited
  file size (blocks)          unlimited
  max locked memory (kbytes)  unlimited
  max memory size (kbytes)    unlimited
  open files                  4096
  pipe size (512 bytes)       8
  stack size (kbytes)         unlimited
  cpu time (seconds)          unlimited
  max user processes          2048
  virtual memory (kbytes)     unlimited

According to the information I have read, the default open-file limit per process is 1024. Is that correct, and did I go about raising it the right way?

I have also increased the value of /proc/sys/fs/file-max to 32767 at system boot-up by doing:

  echo 32767 > /proc/sys/fs/file-max

-Bill
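P.S. A few more details and sketches, in case they help.

For the limits.conf route I attempted, my understanding is that the standard entries look roughly like the following (the user name "billp" is just my account, and the pam_limits.so path may differ by distribution):

  # /etc/security/limits.conf
  # soft = the value ulimit -n reports; hard = the ceiling an
  # unprivileged user may raise the soft limit up to
  billp    soft    nofile    4096
  billp    hard    nofile    4096

  # /etc/pam.d/login -- limits.conf is ignored unless pam_limits is loaded
  session    required    /lib/security/pam_limits.so

One catch worth noting: /etc/pam.d/login only covers console logins, and a daemon started from an init script never goes through PAM at all, which may be why this route did nothing for me.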
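To check whether the Java process genuinely needs more descriptors or is slowly leaking them, its open fds can be counted through /proc (a sketch assuming a single java process on the box):

  # count the open file descriptors of the java process;
  # note pidof returns multiple pids if more than one java is running
  ls /proc/$(pidof java)/fd | wc -l

If that number climbs steadily toward 1024 under constant load, the application is probably failing to close something, and raising the limit only postpones the crash.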
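On that note, the usual JDBC culprit is a Statement or ResultSet that is opened but never closed; each one can hold a descriptor (or a socket) until the garbage collector eventually finalizes it. Here is a minimal sketch of the close-in-finally pattern; the class and method names are hypothetical, not taken from my application:

  import java.sql.Connection;
  import java.sql.ResultSet;
  import java.sql.SQLException;
  import java.sql.Statement;

  public class FdSafeQuery {
      // Counts rows in a table, guaranteeing the Statement and
      // ResultSet are closed even if the query throws.
      static int countRows(Connection conn, String table) throws SQLException {
          Statement stmt = null;
          ResultSet rs = null;
          try {
              stmt = conn.createStatement();
              rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table);
              rs.next();
              return rs.getInt(1);
          } finally {
              // Close in reverse order of creation; swallow close errors
              // so they do not mask an exception from the query itself.
              if (rs != null)   { try { rs.close(); }   catch (SQLException e) {} }
              if (stmt != null) { try { stmt.close(); } catch (SQLException e) {} }
          }
      }
  }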
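Finally, rather than echoing into /proc from a boot script, the same system-wide setting can usually be made persistent via /etc/sysctl.conf, assuming the distribution runs 'sysctl -p' at boot (most do):

  # /etc/sysctl.conf -- system-wide total across all processes,
  # distinct from the per-process limit that ulimit governs
  fs.file-max = 32767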