Maarten Sneep tapped away at the keyboard with:
> For a data-acquisition program, I may need to create a *lot* of files in one directory. Does anyone know what the limit is? Does this depend on the inode size? How can I find the limit for my system (apart from trying...)?
The limit is defined by the number of inodes in the filesystem. See dumpe2fs(8).
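If you'd rather check from a script than read dumpe2fs output, here is a minimal Python sketch (os.statvfs is Unix-only; the mount point "/data" is hypothetical):

    import os

    # f_files is the total number of inodes on the filesystem,
    # f_ffree the number still free; their difference is in use.
    st = os.statvfs("/data")
    print("total inodes:", st.f_files)
    print("free inodes: ", st.f_ffree)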
> I currently have an ext2-based system; is there any point in choosing another filesystem for the disk that will hold the data files?
There is practically no limit on the number of files in a single directory; the limit is a filesystem-wide one. If there will be very many small files, then make a filesystem with a large number of inodes. (This is what's typically done for Usenet news spools.)

What you will encounter, though, is the performance limit imposed by the linear search of the directory once it holds several hundred or thousand files. This can quickly saturate filesystem performance.

I'd recommend some sort of name-hashing scheme: put the files into subdirectories based on, say, the first character or two of their names, so that you get fewer than 700 or so subdirectories at the top level. (You want to ensure that each directory itself remains small; say, less than 64 kbytes.) If there will be very many files within each of the subdirectories, then do another level of hashing, i.e. a file named "abcdefg" would be stored as "ab/cd/abcdefg" (a sketch follows at the end of this message). The performance gains can be astonishing.

More complex hashing is of course possible and may be justified if the filenames are all very similar, e.g. a variable-length hash may be appropriate, but then you need to know the filenames beforehand, or be prepared to shuffle them around the directory tree as necessary.

--
Bernd Felsche - Innovative Reckoning
Perth, Western Australia
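For concreteness, a minimal Python sketch of the two-level scheme described above (the function name and root directory are hypothetical; it assumes filenames are at least four characters long):

    import os

    def hashed_path(root, name):
        # Two-level hashing on the first four characters of the
        # filename, e.g. "abcdefg" -> root/ab/cd/abcdefg
        subdir = os.path.join(root, name[:2], name[2:4])
        os.makedirs(subdir, exist_ok=True)  # create ab/cd/ on demand
        return os.path.join(subdir, name)

    # Usage: open(hashed_path("/data", "abcdefg"), "w")
    # creates and writes /data/ab/cd/abcdefg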