Mike McMullin wrote:
We see them as being intelligent because they make our lives so much easier, but every file system has its pluses and minuses. The real answer, which I don't have, is how a particular file system handles constantly expanding files, and the disk allocation for them.
That information is available online and in quality bookstores.
OTOH, PC file systems are more like an office where files are tossed randomly after use, and every weekend people are hired to come in and organize them.
That's not true. FAT sets aside a fixed amount of room for a file, the cluster size; when a file grows past the clusters allotted, another cluster is pointed to in the FAT table and the file is continued there. This scheme ends up wasting space and leaving files in non-contiguous pieces. One defrags in order to rewrite the files to disk contiguously, avoiding the overhead of the disk's read/write head jumping all over the disk surface.
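The effect is easy to demonstrate. Here's a toy sketch (not real FAT code, just a first-free-cluster allocator like FAT's): two files growing alternately end up with their clusters interleaved, and "defragging" is just rewriting each chain contiguously.

```python
# Toy model: clusters are list slots, a "FAT chain" is the list of
# cluster indexes belonging to a file, in order.

def allocate(disk, fat, name, nclusters):
    """Append nclusters to file `name`, taking the first free clusters."""
    for _ in range(nclusters):
        i = disk.index(None)                 # first free cluster
        disk[i] = name
        fat.setdefault(name, []).append(i)   # extend the file's chain

def defrag(disk, fat):
    """Rewrite every chain contiguously, file by file."""
    new_disk, new_fat, pos = [None] * len(disk), {}, 0
    for name, chain in fat.items():
        new_fat[name] = list(range(pos, pos + len(chain)))
        for i in new_fat[name]:
            new_disk[i] = name
        pos += len(chain)
    return new_disk, new_fat

disk, fat = [None] * 12, {}
for _ in range(3):              # files A and B grow in alternating bursts
    allocate(disk, fat, "A", 2)
    allocate(disk, fat, "B", 2)

print(fat["A"])                 # [0, 1, 4, 5, 8, 9] -- non-contiguous chain
disk, fat = defrag(disk, fat)
print(fat["A"])                 # [0, 1, 2, 3, 4, 5] -- contiguous again
```

Real defraggers obviously have to move live data safely on a spinning disk, but the interleaving above is exactly why growing files fragment under this kind of allocator.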
Perhaps I didn't make it clear, but the office files analogy was, well, an analogy. I can't even claim credit for it; I originally heard it from a French linux user. But the effect is the same: files get fragmented and have to be reorganized.
Having said that, there is a possibility of some fragmentation in Unix file systems, and there have even been some tools to reorganize things, just as there have been anti-virus companies offering linux antivirus programs (!?), but in practice neither is ever needed.
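For anyone curious whether their files are fragmented at all, the ext family does expose this: e2fsprogs ships a `filefrag` tool that reports how many extents a file occupies (1 extent means perfectly contiguous). A quick sketch wrapping it, assuming `filefrag` is installed and on the PATH:

```python
import re
import subprocess

def parse_extents(filefrag_output):
    """Pull the extent count out of a filefrag report line,
    e.g. '/var/log/messages: 4 extents found' -> 4."""
    m = re.search(r"(\d+) extents? found", filefrag_output)
    return int(m.group(1)) if m else None

def extent_count(path):
    """Ask filefrag (from e2fsprogs) how many extents `path` occupies.
    Assumes the filefrag binary is available."""
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True).stdout
    return parse_extents(out)
```

On a lightly used ext filesystem most files come back as a single extent, which is why nobody bothers defragging.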
How so, in terms of the "defragger"? (I know about Linux viruses.)
How so, in terms of the defragger? I've never defragged a linux filesystem, neither my desktops nor any of the hundreds of servers I've been responsible for, and I don't know any linux user who has.

Joe
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org