Greg Freemyer wrote:
ntfs-3g has been supported since 10.3. It provides full ntfs read/write.
How stable and how fast is it?
How well does it handle the esoteric NTFS stuff not in Linux -- like the access lists and such?
Access Lists are in Linux!!!! They just have different values.
Exactly -- sorry, I wasn't clear. I'm aware of Linux access lists, but not only are the semantics different (oddly 'NT'), I was also interested in cross-compatible file formats between NTFS and Linux so I could use Linux tools, for example, to back up and restore NTFS filesystems. Backups on Windows into a dump or tar format would be nice -- but with tar, for example, I don't think it would restore all the XP ACLs and modes, not to mention NTFS's alternate data streams. The closest Linux equivalent is extended attributes, which several filesystems support (ext2/3/4, XFS, Btrfs, ...) -- though they're fairly limited: values are stored something like 'environment' vars (name=value), with per-attribute size limits (roughly one filesystem block on ext*, up to 64K per value on XFS). Useful for storing limited ACL stuff, but I'm not sure about general usefulness.
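To make the name=value point concrete, here is a minimal sketch of Linux extended attributes from Python, using `os.setxattr`/`os.getxattr` (Linux-only calls). The attribute name `user.comment` and the fallback-to-`None` behavior are my own choices for illustration; some filesystems (notably older tmpfs) reject `user.*` attributes entirely.

```python
import errno
import os
import tempfile

def xattr_roundtrip(path, name, value):
    """Set a user xattr and read it back; return None if the
    underlying filesystem doesn't support user xattrs."""
    try:
        os.setxattr(path, name, value)
        return os.getxattr(path, name)
    except OSError as e:
        if e.errno in (errno.ENOTSUP, errno.EOPNOTSUPP):
            return None
        raise

# Demo on a temp file in the current directory (more likely than /tmp
# to sit on a filesystem that supports user xattrs).
with tempfile.NamedTemporaryFile(dir=".", delete=False) as f:
    path = f.name
try:
    print(xattr_roundtrip(path, "user.comment", b"from-backup"))
finally:
    os.unlink(path)
```

The same name=value pairs are what `tar`'s xattr-aware modes would have to carry to make a metadata-preserving backup possible.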
man -k acl shows you a few Linux tools for working with them.
Yeah -- I keep meaning to play with them, but something irks me: if I compile a kernel with ACLs, then later run a kernel without ACL support, aren't the access lists ignored? I.e. not only would the lists get ignored, but with copying and restoring by ACL-ignorant utilities, the lists might just disappear -- even once I'm back on a kernel with ACL support. It may be a non-issue, but I look at NTFS and figure it's unlikely you'd boot up a version of NT without ACL support. OTOH, I don't know how the Linux NTFS driver respects or deals with ACLs, so it may be a similar problem -- which could make a dual-boot machine an easy way to get around ACLs.
Wasn't there a 4G limit on FAT32?...or is that just XP creation?
File limit, yes -- 4 GiB minus one byte per file. The 32GB figure is XP's format tool refusing to create larger FAT32 volumes, not a limit of the filesystem itself. I have done 750GB in Linux with mkfs.vfat.
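Those limits fall straight out of FAT32's on-disk field widths; a quick back-of-envelope sketch (the cluster-size ceiling of 32 KiB is the largest widely compatible value, and the exact number of reserved cluster values is approximate):

```python
# FAT32 on-disk field widths explain the limits discussed above.
MAX_FILE_SIZE = 2**32 - 1              # file size is a 32-bit byte count: 4 GiB - 1
FAT_ENTRY_BITS = 28                    # "FAT32" entries really use only 28 bits
MAX_CLUSTERS = 2**FAT_ENTRY_BITS - 11  # roughly; a few entry values are reserved
MAX_CLUSTER_SIZE = 32 * 1024           # 32 KiB: largest widely compatible cluster

max_volume = MAX_CLUSTERS * MAX_CLUSTER_SIZE
print(f"max file size : {MAX_FILE_SIZE / 2**30:.1f} GiB")  # 4.0 GiB
print(f"max volume    : {max_volume / 2**40:.1f} TiB")     # 8.0 TiB
# XP's 32GB ceiling is a limit of its format tool, which is why
# mkfs.vfat can produce a 750GB volume without breaking the spec.
```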
Yeah -- 32GB -- that sounds familiar (been a while since I created a FAT32 on XP). The linux FAT utils are sweet, but I have made FAT partitions on linux that XP refused to recognize...never was quite sure why -- all the params seemed to be "within reason"...(kept allocation units <64K, for example, FAT was under 256K in mem)... Wasn't so important for what I needed at the time...I think I was trying for the minimum file entries in the root dir & 1 FAT (was going to put the pagefile on it).
Still, with a 32-bit FAT (1G), isn't it pretty much the case that the FATs themselves need to be resident in memory all at once to maintain consistency? That sort of limits how big volumes can get.
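The FAT is smaller than it first sounds, since there is one 4-byte entry per cluster, not per byte. A back-of-envelope sketch (32 KiB clusters are an assumption; smaller clusters would make the FAT proportionally larger):

```python
def fat_size_bytes(volume_bytes, cluster_bytes=32 * 1024, entry_bytes=4):
    """Approximate in-memory size of one FAT for a FAT32 volume:
    one 4-byte entry per cluster."""
    clusters = volume_bytes // cluster_bytes
    return clusters * entry_bytes

for gb in (32, 750):
    size = fat_size_bytes(gb * 10**9)
    print(f"{gb} GB volume -> FAT is about {size / 2**20:.0f} MiB")
```

Even a 750 GB volume's FAT is well under 100 MiB, so caching it whole is plausible on modern hardware.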
Don't think so. I have not noticed our large drives being particularly slow.
That doesn't surprise me -- I'd expect linux to fix that before windows and make it faster.
Processes don't update FAT tables in general. The kernel FS driver does that. And I'm sure they have it down pat by now.
Yeah -- for the sake of integrity, it's probably sufficient to limit access to a single writer or multiple readers. I was just thinking of multiprocessing cases where one process is updating the FAT while another is reading it -- the reader could get inconsistent values if it read the FAT table in the middle of an update. That could cause some unpredictability, but it's easily avoided by requiring exclusive access for anyone who wants to write.
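The exclusive-writer idea can be sketched in a few lines. This is a toy in-memory "FAT" (a dict mapping cluster to next cluster), not how any real FAT driver is structured; a single lock ensures a reader can never observe a half-updated chain:

```python
import threading

class ToyFat:
    """Toy cluster-chain table guarded by one lock: writers get
    exclusive access, so readers never see a half-finished update."""

    def __init__(self):
        self._lock = threading.Lock()
        self._next = {}  # cluster -> next cluster in the chain

    def relink(self, chain):
        # Rewrite an entire chain atomically with respect to readers.
        with self._lock:
            for a, b in zip(chain, chain[1:]):
                self._next[a] = b
            self._next[chain[-1]] = None  # end-of-chain marker

    def walk(self, start):
        # Follow a chain under the same lock; always sees a consistent table.
        with self._lock:
            out, c = [], start
            while c is not None:
                out.append(c)
                c = self._next[c]
            return out

fat = ToyFat()
fat.relink([2, 9, 4])
print(fat.walk(2))  # [2, 9, 4]
```

A real driver would use finer-grained locking, but the invariant is the same: no reader runs while a writer holds the table.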
I know there are plenty of filesystems on Linux -- but virtually none of them are ported to Win32, and I can't see NTFS becoming a de facto industry standard as long as MS sits on it as proprietary.
We still use FAT as our open standard. With big files we break them apart via split. Re-assemble with cat.
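The split/cat workflow is simple enough to sketch end to end. The file names and the 4 KB demo chunk size below are just for illustration; in practice the chunk size would sit under the FAT file-size limit:

```python
import os

def split_file(path, chunk_bytes):
    """Split path into path.000, path.001, ...; each part <= chunk_bytes.
    Equivalent in spirit to `split -b` with numeric suffixes."""
    parts = []
    with open(path, "rb") as src:
        i = 0
        while chunk := src.read(chunk_bytes):
            part = f"{path}.{i:03d}"
            with open(part, "wb") as dst:
                dst.write(chunk)
            parts.append(part)
            i += 1
    return parts

def join_files(parts, out_path):
    """Reassemble the parts in order, like `cat path.* > out`."""
    with open(out_path, "wb") as dst:
        for part in parts:
            with open(part, "rb") as src:
                dst.write(src.read())

# Demo with a toy 10 KB file and 4 KB chunks.
data = os.urandom(10_000)
with open("big.bin", "wb") as f:
    f.write(data)
parts = split_file("big.bin", 4096)
join_files(parts, "big.rejoined")
print(len(parts))  # 3 parts: 4096 + 4096 + 1808 bytes
```

Because the numeric suffixes sort lexically, a plain shell glob reassembles the parts in the right order too.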
Yeah -- a pain, especially when copying a large file. Someone else claimed a 4G FAT limit, but I know I hit limits at 2G more than once, so I'm fairly sure that going above 2G isn't really "safe" or "portable". It might have been the 32-bit software accessing the file -- it's been a while since I ran into it with FAT (because I switched to NTFS on Windows over the large-file problem).
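The 2G wall is almost certainly the signed 32-bit file-offset limit of pre-large-file-support userspace, one bit short of FAT32's own unsigned 32-bit size field -- my reading of the symptom, not something stated in the thread:

```python
# Why apps hit a wall at 2 GB on a filesystem whose own limit is 4 GiB - 1:
# a non-LFS 32-bit program uses a *signed* 32-bit off_t for file offsets.
OFF_T_32_MAX = 2**31 - 1    # largest offset a signed 32-bit off_t can hold
FAT32_FILE_MAX = 2**32 - 1  # FAT32 stores file size in an unsigned 32-bit field

print(OFF_T_32_MAX)    # 2147483647  (~2.0 GiB)
print(FAT32_FILE_MAX)  # 4294967295  (~4.0 GiB)
```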
Our industry (Computer Forensics) actually has lots of tools that work with the split files, since the need is so great.
I'm sure -- but for user-level software and cygwin, I can't see them automatically supporting files over 2GB if the OS doesn't natively support it. Been a while. As far as I know, ext3 wouldn't be a good choice for removable disks because the journaling would cause excess wear -- and I don't think all flash media have 'nearly unlimited' (from a user perspective, not in an absolute sense) write-cycle capacity...residual capacitance charge?