On 04/12/2019 19.13, Carlos E. R. wrote:
On 04/12/2019 19.07, James Knott wrote:
On 2019-12-04 12:55 PM, Carlos E. R. wrote:
Ok, but I think that exFAT is better than NTFS, on cards and sticks at least; it is optimized for that use. Unless you need to use the stick/card on some old machine that doesn't understand that filesystem, like an ancient TV set.
How is EXT4 on those devices? That's what most of my storage is configured for.
I don't know. I use it with the journal disabled.
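(If I recall correctly, you can disable the journal at format time with "mkfs.ext4 -O ^has_journal", or remove it from an existing, unmounted filesystem with "tune2fs -O ^has_journal".)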
However, many sticks are actually optimized for FAT: they use smaller write blocks in that area, optimizing it for frequent writes. I don't have a citation for this; I read it in some report.
Remember that they normally use a "write block" (I don't know the actual name) that is not the 512 bytes or 4 KiB the filesystem uses, but some larger value, perhaps 16 KiB (I'm not sure), meaning that when you write a 512-byte block, the entire 16 KiB section is actually rewritten. But the FAT area has a smaller block, of unknown size.
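Just to put a number on that, a toy Python calculation (the 16 KiB write unit is my guess above, not a measured value):

  WRITE_UNIT = 16 * 1024   # assumed internal write unit of the stick, bytes
  IO_SIZE = 512            # block size the filesystem actually writes, bytes

  # Each small write forces a read-modify-write of the whole unit, so the
  # medium rewrites WRITE_UNIT bytes to store IO_SIZE bytes of payload.
  print(f"write amplification: {WRITE_UNIT // IO_SIZE}x")   # -> 32x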
I found the reference for this. The link is dead, but I found a post of mine quoting the relevant section (Google no longer finds the referenced text):

It appears that the makers of these sticks do optimize the media to allow for multiple write cycles in the start region of the flash area, precisely where the FAT would go. If you format with a different filesystem like ext3 or NTFS, wear, and thus life expectancy, is impaired. Also, the partition table is laid out differently from what fdisk produces if you repartition the drive (see the PS at the end for a quick illustration).

<https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey>
LINARO: Flash memory card design

+++·······························

FAT optimization
================

Most portable flash media come preformatted with a FAT32 file system. This is not only done because this file system is supported by all operating systems; it is actually a reasonably good choice for the media: the data on a FAT32 file system is always written in clusters of e.g. 32 KB, the media are normally formatted with a cluster size matching the optimum write size and with the clusters aligned to the start of internal units, and the access patterns on a FAT32 file system are relatively predictable, alternating between data blocks, file allocation table (FAT) and directories.

The cards take advantage of this knowledge by optimizing for the access patterns that are observed on FAT32, which unfortunately can lead to worst-case access patterns when using ext3 or other Linux file systems. In particular, the following (mis-)optimizations are commonly seen on flash media:

* Most allocation groups are in linear write mode by default; only the first one or two allocation groups allow efficient write patterns in smaller units at all times. Since the FAT is known to be at the beginning of the partition, the controller only needs to allow writing small updates there, while it can expect other parts of the medium to be used for large image or video files.

* SDHC cards in particular rely on a specific partition table layout that guarantees the start of the partition to be aligned to a full allocation group (typically 4 MB), so that the FAT actually ends up in the location that is optimized for it. Repartitioning the device with fdisk usually moves the start of each partition to a cylinder boundary in C/H/S addressing. For legacy reasons and backward compatibility with MS-DOS, the cylinder usually has 255 heads and 63 sectors of 512 bytes, which puts the start of the first partition just behind the optimum area. To make matters much worse, the alignment of the partition then becomes just 512 bytes instead of 4 MB, which gives worst-case behavior when the file system attempts to do aligned writes to the partition.

* Only a small number of allocation groups is kept open at a time; on many SD cards only a single one, and the largest observed number of open erase blocks was ten. Writing data to another allocation unit while multiple units are open causes the least recently used one to go through garbage collection. In the worst case, this can lead to the card writing a full allocation unit of multiple megabytes for each 512-byte block that gets written by the file system.
  All authentic SanDisk cards tested so far can write to six allocation units, and keep the most commonly written ones open, while most cheap cards have a smaller number and also use a simple one-stage least-recently-used algorithm for deciding which AU to clean up.

* The smallest write unit is significantly larger than a page. Reading or writing less than one of these units causes a full unit to be accessed. Trying to do streaming writes in smaller units causes the medium to do multiple read-modify-write cycles on the same write unit, which in turn causes multiple garbage collection cycles for writing a single allocation group from start to end. Small SD (non-SDHC) and MMC cards, as well as most USB sticks of any size, use the page size as write unit.

·······························++-

More links:

<http://www.h-online.com/open/features/Kernel-Log-Coming-in-3-8-Part-1-Filesystems-and-storage-1788524.html>
Kernel Log - Coming in 3.8 (Part 1) -- Filesystems and storage (F2fs)

<http://lwn.net/Articles/518988/>
LWN: An f2fs teardown

<http://lwn.net/Articles/470553/>
LWN: Improving ext4: bigalloc, inline data, and metadata checksums
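PS: a toy Python check for the alignment problem the quoted text describes. The 4 MB allocation unit is the figure from the quote, and the start sectors are just the usual fdisk defaults; nothing here was measured on a real card:

  SECTOR_SIZE = 512
  AU_SIZE = 4 * 1024 * 1024   # allocation unit size, taken from the quote

  def au_aligned(start_sector):
      # A partition start is fine if its byte offset falls on an
      # allocation-unit boundary.
      return (start_sector * SECTOR_SIZE) % AU_SIZE == 0

  print(au_aligned(63))     # False: old C/H/S default (offset 32256 bytes)
  print(au_aligned(2048))   # False: modern 1 MiB alignment, still not 4 MiB
  print(au_aligned(8192))   # True: lands exactly on a 4 MiB boundary

  # The start sector of a partition can be read from e.g.
  # /sys/block/sdb/sdb1/start

--
Cheers / Saludos,

Carlos E. R.
(from 15.1 x86_64 at Telcontar)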