On Wed, Oct 25, 2023 at 9:58 PM Carlos E. R. wrote:
On 2023-10-25 20:47, Aaron Digulla wrote:
On 25.10.23 at 17:42, Simon Becherer wrote:
On 25.10.23 at 16:42, Aaron Digulla wrote:
On 25.10.23 at 15:36, Simon Heimbach wrote:
...
Sorry, I didn't phrase that well.
The SSD will have a small percentage (say 1-5%, I don't know for sure) of blocks as spares. Unlike on a normal hard disk, those spares are in use all the time. In a traditional hard disk, when a block starts to fail, the controller remaps it to a spare, so the spares are kept only for emergencies.
With SSDs, the controller remembers how often each block has been written. When you write new data, it finds a free block with the least usage, writes the data there, and records "the data for block 15 now lives at physical address 0x..."; it increments that block's usage counter, and the old physical block goes back into the free pool. That way, every memory block gets roughly the same number of writes, even if you write block 15 a million times.
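
A minimal sketch of that mapping scheme in Python, purely for illustration; the class and names are made up here and this is not any real controller's algorithm:

    class ToyWearLevelFTL:
        # Toy flash translation layer: logical blocks are remapped to the
        # least-worn free physical block on every write.
        def __init__(self, physical_blocks):
            self.mapping = {}                    # logical block -> physical block
            self.wear = [0] * physical_blocks    # per-block write counter
            self.free_pool = set(range(physical_blocks))

        def write(self, logical_block, data):
            # Pick the least-worn free physical block for the new data.
            target = min(self.free_pool, key=lambda b: self.wear[b])
            self.free_pool.remove(target)
            self.wear[target] += 1               # "increment the usage counter"
            # (a real controller would program `data` into `target` here)
            old = self.mapping.get(logical_block)
            if old is not None:
                self.free_pool.add(old)          # old physical block is free again
            self.mapping[logical_block] = target

    ftl = ToyWearLevelFTL(physical_blocks=8)
    for _ in range(1000):
        ftl.write(15, b"same logical block, over and over")
    print(ftl.wear)   # counters stay roughly equal across all 8 blocks

Even though only logical block 15 is ever written, the writes land evenly across all the physical blocks.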
Do you know where that metadata is stored?
And it matters exactly how?
Storing it in the disk itself would cause a lot of wear.
The total TBW (terabytes written) promised by the vendor accounts for any internal overhead: bookkeeping, the translation layer, wear levelling, garbage collection, write amplification, etc. You never get to see this extra overhead, nor do you know the physical capacity of your SSD, because you never use raw flash; you use the whole device together with its controller and the algorithms that manage the raw flash.
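
As a rough illustration of how that overhead hides behind the rating; every number below is an assumption for the example, not any vendor's figure:

    # Back-of-the-envelope endurance arithmetic.
    tbw_rating = 600          # TB of *host* writes promised by the vendor
    waf = 2.0                 # assumed write amplification factor (host -> NAND)
    overprovision = 0.07      # assumed spare capacity hidden from the user

    nand_writes = tbw_rating * waf
    raw_capacity = 1000 / (1 - overprovision)   # GB of flash behind a "1 TB" drive

    print(f"{tbw_rating} TB of host writes -> ~{nand_writes:.0f} TB hitting the NAND")
    print(f"a '1 TB' drive carries ~{raw_capacity:.0f} GB of raw flash")

The vendor sizes the TBW promise so that the flash survives the amplified internal traffic, which is why the user never has to account for it.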
Some type of permanent RAM?