On 2023-10-26 08:42, Andrei Borzenkov wrote:
On Wed, Oct 25, 2023 at 9:58 PM Carlos E. R. wrote:
On 2023-10-25 20:47, Aaron Digulla wrote:
On 25.10.23 at 17:42, Simon Becherer wrote:
On 25.10.23 at 16:42, Aaron Digulla wrote:
On 25.10.23 at 15:36, Simon Heimbach wrote:
...
Sorry, I didn't phrase that well.
The SSD will have a small percentage (say 1-5%, I don't know for sure) of blocks set aside as spares. Unlike on normal hard disks, those spares are in use all the time. In a traditional hard disk, spares are kept only for emergencies: when a block starts to fail, the controller remaps it to a spare.
With SSDs, the controller remembers how often each block has been written. When you write new data, it finds the free block with the least usage, writes the data there, and remembers: "I saved the data for block 15 at the real memory address 0x...". It increments that block's usage counter, and the old physical block goes back into the free pool. That way, every memory block gets roughly the same number of writes, even if you rewrite block 15 a million times.
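To make the idea concrete, here is a toy sketch in Python (my illustration only, not any vendor's firmware; a real flash translation layer works on erase blocks and adds garbage collection, caching and much more):

# Toy model of the remapping described above: each logical block is
# mapped to whichever free physical block has seen the fewest writes.
class ToyFTL:
    def __init__(self, num_physical_blocks):
        self.wear = [0] * num_physical_blocks   # writes per physical block
        self.free = set(range(num_physical_blocks))
        self.mapping = {}                       # logical -> physical

    def write(self, logical, data):
        # pick the least-worn free physical block
        target = min(self.free, key=lambda p: self.wear[p])
        self.free.remove(target)
        # (the actual flash write of `data` to `target` would happen here)
        self.wear[target] += 1
        old = self.mapping.get(logical)
        if old is not None:
            self.free.add(old)                  # old copy rejoins the free pool
        self.mapping[logical] = target

ftl = ToyFTL(8)
for _ in range(1000):
    ftl.write(15, b"payload")                   # hammer "block 15" only
print(ftl.wear)                                 # wear is spread roughly evenly

Even though only logical block 15 is ever written, the physical writes rotate through all eight blocks.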
Do you know where that metadata is stored?
And it matters exactly how?
I just would like to know :-)
Storing it in the disk itself would cause a lot of wear.
The total TBW (terabytes written) promised by the vendor accounts for all internal overhead: bookkeeping, the translation layer, wear levelling, garbage collection, write amplification, etc. You never get to see this overhead anyway, nor do you know the physical capacity of your SSD, because you never use raw flash; you always use the whole device together with its controller and the algorithms it runs to manage the raw flash.
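To put rough numbers on that (purely illustrative, not real drive specs):

# Hypothetical drive rated for 600 TBW of host writes. If we assume a
# write amplification factor (WAF) of 3, the flash must internally
# absorb about three times that in raw writes; the TBW rating already
# budgets for this overhead.
tbw_host = 600                       # vendor rating, in terabytes
waf = 3.0                            # assumed write amplification factor
print(f"raw internal writes: {tbw_host * waf} TB")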
The remapping table would be just one number per sector, but it would mean a full rewrite of the table every time a sector on the disk changes (or maybe not, since the table itself spans several sectors). I thought that would be too much wear. And would the table itself be subject to remapping? Then how can the firmware find it?

--
Cheers / Saludos,
Carlos E. R.
(from openSUSE 15.5 (Laicolasse))