Re: [opensuse-kernel] SSD / filesystem / TRIM
On Tue, Aug 31, 2010 at 9:05 AM, Mark Lord wrote:
On 10-08-31 08:51 AM, Greg Freemyer wrote:
Does hdparm v9.30 have any known kernel dependencies that would keep it from working with 2.6.31?
Or any other known issues that would argue against opensuse pushing it out as a software update for 11.2 and 11.3?
Or at least putting it in an OBS repo so people using wiper.sh/hdparm can use the latest if they load it from the repo..
No issues that I know about. My own opinion is that _anyone_ using hdparm should be using v9.30.
Cheers
Mark, Thanks

Thomas, I opened a bugzilla about it: https://bugzilla.novell.com/show_bug.cgi?id=635920
I hope you'll add yourself as a cc: and test hdparm v9.30 if and when it is available for test.

Thanks
Greg
--
To unsubscribe, e-mail: opensuse-kernel+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse-kernel+help@opensuse.org
Hi!

On 31/08/10 17:05, Greg Freemyer wrote:
On Tue, Aug 31, 2010 at 9:05 AM, Mark Lord wrote:
On 10-08-31 08:51 AM, Greg Freemyer wrote:
Does hdparm v9.30 have any known kernel dependencies that would keep it from working with 2.6.31?
Or any other known issues that would argue against opensuse pushing it out as a software update for 11.2 and 11.3?
Or at least putting it in an OBS repo so people using wiper.sh/hdparm can use the latest if they load it from the repo..
No issues that I know about. My own opinion is that _anyone_ using hdparm should be using v9.30.
Cheers
Mark, Thanks
Thomas, I opened a bugzilla about it: https://bugzilla.novell.com/show_bug.cgi?id=635920
I hope you'll add yourself as a cc: and test hdparm v9.30 if and when it is available for test.
Thanks Greg
hdparm v9.30 seems to work. I've run a quick test using my own build of hdparm (I am using a slightly modified wiper.sh below which picks up my own new hdparm executable instead of /sbin/hdparm).
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
$> sudo ./wiper.sh --verbose --commit /dev/sdb1
wiper.sh: Linux SATA SSD TRIM utility, version 2.8, by Mark Lord.
rootdev=/dev/sdb1
fsmode2: fsmode=read-write
/: fstype=ext4
freesize = 39790388 KB, reserved = 397903 KB
Preparing for online TRIM of free space on /dev/sdb1 (ext4 mounted read-write at /).
This operation could silently destroy your data.
Are you sure (y/N)? y
Creating temporary file (39392485 KB)..
Syncing disks..
Beginning TRIM operations..
get_trimlist=/home/thomas/hdparm-9.30/hdparm --fibmap WIPER_TMPFILE.9880
/dev/sdb:
trimming 3833854 sectors from 64 ranges succeeded
trimming 3375106 sectors from 64 ranges succeeded
trimming 3735549 sectors from 64 ranges succeeded
trimming 3817474 sectors from 64 ranges succeeded
trimming 3915776 sectors from 64 ranges succeeded
trimming 3784705 sectors from 64 ranges succeeded
trimming 3833855 sectors from 64 ranges succeeded
trimming 3866625 sectors from 64 ranges succeeded
trimming 3850238 sectors from 64 ranges succeeded
trimming 3768321 sectors from 64 ranges succeeded
trimming 3883008 sectors from 64 ranges succeeded
trimming 3801087 sectors from 64 ranges succeeded
trimming 3719170 sectors from 64 ranges succeeded
trimming 3555328 sectors from 64 ranges succeeded
trimming 3833855 sectors from 64 ranges succeeded
trimming 3915775 sectors from 64 ranges succeeded
trimming 3915776 sectors from 64 ranges succeeded
trimming 3932160 sectors from 64 ranges succeeded
trimming 3866623 sectors from 64 ranges succeeded
trimming 3899394 sectors from 64 ranges succeeded
trimming 2681297 sectors from 49 ranges succeeded
Removing temporary file..
Syncing disks..
Done.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This raises the following question: How often should the wiper.sh script be run?

Cheers, Thomas
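[Editor's note] Each "trimming N sectors" line in a transcript like the one above can be summed to see how much space a run released. A minimal sketch, assuming wiper.sh's output format as shown and the conventional 512-byte sector size (the function name and regex are invented here, not anything wiper.sh provides):

```python
import re

def total_trimmed_sectors(log_text):
    """Sum the sector counts from wiper.sh 'trimming N sectors ...' lines."""
    return sum(int(n) for n in re.findall(r"trimming (\d+) sectors", log_text))

# Two lines taken from the transcript above:
sample = """\
trimming 3833854 sectors from 64 ranges succeeded
trimming 3375106 sectors from 64 ranges succeeded
"""
sectors = total_trimmed_sectors(sample)
print(sectors)                # 7208960
print(sectors * 512 / 2**30)  # 3.4375 GiB, assuming 512-byte sectors
```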
On Tue, Aug 31, 2010 at 4:03 PM, Thomas Hertweck wrote:
This raises the following question: How often should the wiper.sh script be run?
Cheers, Thomas
For most users, I'd think once a week would be sufficient. That may seem too long, but I really think it should work as long as you are not running a workload that is creating and deleting tons of files (with content) at a high rate.

In theory your SSD is handling known free areas like a FIFO, and it only takes a few msecs after you put a block into the FIFO before it is erased and available to be pulled off the top. So as long as you have a sufficient supply of erased blocks, it can run at full speed, pushing newly freed blocks on the bottom and pulling erased blocks off the top.

If the workload were a database, then every write goes to a newly allocated block, but the old block is immediately freed and put into the erase queue. I.e. SSDs do not allow physical data blocks to be updated in place: every write requires a new data block to be pulled off the free block stack, and the previous physical block is freed for erasing. Thus a process like a database that keeps updating previously allocated data blocks just causes a lot of churn, but doesn't really make the stack of free blocks any smaller.

It is file deletion that leaves the SSD assuming the blocks are valuable and thus not available for erasing, while the filesystem itself knows those blocks will never be used again. Thus a process that creates and deletes files rapidly causes the FIFO of available blocks to shrink.

So the whole purpose of running wiper.sh is to tell the SSD to put the no-longer-allocated blocks into the erase/free queue. As long as the SSD doesn't stall trying to pull blocks off the top of that queue, it really doesn't matter how deep it is. So if you have 10GB of free space on your partition, you only need to call wiper.sh once every 10GB worth of file deletions.

FYI: I put the above in the wiki page.

Greg (the wordy)
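[Editor's note] The FIFO reasoning above can be sketched as a toy model. This is purely illustrative (the class, block counts, and method names are invented for this sketch, not how any real flash translation layer works): overwriting a logical block recycles its old physical block, but a filesystem delete is invisible to the drive until TRIM reports it.

```python
from collections import deque

class ToySSD:
    """Toy model of the free-block FIFO described above."""
    def __init__(self, n_physical):
        self.erased = deque(range(n_physical))  # FIFO of erased, ready blocks
        self.mapping = {}   # logical block -> physical block
        self.stale = []     # freed by the filesystem, unknown to the drive

    def write(self, logical):
        old = self.mapping.get(logical)
        self.mapping[logical] = self.erased.popleft()
        if old is not None:
            self.erased.append(old)  # overwrite recycles the old block

    def fs_delete(self, logical):
        # The drive can't see filesystem deletes: the block stays "in use".
        self.stale.append(self.mapping.pop(logical))

    def trim(self):
        # wiper.sh's role: tell the drive the stale blocks can be erased.
        self.erased.extend(self.stale)
        self.stale.clear()

ssd = ToySSD(8)
ssd.write(0); ssd.write(0)      # database-style overwrite: pool stays stable
print(len(ssd.erased))           # 7 (one block in use, old one recycled)
ssd.write(1); ssd.fs_delete(1)   # create then delete a file: pool shrinks
print(len(ssd.erased))           # 6, until TRIM runs
ssd.trim()
print(len(ssd.erased))           # 7 again
```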
participants (2)
- Greg Freemyer
- Thomas Hertweck