[opensuse] SSD and smartctl: Percentage Used Endurance Indicator: 34 ?
I tried smartctl on my SSD with "smartctl -l ssd /dev/sda" and got:

smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.1.10-1.19-desktop] (SUSE RPM)
Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

Device Statistics (GP Log 0x04)
Page Offset Size Value Description
   7 =====  =    =     == Solid State Device Statistics (rev 1) ==
   7 0x008  1    34~   Percentage Used Endurance Indicator
                   |_ ~ normalized value

The manpage says: "ssd - [SCSI] prints the Solid State Media percentage used endurance indicator. A value of 0 indicates as new condition while 100 indicates the device is at the end of its lifetime as projected by the manufacturer. The value may reach 255."

How accurate is this? Does that mean that 34% of the SSD's lifetime is already over?

I do have /var with leafnode/usenet (lots of small files written) on that SSD, but from what I read on the net this shouldn't be a problem, and approx. 2 years is not that much of a lifetime. If all that is correct, it would mean that after another 4 years I have to exchange the SSD. ~6 years - is that the normal lifespan of SSDs nowadays?

Are there other tests for Linux which tell me more about the health of this SSD?

best regards
ME
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On Saturday, 2013-07-06 at 12:48 +0200, MarkusGMX wrote:
How accurate is this?
I guess it is pretty accurate.
Does that mean that 34% of the SSD's lifetime is already over?
Yep, I think so.
I do have /var with leafnode/usenet (lots of small files written) on that SSD, but from what I read on the net this shouldn't be a problem, and approx. 2 years is not that much of a lifetime. If all that is correct, it would mean that after another 4 years I have to exchange the SSD.
Yep.
~6 years - is that the normal lifespan of SSDs nowadays?
It is not based on time of use, but on actual usage. Flash media, including SSDs, allow only a limited number of write operations. After that, they are no longer usable. You can improve the odds. Be sure to mount with "noatime", for instance.
Are there other tests for Linux which tell me more about the health of this SSD?
This is not really a test; it is the disk itself that is telling you that parameter about itself.

--
Cheers,
Carlos E. R. (from 12.3 x86_64 "Dartmouth" at Telcontar)
On 06/07/13 14:00, Carlos E. R. wrote: [...]
On Saturday, 2013-07-06 at 12:48 +0200, MarkusGMX wrote:
How accurate is this?
I guess it is pretty accurate.
Ok.
Does that mean that 34% of the SSD's lifetime is already over?
Yep, I think so.
:-/ I thought that SSDs may last a bit longer...
I do have /var with leafnode/usenet (lots of small files written) on that SSD, but from what I read on the net this shouldn't be a problem, and approx. 2 years is not that much of a lifetime. If all that is correct, it would mean that after another 4 years I have to exchange the SSD.
Yep.
~6 years - is that the normal lifespan of SSDs nowadays?
It is not based on time of use, but on actual usage. Flash media, including SSDs, allow only a limited number of write operations. After that, they are no longer usable.
Ok, so with the current average usage there will only be something like another 4 years. Good to know. Are there some possibilities that smartctl may warn me at, say, "90~ Percentage Used Endurance Indicator", without making my own cron job and some script?
You can improve the odds. Be sure to mount with "noatime", for instance.
Currently on this SSD:

swap is on this SSD
/boot       ext4     acl,user_xattr                           1 2
/var        ext4     acl,user_xattr                           1 2
/windows/B  ntfs-3g  fmask=133,dmask=022,locale=en_GB.UTF-8   0 0

where /windows/B is the 100MB boot partition of Windows 7. The mount options were set up by SuSE 12.1 out of the box. Most of the access/writes will be to /var and swap, I assume. /boot and /windows/B won't see much writing, but that doesn't matter if the SSD fails. :-(

So /var needs a noatime in /etc/fstab? Is there some possible improvement for swap? Does the newer SuSE 12.3 improve the mount options for SSDs when installing 12.3?
Are there other tests for Linux which tell me more about the health of this SSD?
This is not really a test; it is the disk itself that is telling you that parameter about itself.
So it has to be somewhat accurate. Best regards and thanks for your time ME
I don't claim to be any kind of expert and had to do some googling to see what this was about. It seems that flash-based SSDs have a write limit somewhere in the range of 2,000 to 3,000 cycles before failure. What that equates to in time I have no idea. DRAM-based SSDs don't have this limit. It also seems that a big bug-a-boo is the onboard controller.

I also came up with a lifetime for a mechanical hard drive. It appears that after about six years of use you're on borrowed time. They seem to have lifetimes somewhere in the range of 6 to 9 years. I'm sure that how you use your computer has a lot to do with the life of the hard drive. I'm probably amongst the hardest on them, as I hardly ever turn off my computers [desktops].
On Saturday, 2013-07-06 at 08:43 -0500, Billie Walsh wrote:
I don't claim to be any kind of expert and had to do some googling to see what this was about. It seems that flash-based SSDs have a write limit somewhere in the range of 2,000 to 3,000 cycles before failure. What that equates to in time I have no idea. DRAM-based SSDs don't have this limit. It also seems that a big bug-a-boo is the onboard controller.
The Wikipedia article on flash media talks of up to a million cycles.
I also came up with a lifertime for a mechanical hard drive. It appears that after about six years of use your on borrowed time. They seem to have a lifetimes of somewhere in the range 6 to 9 years. I'm sure that how you use your computer has a lot to do with the life of the hard drive. I'm probably amongst the hardest on them as I hardly ever turn off my computer [ Desktops ].
You have to look at the hours of use, not the years. On a laptop it seems that 4000 to 6000 may be the limit, whereas on a desktop 10000 to 20000 is typical (mine have 12000).
On 06.07.2013 20:17, Carlos E. R. wrote: ...
The wikipedia article on flash media talks of up to a million cycles.
...
You have to look at the hours of use, not the years. On a laptop it seems that 4000 to 6000 may be the limit, whereas on a desktop 10000 to 20000 is typical (mine have 12000).
lucky you! My Corsair 60GB SSD from 2008/9 (?) gave up irrecoverably at approx. 4000 hours, during only 3 periods of 6 months over 3 years (with pauses of 6 months unused). I used it for / incl. swap and /boot, without /srv and /home, where all my data is. As I have 16GB of RAM I guess swap was not used very often. I went back to a "normal" hard disk. Yes, booting was a pleasure with the SSD, so incredibly fast, and large programs opened in an instant, but well... Maybe nowadays SSDs are much better, I don't know. It's not a time when things generally get better :-)

Daniel

--
Daniel Bauer photographer Basel Barcelona
professional photography: http://www.daniel-bauer.com
google+: https://plus.google.com/109534388657020287386
On Saturday, 2013-07-06 at 20:46 +0200, Daniel Bauer wrote:
On 06.07.2013 20:17, Carlos E. R. wrote: ...
The wikipedia article on flash media talks of up to a million cycles.
...
You have to look at the hours of use, not the years. On a laptop it seems that 4000 to 6000 may be the limit, whereas on a desktop 10000 to 20000 is typical (mine have 12000).
lucky you!
My Corsair 60GB SSD from 2008/9 (?) gave up irrecoverably at approx. 4000 hours, during only 3 periods of 6 months over 3 years (with pauses of 6 months unused).
Wait. My first paragraph above refers to flash media; the article mentions 10000 cycles for some media, 10⁵ for better media, and 10⁶ for even better media. And my second paragraph refers to standard magnetic media.
Maybe nowadays SSDs are much better, I don't know. It's not a time when things generally get better :-)
I guess they improve. I just read another paragraph in the Wikipedia article where they say that Macronix is working on a device that heals itself with "heaters" inside the chip, improving cycle expectations to 100·10⁶. There are no commercial chips of that class yet. http://en.wikipedia.org/wiki/Flash_memory#Memory_wear - search for Macronix
On Saturday, 2013-07-06 at 14:30 +0200, MarkusGMX wrote:
Are there some possibilities that smartctl may warn me at, say, "90~ Percentage Used Endurance Indicator", without making my own cron job and some script?
Hmmm... maybe... there is the smartd daemon; it might do that, either on its own or via some configuration parameter. I would have to read the manual again to be certain ;-) By default, smartd warns of impending disaster within the following 24 hours; I have seen that happen. It triggers when a parameter passes the trigger point, but that point is defined by the manufacturer.
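If smartd cannot be talked into it, a small cron script would do. A sketch (not tested against every smartctl version - the awk field position assumes output shaped like the listing at the top of this thread, and the mail recipient is a placeholder):

```shell
#!/bin/sh
# Warn when the Percentage Used Endurance Indicator passes a threshold.
# Sketch only: adjust device, threshold and field position as needed.
THRESHOLD=90

# reads "smartctl -l ssd" output on stdin, prints the bare percentage
endurance_used() {
    awk '/Percentage Used Endurance Indicator/ { gsub(/~/, "", $4); print $4; exit }'
}

# demo with the statistics line from the original post; in the real cron
# job you would use:  used=$(smartctl -l ssd /dev/sda | endurance_used)
used=$(printf '7 0x008 1 34~ Percentage Used Endurance Indicator\n' | endurance_used)

if [ -n "$used" ] && [ "$used" -ge "$THRESHOLD" ]; then
    echo "SSD endurance at ${used}% used" | mail -s "SSD wear warning" root
fi
```

Dropped into /etc/cron.daily it would nag once the value crosses the threshold.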
You can improve the odds. Be sure to mount with "noatime", for instance.
Currently on this SSD:

swap is on this SSD
/boot       ext4     acl,user_xattr                           1 2
/var        ext4     acl,user_xattr                           1 2
/windows/B  ntfs-3g  fmask=133,dmask=022,locale=en_GB.UTF-8   0 0

where /windows/B is the 100MB boot partition of Windows 7
The mount options were set up by SuSE 12.1 out of the box.
If you issue the command "mount" you see the actual options used. I think "relatime" is on by default.
Most of the access/write will be /var and swap I assume. /boot and /windows/B won't be much writing but that doesn't matter if the SSD fails. :-(
So the /var needs a noatime in /etc/fstab ? Is there some possible improvement for swap ? Does the newer SuSE 12.3 improve the mounting options for SSDs when installing 12.3 ?
Not that I know. There is another possibility: using "laptop-mode-tools". The settings it changes are more for power saving on laptops, but some of those settings are useful for an SSD, because they mean delaying disk writes, even for minutes. The log is written to often; I think there is an automatic write from the kernel about every 7 seconds or so.
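For reference, a sketch of what the /var entry from earlier in the thread would look like with noatime added (the device name /dev/sda3 is a made-up example; the other fields are the ones the poster listed):

```
/dev/sda3  /var  ext4  noatime,acl,user_xattr  1 2
```

Afterwards "mount -o remount /var" applies it without a reboot.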
This is not really a test; it is the disk itself that is telling you that parameter about itself.
So it has to be somewhat accurate.
Right.
On 06/07/13 08:30, MarkusGMX wrote:
:-/ I thought that SSDs may last a bit longer...
Look on the bright side: at least your SSD has this indicator. I have two that don't support it. :-|
On Saturday, 2013-07-06 at 14:43 -0400, Cristian Rodríguez wrote:
On 06/07/13 08:30, MarkusGMX wrote:
:-/ I thought that SSDs may last a bit longer...
Look on the bright side: at least your SSD has this indicator. I have two that don't support it. :-|
Interesting... are you two using the same brand? I mean, is this a recent addition, or is it something that some brands have and others do not?
* Carlos E. R.
On Saturday, 2013-07-06 at 14:43 -0400, Cristian Rodríguez wrote:
On 06/07/13 08:30, MarkusGMX wrote:
:-/ I thought that SSDs may last a bit longer...
Look on the bright side: at least your SSD has this indicator. I have two that don't support it. :-|
Interesting... are you two using the same brand?
I mean, is this a recent addition, or is it something that some brands have and others do not?
My Intel SSD does not support that indicator :^(. It is 2 years old:

Model: "INTEL SSDSC2MH12"
Vendor: "INTEL"
Device: "SSDSC2MH12"
Revision: "PPG2"
Serial ID: "LNEL107600VA120CGN"

--
(paka)Patrick Shanahan Plainfield, Indiana, USA HOG # US1244711
http://wahoo.no-ip.org Photo Album: http://wahoo.no-ip.org/gallery2
http://en.opensuse.org openSUSE Community Member
Registered Linux User #207535 @ http://linuxcounter.net
On 06/07/13 14:46, Carlos E. R. wrote:
On Saturday, 2013-07-06 at 14:43 -0400, Cristian Rodríguez wrote:
On 06/07/13 08:30, MarkusGMX wrote:
:-/ I thought that SSDs may last a bit longer...
Look on the bright side: at least your SSD has this indicator. I have two that don't support it. :-|
Interesting... are you two using the same brand?
The two I have are: - Intel X25-M G2 SSDs [80.0 GB] - SAMSUNG SSD PM810 [256 GB] -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On 06/07/13 21:59, Cristian Rodríguez wrote:
On 06/07/13 14:46, Carlos E. R. wrote:
On Saturday, 2013-07-06 at 14:43 -0400, Cristian Rodríguez wrote:
On 06/07/13 08:30, MarkusGMX wrote:
:-/ I thought that SSDs may last a bit longer...
Look on the bright side: at least your SSD has this indicator. I have two that don't support it. :-|
Interesting... are you two using the same brand?
The two I have are:
- Intel X25-M G2 SSD [80.0 GB]
- SAMSUNG SSD PM810 [256 GB]
Mine is a Crucial M4 128GB with the "Percentage Used Endurance Indicator". BR ME
I do have /var with leafnode/usenet (lots of small files written) on that SSD, but from what I read on the net this shouldn't be a problem, and approx. 2 years is not that much of a lifetime. If all that is correct, it would mean that after another 4 years I have to exchange the SSD.
Yep.
~6 years - is that the normal lifespan of SSDs nowadays?
It is not based on time of use, but on actual usage. Flash media, including SSDs, allow only a limited number of write operations. After that, they are no longer usable.
Ok, so with the current average usage there will only be something like another 4 years. Good to know.
Are there some possibilities that smartctl may warn me at, say, "90~ Percentage Used Endurance Indicator", without making my own cron job and some script?
Your implication is that when it hits 100% it's lights out. That's not how I understand it.

Your SSD should be overprovisioned 10 or 20% to allow the garbage collector to efficiently maintain a collection of pre-erased erase blocks for your writes to go to. The goal of the wear-leveling algorithm and the garbage collector is to keep the write cycles fairly evenly spread across your EBs (erase blocks), but there will be variation across them, with some likely still holding their original data even as others near end of life. When the garbage collector starts to see some EBs getting near end of life, it should start swapping them out with some of the ones holding extra-stable data. By doing that, a fresh supply of EBs is added into the mix. Thus when the most-used EBs are nearing EOL, a supply of relatively fresh EBs might suddenly be swapped in. The end result is that it is very hard to know when the true EOL for the SSD is.

Further, as individual EBs actually hit EOL, that just pulls them out of the available pool for wear-leveling / garbage collection. The first ones to go won't have a noticeable effect. As more and more are pulled out of the rotation, the SSD performance will start to drop. Discarding your unused data blocks from time to time will maintain performance even as some EBs hit EOL. Ext4 supports fstrim to do that. Adding fstrim to a cron entry once a month (or every reboot) might help keep your SSD in better shape.

I don't know what will happen when your SSD hits 100% used, but it may just be that the SSD starts to see a gradual drop in performance. Further, a backup, mkfs with discard, and restore of all the data on the SSD would allow the wear-leveling algorithm to get a full refresh of all the EBs. You might want to consider doing that now so your SSD can get a fresh start at wear leveling.

Greg
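Greg mentions running fstrim from cron; a minimal sketch of such a job (the mount points are the ones discussed in this thread - adjust to your layout; fstrim needs root and a TRIM-capable device, hence the fallback message):

```shell
#!/bin/sh
# e.g. /etc/cron.monthly/fstrim-ssd: discard unused blocks on the SSD
# filesystems once a month (a sketch; edit the mount point list)
trimmed=""
for fs in / /var /boot; do
    # fstrim fails harmlessly on filesystems without TRIM support
    fstrim -v "$fs" 2>/dev/null || echo "fstrim skipped $fs"
    trimmed="$trimmed $fs"
done
echo "done:$trimmed"
```

On newer setups a systemd fstrim timer may do the same thing, but a cron entry like this works anywhere.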
* Carlos E. R.
~6 years - is that the normal lifespan of SSDs nowadays?
It is not based on time of use, but on actual usage. Flash media, including SSDs, allow only a limited number of write operations. After that, they are no longer usable.
You can improve the odds. Be sure to mount with "noatime", for instance. [...]
Is relatime a valid option in fstab for openSUSE >= 12.3? The kernel is 3.9.8 and/or 3.10.0. I saw discussion on the web that relatime would be a happy medium, rather than as extreme as noatime: "relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified." tks,
* Patrick Shanahan
* Carlos E. R.
[07-06-13 08:01]: [...]
~6 years - is that the normal lifespan of SSDs nowadays?
It is not based on time of use, but on actual usage. Flash media, including SSDs, allow only a limited number of write operations. After that, they are no longer usable.
You can improve the odds. Be sure to mount with "noatime", for instance. [...]
Is relatime a valid option in fstab for openSUSE >= 12.3, kernel is 3.9.8 and/or 3.10.0
I saw discussion on the web that relatime would be a happy medium, rather than as extreme as noatime: "relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified."
Made the change to test; added noatime to all but the separate home, which contains "mail", and after remounting I see that noatime was converted to relatime for those partitions containing tmp and/or tmpfs parts.

To save SSD life, I moved swap to a swap file on an md0 non-SSD partition some time ago. The SSD drive contains home, /, /var and /srv, with Music, Downloads and Documents linked to md0 to better utilize space. The SSD is 120 GB, md0 is 2 TB.
On Sat, 6 Jul 2013 11:05:34 -0400, Patrick Shanahan wrote:
To save SSD life, I moved swap to a swap-file on a md0 non-ssd part some time ago.
It is probably better to have a lot of RAM, so that swap is not used that often, and to move swap back to the SSD, as the difference in access speed is rather dramatic; you don't have movable parts that need repositioning, and pure transfer speed is a few times higher.

Of course, if you use hibernation that will tax the swap space, but when one area is worn out, after say 2000 writes, you can use a different part of the disk that wasn't used that much, and repeat that as long as the remaining disk capacity is sufficient for your needs.

The lifetime of a drive is different from individual memory cell endurance, as the built-in controller will stop using a bad block, just the same way it happens with classic, mechanical drives. How far this can go depends on the size of the memory that is allocated to hold the index of bad blocks.

--
Regards, Rajko.
* Rajko
On Sat, 6 Jul 2013 11:05:34 -0400, Patrick Shanahan wrote:
To save SSD life, I moved swap to a swap file on an md0 non-SSD partition some time ago.
It is probably better to have a lot of RAM, so that swap is not used that often, and to move swap back to the SSD, as the difference in access speed is rather dramatic; you don't have movable parts that need repositioning, and pure transfer speed is a few times higher.
8 GB of RAM, so swap is infrequently accessed and access/transfer speed is hardly ever noticed. Perhaps a negligible effect either way, so the present method is really of little importance. Large amounts of RAM do contribute to sloppy management, i.e. many pages open in Firefox although rarely accessed, ...

Private memory:
  Firefox     1.4 GB
  Darktable   1.6 GB
  Virtuoso-t  0.5 GB
Other usage:
  Cache       4.3 GB
  Free        0.9 GB
  Swap        0.0 GB
Of course, if you use hibernation that will tax the swap space, but when one area is worn out, after say 2000 writes, you can use a different part of the disk that wasn't used that much, and repeat that as long as the remaining disk capacity is sufficient for your needs.
Desktop so hibernation not an issue.
The lifetime of a drive is different from individual memory cell endurance, as the built-in controller will stop using a bad block, just the same way it happens with classic, mechanical drives. How far this can go depends on the size of the memory that is allocated to hold the index of bad blocks.
Understood, tks. Presently considering increasing RAM to 16 GB and removing swap completely, and replacing the 120 GB SSD (really, adding one and reassigning functions) with a 240 GB or 500 GB SSD. tks,
On Saturday, 2013-07-06 at 13:12 -0400, Patrick Shanahan wrote:
Desktop so hibernation not an issue.
I hibernate my desktop several times a day ;-)
On Sat, 6 Jul 2013 13:12:42 -0400, Patrick Shanahan wrote:
Desktop so hibernation not an issue.
In any case, I got corrections in a private mail from Lars Marowsky-Bree that the drive rotates blocks on its own, so there is no need to manually change the swap position. I will find a few articles about SSD specifics and read them again, but obviously I mixed up concepts. A partition is a kernel-level term about grouping parts of a hard disk by their proximity on the platters, while handling blocks is a firmware-level matter. With a classic hard disk it was a good idea to keep replacements close on the disk to minimize time-consuming head movements, but with an SSD there is no real need for that, as setting an address takes almost the same amount of time either way.

--
Regards, Rajko.
On Sat, Jul 6, 2013 at 12:34 PM, Rajko wrote:
Of course if you use hibernation that will tax swap space, but when one is worn out, after say 2000 writes, you can use different part of the disk that wasn't used that much and repeat that as long as remain disk capacity is sufficient for your needs.
That is handled transparently by the wear-leveling algorithm, i.e. every time you write data to an SSD data block, it picks from the available EBs (erase blocks) the one with the least number of writes. I don't think the process is perfect, due to various design tradeoffs, but that is the goal at least.

The other part of the story is the garbage collector, which has the job of going through the partially used EBs and consolidating them into free EBs. When it does this, the formerly used EBs are erased and added to the stack of EBs available for writing.

Greg
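The least-written-first selection Greg describes can be sketched as a toy loop (purely illustrative - real wear leveling happens inside the drive firmware on erase blocks, invisible to the OS):

```shell
#!/bin/sh
# toy wear-leveling: each "write" goes to the erase block with the
# fewest writes so far, so wear stays evenly spread across blocks
counts="0 0 0 0"                 # write counts for 4 erase blocks
for write in 1 2 3 4 5 6 7 8; do
    # find the index of the least-written block
    min_i=0; min_v=999999; i=0
    for c in $counts; do
        if [ "$c" -lt "$min_v" ]; then min_v=$c; min_i=$i; fi
        i=$((i + 1))
    done
    # direct the write there: increment that block's count
    new=""; i=0
    for c in $counts; do
        if [ "$i" -eq "$min_i" ]; then c=$((c + 1)); fi
        new="$new $c"
        i=$((i + 1))
    done
    counts=${new# }
done
echo "writes per block: $counts"
```

Eight writes over four blocks end up as two writes per block, which is the evenly-spread wear the algorithm aims for.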
On Saturday, 2013-07-06 at 22:44 -0400, Greg Freemyer wrote:
That is handled transparently by the wear-leveling algorithm, i.e. every time you write data to an SSD data block, it picks from the available EBs (erase blocks) the one with the least number of writes. I don't think the process is perfect, due to various design tradeoffs, but that is the goal at least.
The other part of the story is the garbage collector, which has the job of going through the partially used EBs and consolidating them into free EBs. When it does this, the formerly used EBs are erased and added to the stack of EBs available for writing.
And where is that metadata stored, in flash, too? Because it will wear out faster, and I think that this part cannot be remapped itself. It would have to be some other type of memory, CMOS RAM perhaps, which could be written to flash periodically or on power down. Just a wild guess.
On Sun, 7 Jul 2013 19:46:36 +0200 (CEST), "Carlos E. R." wrote:
And where is that metadata stored, in flash, too? Fecause it will wear out faster, and I think that this part can not be remapped itself.
Why not? Because it was declared holy and no one should touch it :) It is not firmware that might have a certain entry point hard-coded in the CPU. It is just a table that is referenced from the firmware, which is in flash memory, and it can be changed when the table is moved elsewhere.
It would have to be some other type of memory, cmos ram perhaps, which could be written on flash periodically or on power down. Just a wild guess.
Flash is used to hold the data when power is not present, but it is a different type of flash that can survive 100,000 P/E cycles, unlike the 3,000-5,000 for the rest of the disk. Some use a cache that is RAM with just enough energy backup to allow a transfer to flash when the outside power source shuts down. Some probably use both RAM and the more durable flash. Which option is used depends entirely on the price of the particular solution at a certain point in time. A time of whole new energy storage concepts, with capacities many times higher than current ones, is coming.

--
Regards, Rajko.
On Saturday, 2013-07-06 at 11:05 -0400, Patrick Shanahan wrote:
SSD drive contains home, /, /var and /srv, with Music, Downloads and Documents linked to md0 to better utilize space requirements. SSD is 120 gb, md0 2 tb.
I read recently that flash media has the characteristic that each time you read a cell it has to be rewritten, and that the adjacent cells are affected; after several cycles, the adjacent cells have to be rewritten as well (or something of the sort; I think I got the details wrong). Here: http://en.wikipedia.org/wiki/Flash_memory#Read_disturb It is the Wikipedia, so take it with a grain of salt...
On Saturday, 2013-07-06 at 10:30 -0400, Patrick Shanahan wrote:
* Carlos E. R. [07-06-13 08:01]:
You can improve the odds. Be sure to mount with "noatime", for instance.
Is relatime a valid option in fstab for openSUSE >= 12.3, kernel is 3.9.8 and/or 3.10.0
relatime is on by default on current kernels. I have used noatime for ages on several partitions with no noticeable issue. The only program known to really need that timestamp is mutt - IIRC, you use it.
Saw discussion on web that relatime would be happy medium rather than as extreem as noatime:
Yes.
On Sat, 06 Jul 2013 12:48:42 +0200, MarkusGMX wrote:
(rev 1) == 7 0x008 1 34~ Percentage Used Endurance ... manpage says: "ssd - [SCSI] prints the Solid State Media percentage used endurance indicator. A value of 0 indicates as new condition while 100 indicates the device is at the end of its lifetime as projected by the manufacturer. The value may reach 255."
How accurate is this?
Note that "projected by the manufacturer" is not defined in any way that will tell us how to understand that 34%. Although: http://www.crucial.com/pdf/Tech_specs-letter_Crucial_m4_ssd_v3-11-11_online.... says that the 128GB model has 72TB endurance, which translates to 40GB per day over 5 years. In other words, if smartctl reads the data correctly, then in the last 2 years you wrote 24.5 TB, or 33.5 GB per day (730 days), which from where I stand can be called heavy usage.
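Rajko's arithmetic, spelled out (a sketch using 1 TB = 1000 GB; the 72 TB figure is the data-sheet endurance he quotes):

```shell
# back-of-envelope check of the numbers above
out=$(awk 'BEGIN {
    endurance_tb = 72        # rated endurance from the Crucial m4 data sheet
    used_pct     = 34        # Percentage Used Endurance Indicator
    days         = 730       # roughly two years of use

    written_tb = endurance_tb * used_pct / 100    # lifetime fraction consumed
    per_day_gb = written_tb * 1000 / days
    printf "written %.1f TB = %.1f GB/day", written_tb, per_day_gb
}')
echo "$out"
```

which reproduces the ~24.5 TB written and ~33.5 GB/day quoted above.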
Does that mean, that 34% of the SSDs lifetime is already over?
Maybe. See above.
... ~6 years is that the normal lifespan of SSDs nowadays?
It can be even shorter with heavier usage. With my usage it is more likely that smartctl reads something wrong, or something is happening that should not, creating such a high number of erase/write cycles.
Are there other tests for Linux which tell me more about the health of this SSD?
Not that I know of, but I'm a new SSD owner without really much SSD experience.

What you can do is check your partition mount options. Others have already elaborated on that, but there is one option that is probably missing: "discard". Check as root whether your disk supports TRIM:

hdparm -I /dev/sdX | grep -i TRIM

(X is the drive letter) and if yes, enable it with the option "discard".

http://en.opensuse.org/SDB:SSD_discard_%28trim%29_support#Kernel_support

The article is a bit old, but even 12.3 does not enable discard by default. I just checked 'mount', and neither the root of the file system nor /home has the option "discard" enabled. How to enable it is another question:

https://en.opensuse.org/SDB:SSD_performance

http://forums.fedoraforum.org/showthread.php?t=277082

The latter is pretty new and has a lot of advice, but some of it may not apply to a newer computer with a lot of memory (e.g. the advice about keeping swap on a classic disk).
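That check can be wrapped in a tiny helper; a sketch that parses `hdparm -I` output (`supports_trim` is a hypothetical helper name, and the sample line is canned for illustration):

```shell
# Hypothetical helper: reads `hdparm -I /dev/sdX` output on stdin and
# reports whether the drive advertises TRIM support.
supports_trim() {
    grep -qi 'TRIM supported'
}

# Illustration with a canned output line; on a real system you would
# run (as root):  hdparm -I /dev/sda | supports_trim
printf '%s\n' '   *    Data Set Management TRIM supported (limit 8 blocks)' \
    | supports_trim && echo "TRIM: yes" || echo "TRIM: no"
```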
best regards ME
--
Regards, Rajko.
On Sat, Jul 6, 2013 at 10:00 PM, Rajko wrote:
What you can do is check your partition mount options. Others have already elaborated on that, but there is one option that is probably missing: "discard".
Check as root whether your disk supports TRIM:
hdparm -I /dev/sdX | grep -i TRIM
(X is the drive letter)
and if yes enable it with option "discard".
http://en.opensuse.org/SDB:SSD_discard_%28trim%29_support#Kernel_support
The article is a bit old, but even 12.3 does not enable discard by default. I just checked 'mount', and neither the root of the file system nor /home has the option "discard" enabled.
Kernel "discard" support for ext4 was written years ago, when the devs did not know how SSDs would perform. The kernel discard feature for ext4 in particular intersperses trim calls with data calls. From performance testing, it appears a trim call forces a cache flush in the SSD, so the end result is that you get performance similar to running with the mount option sync (i.e. poor). Thus, in general this turned out to be a bad design, and it is not how Windows 7 does it.

In general, the better solution is to send a batch of trim commands during low-utilization times. That is what Win7 does. For ext4, fstrim will do that for you, so what a user should do is use cron to schedule fstrim calls during periods of low usage for their PC.

Greg
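That cron scheduling can look like the following (a sketch; the Sunday 03:00 schedule, the fstrim path, and the mount points are assumptions, so adjust them to the filesystems that actually live on your SSD):

```shell
# Run `crontab -e` as root and add lines like these: a weekly fstrim
# during a low-usage window (here: Sunday 03:00). The fstrim binary may
# live in /sbin or /usr/sbin depending on the distribution.
# m  h  dom mon dow  command
0    3  *   *   0    /sbin/fstrim /     >/dev/null 2>&1
0    3  *   *   0    /sbin/fstrim /home >/dev/null 2>&1
```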
On 07/06/2013 10:00 PM, Rajko wrote:
tells that 128GB model has 72TB endurance, which translates to 40GB per day during 5 years. In other words if smartctl reads data correctly then you used in last 2 years 24.5 TB writes, or 33.5 GB per day (730 days) which from where I stand can be called a heavy usage.
Even in that "heavy use" case, I do not think it is something to be concerned with, as most consumer devices are replaced after an average of 4.7 years or so, IIRC.
On 07/07/13 04:00, Rajko wrote:
On Sat, 06 Jul 2013 12:48:42 +0200 MarkusGMX
wrote: ...
(rev 1) == 7 0x008 1 34~ Percentage Used Endurance ... manpage says: "ssd - [SCSI] prints the Solid State Media percentage used endurance indicator. A value of 0 indicates as new condition while 100 indicates the device is at the end of its lifetime as projected by the manufacturer. The value may reach 255."
How accurate is this?
Note that "projected by the manufacturer" is not defined in any way that will tell us how to understand that 34%. Although:
http://www.crucial.com/pdf/Tech_specs-letter_Crucial_m4_ssd_v3-11-11_online....
says that the 128GB model has 72TB endurance, which translates to 40GB per day during 5 years. In other words, if smartctl reads the data correctly, then in the last 2 years you wrote 24.5 TB, or 33.5 GB per day (730 days), which from where I stand can be called heavy usage.
Hmm, 33.5GB per day cannot be right. The whole /var has only ~18GB, which would mean writing the complete /var approx. twice per day, which surely does not happen. ;-)

Swap is probably also written more often, but most of the time it seems to be unused, as top says:

Swap: 32655M total, 0M used, 32655M free, 6718M cached

(with 16GB memory). /boot is not a problem imho. So either www.crucial.com is wrong or smartctl. Or both. :-) [...]
... ~6 years is that the normal lifespan of SSDs nowadays?
It can be even shorter with heavier usage.
With my usage it is more likely that smartctl reads something wrong, or something is happening that should not, creating such a high number of erase/write cycles.
If smartctl is correct... 33.5GB per day would be strange for /var and swap. [...]
What you can do is check your partition mount options. Others have already elaborated on that, but there is one option that is probably missing: "discard".
Check as root whether your disk supports TRIM:
hdparm -I /dev/sdX | grep -i TRIM
hdparm -I /dev/sda | grep -i TRIM

gives:

   * Data Set Management TRIM supported (limit 8 blocks)
   * Deterministic read data after TRIM
(X is a drive letter)
and if yes enable it with option "discard".
http://en.opensuse.org/SDB:SSD_discard_%28trim%29_support#Kernel_support
Complicated. It would be nice to have sections per SuSE release on this webpage: e.g. one section for 12.1, one for 12.2, one for 12.3... greg.freemyer@gmail.com wrote in another post that discard is not needed. I had the opinion that SuSE 12.1 handles all that. :-/ Is there some easy switch to enable SSD optimization in yast2? That would be the easiest way. :-) [...]
How to enable is another question:
The question now is: discard or fstrim?
http://forums.fedoraforum.org/showthread.php?t=277082
which is pretty new and has a lot of advice, but some of it may not apply to a newer computer with a lot of memory (the advice about having swap on a classic disk).
I have a lot of RAM, but even with that, sometimes swapping is indicated, and the swap space is also on this SSD.

Thanks to all
ME
On Sunday, 2013-07-07 at 19:26 +0200, MarkusGMX wrote:
On 07/07/13 04:00, Rajko wrote:
...
(rev 1) == 7 0x008 1 34~ Percentage Used Endurance ...
Hmm, 33.5GB per day cannot be right. The whole /var has only ~18GB, which would mean writing the complete /var approx. twice per day, which surely does not happen. ;-)
However... you cannot write, say, 100 bytes on flash media; you have to write an entire 32 KiB block. That is, read it, modify the 100 bytes you need (in memory), and then write the 32 KiB block again. That makes for a much larger figure.
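As a back-of-the-envelope illustration of that point (the 32 KiB block size is Carlos's figure, not a universal constant):

```shell
# Changing 100 bytes still rewrites a whole 32 KiB block, so the write
# amplification for such a tiny update would be roughly:
awk 'BEGIN {
    payload = 100            # bytes actually modified
    block   = 32 * 1024      # bytes physically rewritten
    printf "amplification: %.0fx\n", block / payload
}'
```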
/boot is not a problem imho. So either www.crucial.com is wrong or smartctl. Or both. :-)
Or our interpretation. The firmware of the disk says "34" out of 100. It doesn't matter what it means exactly, but for practical purposes, replace the disk ASAP when it gets to 99, and have the replacement ready at 95 or earlier.

--
Cheers,
Carlos E. R. (from 12.3 x86_64 "Dartmouth" at Telcontar)
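To watch the value over time as Carlos suggests, the number can be scraped from smartctl's output; a sketch based on the column layout Markus posted (field positions may differ by drive and smartctl version; on a real system you would feed it `smartctl -l ssd /dev/sda`):

```shell
# Pull the normalized "Percentage Used" value out of `smartctl -l ssd`
# output on stdin, stripping the trailing '~' (normalized-value) marker.
endurance_pct() {
    awk '/Percentage Used Endurance Indicator/ { gsub(/~/, "", $4); print $4 }'
}

# Demonstrated on the line from the original post:
echo '7  0x008  1          34~  Percentage Used Endurance Indicator' | endurance_pct
```

Appending that to a dated log file from cron would make the wear trend visible.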
On Sun, Jul 7, 2013 at 1:42 PM, Carlos E. R. wrote:
On Sunday, 2013-07-07 at 19:26 +0200, MarkusGMX wrote:
On 07/07/13 04:00, Rajko wrote:
...
(rev 1) == 7 0x008 1 34~ Percentage Used Endurance
...
Hmm, 33.5GB per day cannot be right. The whole /var has only ~18GB, which would mean writing the complete /var approx. twice per day, which surely does not happen. ;-)
However... you cannot write, say, 100 bytes on flash media; you have to write an entire 32 KiB block. That is, read it, modify the 100 bytes you need (in memory), and then write the 32 KiB block again. That makes for a much larger figure.
Carlos, I don't know where your 32 KiB block came from. Seems too big to be a page and too small to be a erase block.
From a physical perspective you have to write an entire erase block (EB) at time. EBs these days are often 2MiB or even bigger and even 3 or 4 years ago were 128KiB or bigger.
In 10-year-old flash designs, that meant that to modify anything EB-size or smaller you did a read/modify/write (RMW) cycle of an entire EB. I suspect that is what you are describing. You need to give the SSD devs some credit; they realized that was a stupid design a long time ago.

From day 1, anything worthy of the name SSD put a mapping layer above the flash storage to allow smart data-tracking algorithms to avoid the RMW cycle as much as possible. What they do is track the EB's data at a page level. Let's say a page is 4KB. (The Linux kernel uses a 4KB page most of the time, and so does Windows, so it is the most logical thing for an SSD designer to do as well.)

Now the _firmware_ in the SSD controller requires writes to be a full page at a time. If you write a full page, then all the firmware does is put the 4KB in a small cache and invalidate the single page of data in the EB. Note that this is very fast, and no data write to the flash has even taken place yet. The firmware accumulates data in the cache until there is a full EB's worth, then it grabs an empty EB and writes out the cached contents. The firmware then updates the page mapping so it knows where to find that page when a read request comes in. Notice that there are basically no extraneous EB writes: every time an EB is written, it is with a brand new set of data, no RMW cycles at all.

If you write out less than a page's worth of data, then an RMW cycle has to take place, but the Linux kernel actually does that anyway. I.e. the Linux block layer works with pages: if you write out data that is smaller than a page, it will implement an RMW cycle of its own just to keep the data handling in the kernel easy. Thus the SSD only sees page-size reads and writes with normal filesystem I/O.

The end result is that EBs are now able to grow as big as makes sense to the chip designers, and the SSD controller implements the page-management mapping layer that keeps it all efficient. So that catches you up to 5+ year old designs.
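A toy version of the accounting Greg describes (the page and EB sizes are assumed example values):

```shell
# With 4 KiB pages cached until a 2 MiB erase block fills up, 512 page
# writes cost a single EB write -- no read/modify/write of the EB.
awk 'BEGIN {
    page = 4 * 1024
    eb   = 2 * 1024 * 1024
    pages_written = 1536                 # e.g. three EBs worth of pages
    printf "pages per EB: %d\n", eb / page
    printf "EB writes:    %d\n", pages_written * page / eb
}'
```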
The SSD devs said: I can be smarter than that, and I've got a little microprocessor handling the mapping anyway, so why don't I start adding compression, de-duplication, consolidation of partially used EBs (garbage collection), etc.

The reality is that modern SSDs have all of that and more. Thus, given the complexity of the mapping algorithm, one of the hardest things to do is "forensically wipe" an SSD. I tell my clients that in general it can't be done; they should use physical destruction instead. (For those about to propose an ATA Security Erase: that is implemented in the firmware, and several SSDs have had faulty implementations that left the data in place. Basically it can't be trusted as a general solution.)

Greg
On Sun, Jul 7, 2013 at 1:42 PM, Carlos E. R. <> wrote:
However... you cannot write, say, 100 bytes on flash media; you have to write an entire 32 KiB block. That is, read it, modify the 100 bytes you need (in memory), and then write the 32 KiB block again. That makes for a much larger figure.
Carlos,
I don't know where your 32 KiB block came from. Seems too big to be a page and too small to be a erase block.
From here:

https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey

Flash memory card design

"FAT optimization

Most portable flash media come preformatted with a FAT32 file system. This is not only done because there is support for this file system in all operating systems; it is actually a reasonably good choice for the media: the data on a FAT32 file system is always written in clusters of e.g. 32 KB, and the media are normally formatted with a cluster size matching the optimum write size, as well as aligning the clusters to the start of internal units, and the access patterns on a FAT32 file system are relatively predictable, alternating between data blocks, file allocation table (FAT) and directories."

If you go further down the article, to the "List of flash memory cards and their characteristics", you see several of them have a write size of 32K, some 64K, even 256K. Few have less than 32K. In the list there is a section for SSDs, and the write sizes are similar.
From a physical perspective you have to write an entire erase block (EB) at time. EBs these days are often 2MiB or even bigger and even 3 or 4 years ago were 128KiB or bigger.
In 10-year-old flash designs, that meant that to modify anything EB-size or smaller you did a read/modify/write (RMW) cycle of an entire EB. I suspect that is what you are describing.
You need to give the SSD devs some credit; they realized that was a stupid design a long time ago. ... ...
Ok, what I understand from what you post is that the firmware is clever and does what is needed internally, but it does in fact write a big block at a time to the flash memory.

--
Cheers,
Carlos E. R. (from 12.3 x86_64 "Dartmouth" at Telcontar)
On 07/07/13 19:42, Carlos E. R. wrote: [...]
So either www.crucial.com is wrong or smartctl. Or both. :-)
Or our interpretation.
The firmware of the disk says "34" out of 100. It doesn't matter what it means exactly, but for practical purposes, replace the disk ASAP when it gets to 99, and have the replacement ready at 95 or earlier.
I also checked this under Windows 7 with the tool "Crystal Disk Info" (an older version, 4.2.0), and it says that the SSD is ok at 100%, with "Power On Hours: 3465 hrs." and:

-- S.M.A.R.T. --------------------------------------------------------------
ID Cur Wor Thr RawValues(6) Attribute Name
01 100 100 _50 000000000000 Read Error Rate (raw)
05 100 100 _10 000000000000 Reallocated Sectors
09 100 100 __1 000000000D89 Power-On Hours
0C 100 100 __1 000000000261 Power Cycle Count
AA 100 100 _10 000000000000 Grown Bad Blocks
AB 100 100 __1 000000000000 Program Fail Count
AC 100 100 __1 000000000000 Erase Fail Count
AD 100 100 _10 000000000009 Wear Leveling
AE 100 100 __1 000000000003 Unexpected Power Loss
B5 100 100 __1 003400000033 Non-4k Aligned Access
B7 100 100 __1 000000000000 SATA Interface Downshift
B8 100 100 _50 000000000000 Unknown
BB 100 100 __1 000000000000 Reported Uncorrectable Errors
BC 100 100 __1 000000000000 Command Timeout
BD 100 100 __1 000000000071 Factory Bad Blocks
C2 100 100 __0 000000000000 Unknown
C3 100 100 __1 000000000000 Unknown
C4 100 100 __1 000000000000 Reallocation Events
C5 100 100 __1 000000000000 Current Pending Sectors
C6 100 100 __1 000000000000 Uncorrectable Errors after SMART Offline Scan
C7 100 100 __1 000000000000 UltraDMA CRC Errors
CA 100 100 __1 000000000000 Percentage of Projected Lifetime Used
CE 100 100 __1 000000000000 Write Error Rate

(attribute names translated from the German output)

BR
ME
participants (8)
- Billie Walsh
- Carlos E. R.
- Cristian Rodríguez
- Daniel Bauer
- Greg Freemyer
- MarkusGMX
- Patrick Shanahan
- Rajko