Re: The most non-destructive way to prepare an SSD for openSUSE
On 2021/05/16 18:57, -pj wrote:
I was recently informed that using dd (dubbed "data-destroyer") can seriously shorten an SSD device's lifespan.
dd is not a data-destroyer. It is a very useful tool in many circumstances. Of course, many tools can be misused -- say, sudo find / -type f -delete, or a DOS/Windows favourite: format C:. Not sure what happens with sudo cat /dev/zero >/dev/rootfs; it is handy to have the rootfs so readily identified... *cough*
1. What is the most effective non-destructive way to erase or prepare an SSD drive for an openSUSE (LEAP or TDE) installation?
---- Just put it in your machine and create a new partition table. No unnecessary writes.
I am wondering now about USB flash drive devices.
2. Does the use of "dd" on USB flash drives kill the limited lives on these devices in a similar manner also?
---- ???!!... Depends on what you do with "dd". Why would you need or want to use 'dd' unless you want to transfer a disk image. Usually you want to xfer files so 'dd' wouldn't even come into the picture.
3. Does the use of "dd" on a mechanical drive have less of a negative lifespan effect than its effects on SSD or USB devices?
--- Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media. Just don't do unnecessary R/Ws on media not designed for them (maybe some flash drives).

FWIW, I would tell you I've not had any flash drives/SSDs fail, but saying so might invite unnecessary risk due to some carelessness -- like removing the wrong data disk from a RAID5. I don't know where you got the idea that 'dd' was more dangerous than other I/O, but if you are using it to zero out or write other test patterns then you are doing unnecessary I/O -- and that's not the fault of 'dd'.

FWIW, dd with the right parameters will give you the fastest r/w speeds. I often use it to test my network R/W speeds with CIFS/SMB. I do frequent I/O speed testing, reading and writing on Win from/to two files in my Linux home directory. Writing to the devices avoids any physical disk-speed interactions, so I just get the protocol transport speed. It varies a lot based on how the ethernet connection is tuned.

Ishtar:law> /bin/ls -lgG ~/{null,zero}
crwxrwxrwx 1 1, 3 Dec 24 2018 /home/law/null
crwxrwxrwx 1 1, 5 Jun 15 2015 /home/law/zero

which is drive 'H:' on my Win machine:

/h> /bin/ls -lgG /h/{null,zero}
-rwxrwxrwx 1 0 Dec 24 2018 /h/null
-rwxrwxrwx 1 0 Jun 15 2015 /h/zero

/h> bin/iotest
Using bs=16.0M, count=64, iosize=1.0G
R:1073741824 bytes (1.0GB) copied, 2.27302 s, 451MB/s
W:1073741824 bytes (1.0GB) copied, 3.02209 s, 339MB/s
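(As a rough sketch of that kind of dd-based throughput test, assuming a Cygwin-style dd on the Windows side and the /h share, the 'zero' and 'null' files, and the 16M x 64 geometry shown above -- all names and sizes are illustrative:)

    # read test: pull 1 GiB from the share's 'zero' file, discard it locally
    dd if=/h/zero of=/dev/null bs=16M count=64
    # write test: push 1 GiB of zeros into the share's 'null' file
    dd if=/dev/zero of=/h/null bs=16M count=64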
On 5/18/21 8:55 AM, L A Walsh wrote:
2. Does the use of "dd" on USB flash drives kill the limited lives on these devices in a similar manner also?
???!!... Depends on what you do with "dd". Why would you need or want to use 'dd' unless you want to transfer a disk image. Usually you want to xfer files so 'dd' wouldn't even come into the picture.
Actually I use dd to overwrite areas at the end of disks that have been used in hardware RAID controllers. The RAID metadata lives up there and it confuses software mdraid if a drive is repurposed. Regards, Lew
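(A minimal sketch of that kind of end-of-disk wipe, assuming the RAID metadata sits in roughly the last 10 MiB; /dev/sdX and the 10 MiB figure are placeholders, so verify the target device and the metadata location before running anything like this:)

    # disk size in 512-byte sectors
    SECTORS=$(blockdev --getsz /dev/sdX)
    # zero the last 10 MiB (20480 sectors), where leftover RAID metadata may live
    dd if=/dev/zero of=/dev/sdX bs=512 seek=$((SECTORS - 20480)) count=20480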
On 18/05/2021 17.55, L A Walsh wrote:
On 2021/05/16 18:57, -pj wrote:
...
I am wondering now about USB flash drive devices.
2. Does the use of "dd" on USB flash drives kill the limited lives on these devices in a similar manner also?
???!!... Depends on what you do with "dd". Why would you need or want to use 'dd' unless you want to transfer a disk image. Usually you want to xfer files so 'dd' wouldn't even come into the picture.
3. Does the use of "dd" on a mechanical drive have less of a negative lifespan effect than its effects on SSD or USB devices?
--- Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is; suppose it is 16K. If you use the default size used by dd, which is 512, it would write to the same chunk 16K/512 times. I think the optimal size would be a multiple of that chunk size. -- Cheers / Saludos, Carlos E. R. (from oS Leap 15.1 x86_64 (Minas Tirith))
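(As an illustration of that suggestion, writing an image with a generously large block size might look like this; the image name, device name and the 4M figure are placeholders rather than values for any specific drive:)

    # write the image in 4 MiB chunks -- a multiple of any plausible flash chunk size --
    # and flush everything to the device before dd exits
    dd if=boot-disk.img of=/dev/sdX bs=4M conv=fsync status=progress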
On 18.05.21 19:54, Carlos E. R. wrote:
On 18/05/2021 17.55, L A Walsh wrote:
On 2021/05/16 18:57, -pj wrote:
...
I am wondering now about USB flash drive devices.
2. Does the use of "dd" on USB flash drives kill the limited lives on these devices in a similar manner also?
???!!... Depends on what you do with "dd". Why would you need or want to use 'dd' unless you want to transfer a disk image. Usually you want to xfer files so 'dd' wouldn't even come into the picture.
3. Does the use of "dd" on a mechanical drive have less of a negative lifespan effect than its effects on SSD or USB devices?
--- Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is; suppose it is 16K. If you use the default size used by dd, which is 512, it would write to the same chunk 16K/512 times.
Why should it? There is *one* write() request of 16K which will be turned into one WRITE command of 16K/512 blocks. So each block is written exactly once. Josef -- SUSE Software Solutions Germany GmbH Maxfeldstr. 5 90409 Nürnberg Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
On 19/05/2021 08.55, Josef Moellers wrote:
On 18.05.21 19:54, Carlos E. R. wrote:
On 18/05/2021 17.55, L A Walsh wrote:
Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is; suppose it is 16K. If you use the default size used by dd, which is 512, it would write to the same chunk 16K/512 times.
Why should it? There is *one* write() request of 16K which will be turned into one WRITE command of 16K/512 blocks. So each block is written exactly once.
If you do:

dd if=/dev/zero of=/dev/sdXY count=128

it seems to write in 512-byte chunks and is quite a bit slower than:

dd if=/dev/zero of=/dev/sdXY bs=16K count=4

-- Cheers / Saludos, Carlos E. R. (from oS Leap 15.1 x86_64 (Minas Tirith))
On 19.05.21 11:36, Carlos E. R. wrote:
On 19/05/2021 08.55, Josef Moellers wrote:
On 18.05.21 19:54, Carlos E. R. wrote:
On 18/05/2021 17.55, L A Walsh wrote:
Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is; suppose it is 16K. If you use the default size used by dd, which is 512, it would write to the same chunk 16K/512 times.
Why should it? There is *one* write() request of 16K which will be turned into one WRITE command of 16K/512 blocks. So each block is written exactly once.
If you do:
dd if=/dev/zero of=/dev/sdXY count=128
it seems to write in 512 bytes chunks and is quite slower than:
dd if=/dev/zero of=/dev/sdXY bs=16K count=4
It reads/writes in 512-byte chunks because that's the default:

bs=BYTES
    read and write up to BYTES bytes at a time (default: 512); overrides ibs and obs

The performance gain may be due to various factors; the most important to me is that a bs of 512 takes 32 system calls and 32 walks through the IO stack for every 16K, while bs=16K takes only one. strace has a "-T" option, with which you can see where the time is actually spent.

What's also to consider:
1) My very first dd took quite long; after that all went quite quickly.
2) Most of the write()s really go only to the buffer cache and not directly to the device. Only when the device is close()d will the system wait for the data to be actually written.

YMMV, obviously

Josef -- SUSE Software Solutions Germany GmbH Maxfeldstr. 5 90409 Nürnberg Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
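(A quick way to see the syscall difference for yourself, writing to /dev/null so no real device is touched; purely illustrative:)

    # 32 write() calls of 512 bytes versus a single 16K write(); -T shows the time spent in each syscall
    strace -T -e trace=write dd if=/dev/zero of=/dev/null bs=512 count=32
    strace -T -e trace=write dd if=/dev/zero of=/dev/null bs=16K count=1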
On 5/18/21 12:54 PM, Carlos E. R. wrote:
On 18/05/2021 17.55, L A Walsh wrote:
On 2021/05/16 18:57, -pj wrote: ...
I am wondering now about USB flash drive devices.
2. Does the use of "dd" on USB flash drives kill the limited lives on these devices in a similar manner also?
???!!... Depends on what you do with "dd". Why would you need or want to use 'dd' unless you want to transfer a disk image. Usually you want to xfer files so 'dd' wouldn't even come into the picture.
3. Does the use of "dd" on a mechanical drive have less of a negative lifespan effect than its effects on SSD or USB devices?
Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is; suppose it is 16K. If you use the default size used by dd, which is 512, it would write to the same chunk 16K/512 times.
I think the optimal size would be a multiple of that chunk size.
I would like to thank you all for the excellent suggestions, links and foresight into this topic. I have been unable to respond sooner... I have now been able to put together notes with the Kate text editor, piecing together and documenting links and thoughts about this. One particular comment (without me yet looking into the links in this "file" which has now been created) seems extremely interesting:

hdparm can do it.

--security-erase PWD
    Erase (locked) drive, using password PWD (DANGEROUS). Password is given as an ASCII string and is padded with NULs to reach 32 bytes. Use the special password NULL to represent an empty password. The applicable drive password is selected with the --user-master switch (default is "user" password). No other options are permitted on the command line with this one.

--security-erase-enhanced PWD
    Enhanced erase (locked) drive, using password PWD (DANGEROUS). Password is given as an ASCII string and is padded with NULs to reach 32 bytes. The applicable drive password is selected with the --user-master switch (default is "user" password). No other options are permitted on the command line with this one.

I have heard of this "hdparm" program before and never looked into it more deeply. So basically now I have a lot of leads here. Please feel free to suggest or leave your thoughts if anything more comes to mind. 😁

--Wishes
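(For context, the usual hdparm secure-erase sequence looks roughly like the sketch below. It is DANGEROUS and wipes the whole drive; the device name and the password "Eins" are placeholders, the drive must support the feature, and it must not be in the "frozen" state:)

    # 1. check that security erase is supported and the drive is not frozen
    hdparm -I /dev/sdX | grep -i -A10 security
    # 2. set a temporary user password (required before an erase can be issued)
    hdparm --user-master u --security-set-pass Eins /dev/sdX
    # 3. issue the erase; use --security-erase-enhanced instead if the drive supports it
    hdparm --user-master u --security-erase Eins /dev/sdX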
Have you looked into blkdiscard? It is like fstrim, but it works on entire block devices.

NOTE: technically this doesn't promise to wipe the contents, only to mark all of the blocks as not caring about the contents. Some SSDs promise that subsequent reads will return all zeros, some do so only lazily (that is, ordering/scheduling of the effect vs. read is not guaranteed), and some may make no such guarantee at all. It remains the least-effort way to prep an SSD for reformatting while producing the fewest writes.

As to hdparm, secure erase and the enhanced version aren't supported by all drives (there are ways to check; read the man page). I believe the enhanced version is supposed to clear/reset an encryption key, effectively scrambling the previous contents. I don't recall whether this guarantees the disc will show all zeros or not. It is probably, as with discard, implementation-defined.

Sent from myPhone.
On May 19, 2021, at 3:43 AM, -pj <pj.world@gmx.com> wrote:
On 5/18/21 12:54 PM, Carlos E. R. wrote:
On 18/05/2021 17.55, L A Walsh wrote: On 2021/05/16 18:57, -pj wrote: ...
I am wondering now about USB flash drive devices.
2. Does the use of "dd" on USB flash drives kill the limited lives on these devices in a similar manner also?
???!!... Depends on what you do with "dd". Why would you need or want to use 'dd' unless you want to transfer a disk image. Usually you want to xfer files so 'dd' wouldn't even come into the picture.
3. Does the use of "dd" on a mechanical drive have less of a negative lifespan effect than its effects on SSD or USB devices?
Um... less of a negative effect? Well, since magnetic media isn't known to be degraded much by multiple r/w ops, it would probably generate less wear. But if you use 'dd' to copy a boot-disk image that you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is; suppose it is 16K. If you use the default size used by dd, which is 512, it would write to the same chunk 16K/512 times.
I think the optimal size would be a multiple of that chunk size.
I would like to thank you all for the excellent suggestions, links and foresight into this topic. I have been unable to respond sooner... I have now been able to put together notes with Kate text editor piecing/documenting links and thoughts about this. One particular comment (without me looking into the links on this "file" which has now been created) the following seems extremely interesting:
hdparm can do it.
--security-erase PWD Erase (locked) drive, using password PWD (DANGEROUS). Password is given as an ASCII string and is padded with NULs to reach 32 bytes. Use the special password NULL to represent an empty password. The applicable drive password is selected with the --user-master switch (default is "user" password). No other options are permitted on the command line with this one.
--security-erase-enhanced PWD Enhanced erase (locked) drive, using password PWD (DANGEROUS). Password is given as an ASCII string and is padded with NULs to reach 32 bytes. The applicable drive password is selected with the --user-master switch (default is "user" password). No other options are permitted on the command line with this one.
I have heard of this "hdparm" program before and never looked into it more deeply. So basically now I have a lot of leads here.
Please feel free to suggest or leave your thoughts if anything more comes to mind. 😁
--Wishes
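(A minimal illustration of the blkdiscard approach described above; /dev/sdX is a placeholder and the command discards the entire device, so double-check the target first:)

    # mark every block on the device as discarded; -v reports the range that was discarded
    blkdiscard -v /dev/sdX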
On 5/19/21 3:04 AM, tabris@tabris.net wrote:
Have you looked into blkdiscard? It is like fstrim, but it works on entire block devices. NOTE: technically this doesn't promise to wipe the contents, only mark all of the blocks as not caring about the contents. Some SSDs promise that subsequent reads will return all zeros, some do so only lazily (that is, ordering/scheduling of the effect vs read is not guaranteed), some may make no such guarantee at all. It remains the least effort way to prep an SSD for reformatting while producing the least number of writes.
As to hdparm, secure erase and the enhanced version aren't supported by all drives (there are ways to check, read the man page). I believe the enhanced version is supposed to clear/reset an encryption key, effectively scrambling the previous contents. I don't recall if this guarantees the disc will show all zeros or no. Probably is, as with discard, implementation defined.
Sent from myPhone.
Thank you very much tabris for your input. I will note your comment on this. I have been unable to have much time at my machine this week. Being able to note this will certainly help. --Regards
On 2021/05/19 00:43, -pj wrote:
I would like to thank you all for the excellent suggestions, links and foresight into this topic. I have been unable to respond sooner... I have now been able to put together notes with Kate text editor piecing/documenting links and thoughts about this. One particular comment (without me looking into the links on this "file" which has now been created) the following seems extremely interesting:
hdparm can do it.
--- You realize you are not doing what your subject says... preparing for openSUSE. You don't need to do any special erase or preparation; just create a new partition table (for the entire device) and install openSUSE. Creating the new partition table "empties" the entire disk. With a filesystem like xfs (and probably most modern file systems), any disk sector you read in will be displayed as all 0's (whether it is or not). Showing the previous contents of a disk sector is a security risk, so most file systems will display nulls (0's) for any sectors that are not known to have been initialized by the file system. I'm only speaking for xfs, but I think most file systems these days do the same thing.
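(A sketch of that "just repartition" approach; /dev/sdX is a placeholder, and the wipefs step is optional but clears any leftover filesystem or RAID signatures first:)

    # optional: remove old filesystem, RAID and partition-table signatures
    wipefs -a /dev/sdX
    # write a fresh (empty) GPT partition table; the installer can then partition as it likes
    parted -s /dev/sdX mklabel gpt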
pj --

...and then -pj said...
% ...
% been created) the following seems extremely interesting:
%
% hdparm can do it.
%
% --security-erase PWD
...
%
% --security-erase-enhanced PWD
[snip]

You mentioned concern about reading and writing memory chips in relation to your quest to "prepare" for SuSE. In addition to the fact that extreme secure-level erasure of any data that may be present is not necessary, writing the multiple times required for such an erase causes MORE wear on the device and hastens its death as much as anything will.

Whipping up a fresh partition table is more than enough :-)

Have a great evening!

:-D
-- David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt
On 21/05/2021 00.46, David T-G wrote:
pj --
...and then -pj said...
% ...
% been created) the following seems extremely interesting:
%
% hdparm can do it.
%
% --security-erase PWD
...
%
% --security-erase-enhanced PWD
[snip]
You mentioned concern about reading and writing memory chips in relation to your quest to "prepare" for SuSE. In addition to the fact that extreme secure-level erasure of any data that may be present is not necessary, writing the multiple times required for such an erase causes MORE wear on the device and hastens its death as much as anything will.
Wait. The "security erase" feature mentioned above is not the "paranoid security erase" that some external applications do. It runs only once, and the most important detail is that it is done by the disk firmware, not the host computer CPU. Thus what it actually does depends on the disk manufacturer. The program simply tells the disk to please erase itself completely. What happens is up to the disk, and as far as I know it is fast. This is not what some tools out there claim to do to safe erase a disk by writing several times with zeroes or other patterns.
Whipping up a fresh partition table is more than enough :-)
And the RAID metadata, if it exists. -- Cheers / Saludos, Carlos E. R. (from oS Leap 15.1 x86_64 (Minas Tirith))
Carlos, et al --

...and then Carlos E. R. said...
%
% On 21/05/2021 00.46, David T-G wrote:
% > ...
% > secure-level erasure of any data that may be present is not necessary,
% > writing the multiple times required for such causes MORE wear on the
% > device and hastens its death as much as anything will.
%
% Wait. The "security erase" feature mentioned above is not the "paranoid
% security erase" that some external applications do. It runs only once,
% and the most important detail is that it is done by the disk firmware,
% not the host computer CPU. Thus what it actually does depends on the
% disk manufacturer.
[snip]

Oh! Thanks for the clarification; I didn't realize that. Although I am the type who runs a script to write zeroes (easy), ones (easy with a quick NOT), and randoms (alas, /dev/random takes a while) repeatedly for as long as the machine happens to stay up :-) before it gets taken away, it's good to know the difference.

HAND

:-D
-- David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt
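(A rough sketch of the multi-pass wipe loop David describes, purely as an illustration; /dev/sdX is a placeholder, /dev/urandom stands in for the slower /dev/random, and -- as discussed above -- this kind of repeated writing is exactly the unnecessary wear you want to avoid on flash media:)

    DEV=/dev/sdX                                            # placeholder target
    while true; do
        dd if=/dev/zero of="$DEV" bs=1M                     # pass 1: zeroes
        tr '\0' '\377' < /dev/zero | dd of="$DEV" bs=1M     # pass 2: ones (0xFF bytes)
        dd if=/dev/urandom of="$DEV" bs=1M                  # pass 3: pseudo-random data
    done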
participants (7)
- -pj
- Carlos E. R.
- David T-G
- Josef Moellers
- L A Walsh
- Lew Wolfgang
- tabris@tabris.net