[opensuse-factory] Adding SSDs?
Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?

Thanks, Tom
--
Kindness is a language which the deaf can hear and the blind can read. - Mark Twain
^^ --... ...-- / -.- --. --... -.-. ..-. -.-. ^^^^^
Tom Taylor - retired penguin - KG7CFC
AMD Phenom II x4 955 -- 4GB RAM -- 2x1.5TB sata2
openSUSE 13.1_RC2-x86_64, KDE 4.12.1, FF 25.0, claws-mail 3.9.2
registered linux user 263467
-- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Mon, 10 Mar 2014 23:05, Thomas Taylor
Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
As long as you respect the 'provisioning' that SSDs of today still need (keep ca. 20% of the raw chip capacity unpartitioned, see manufacturer specs) you will most likely get at least 3 years of nonstop use before the wear-levelling has eaten up that unpartitioned area.

Otherwise? Well, btrfs respects modern SSDs wrt clearing freed-up space, ext4 about the same, and I have yet to hear negatives on XFS on SSDs. More a matter of personal taste.

I'd say that since 12.3 openSUSE is ready for SSDs. Nothing special on factory so far.

- Yamaban.
I'm loving running Tumbleweed on my Samsung EVO 840 250GB SSD.
Super... fast
I found some of the information on this page useful -
https://sites.google.com/site/easylinuxtipsproject/ssd-in-opensuse
Although I did just add 'discard' to fstab.
Trent
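(For reference, the 'discard' option mentioned above goes in the options column of the fstab entry. A sketch only; the UUID and mount point below are placeholders, not values from this thread, and 'noatime' is a common companion tweak rather than something Trent stated he uses:)

```
# /etc/fstab -- 'discard' enables online TRIM for the filesystem
# (UUID and mount point are placeholders)
UUID=xxxx-xxxx  /  ext4  defaults,noatime,discard  0  1
```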
On 11 March 2014 09:20, Yamaban
On Mon, 10 Mar 2014 23:05, Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
As long as you respect the 'provisioning' that SSDs of today still need (keep ca. 20% of the raw chip capacity unpartitioned, see manufacturer specs) you will most likely get at least 3 years of nonstop use before the wear-levelling has eaten up that unpartitioned area.
Otherwise? Well, btrfs respects modern SSDs wrt clearing freed-up space, ext4 about the same, have yet to hear negatives on XFS on SSDs. More a matter of personal taste.
I'd say that since 12.3 OSS is ready for SSDs. Nothing special on factory so far.
- Yamaban.
On Mon, Mar 10, 2014 at 7:20 PM, Yamaban
Otherwise? Well, btrfs respects modern SSDs wrt clearing freed-up space, ext4 about the same, have yet to hear negatives on XFS on SSDs. More a matter of personal taste.
Don't you have to explicitly set the discard flag for it to work, at least in ext4?
On Tue, 11 Mar 2014 03:07, Claudio Freire
On Mon, Mar 10, 2014 at 7:20 PM, Yamaban
wrote: Otherwise? Well, btrfs respects modern SSDs wrt clearing freed-up space, ext4 about the same, have yet to hear negatives on XFS on SSDs. More a matter of personal taste.
Don't you have to explicitly set the discard flag for it to work, at least in ext4?
Sorry, forgot to mention that: I'm discarding via fstrim from my own entry in "/etc/cron.daily/zz_local"; see "fstrim --help". It boils down to a line with "ionice -c3 fstrim [mountpoint]" for every SSD partition.

The discard mount option causes too much lag (for me) in I/O during the delete op. Cron makes it easy to do it during the slow hours (night).

Cheers, Yamaban
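A sketch of what such a cron drop-in file could contain, for the archive (the mount points are placeholders; the exact list of fstrim lines depends on your own partitioning):

```
#!/bin/sh
# /etc/cron.daily/zz_local -- trim free space on each SSD-backed
# filesystem at idle I/O priority (mount points are placeholders)
ionice -c3 fstrim /
ionice -c3 fstrim /home
```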
On Monday, March 10, 2014 23:20:17 Yamaban wrote:
On Mon, 10 Mar 2014 23:05, Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
As long as you respect the 'provisioning' that SSDs of today still need (keep ca. 20% of the raw chip capacity unpartitioned, see manufacturer specs) you will most likely get at least 3 years of nonstop use before the wear-levelling has eaten up that unpartitioned area.
ca. 20% of total capacity is reserved by the firmware itself; there is no reason whatsoever to partition it differently than a regular HDD. If there is not much I/O going on, i.e. compiling, an SSD should survive for at least 5 years.
Otherwise? Well, btrfs respects modern SSDs wrt clearing freed-up space, ext4 about the same, have yet to hear negatives on XFS on SSDs. More a matter of personal taste.
I'd say that since 12.3 OSS is ready for SSDs. Nothing special on factory so far.
- Yamaban.
On Tue, Mar 11, 2014 at 5:03 AM, Jason wrote:
On Monday, March 10, 2014 23:20:17 Yamaban wrote:
On Mon, 10 Mar 2014 23:05, Thomas Taylor wrote:
Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
As long as you respect the 'provisioning' that SSDs of today still need (keep ca. 20% of the raw chip capacity unpartitioned, see manufacturer specs) you will most likely get at least 3 years of nonstop use before the wear-levelling has eaten up that unpartitioned area.
ca. 20% of total capacity is reserved by the firmware itself; there is no reason whatsoever to partition it differently than a regular HDD.
If there is not much I/O going on, i.e. compiling, an SSD should survive for at least 5 years.
If you're using a recent and good quality SSD, you do not need to manually overprovision partitions at all... the manufacturer already overprovisions and the firmware takes care of wear leveling. If you do the math, a decent SSD should actually outlive a mechanical drive (an SSD in typical use... i.e. not in a data center... should last in excess of 10 years before you start to have issues, and if it's low use, upwards of 50 years is possible... mathematically speaking).

A LOT of the information you find online about SSDs is horribly out of date and is/was applicable to 1st generation SSDs.

C.
--
openSUSE 13.1 x86_64, KDE 4.12
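The "do the math" part can be sketched as a back-of-the-envelope endurance estimate. A minimal sketch, assuming illustrative figures (a 70 TB-written endurance rating and ~20 GB/day of host writes are assumptions for the example, not specs from this thread):

```python
# Back-of-the-envelope SSD lifetime estimate. Both figures below are
# illustrative assumptions, not vendor specs from this thread.
endurance_tbw = 70.0      # drive endurance rating, terabytes written
daily_writes_gb = 20.0    # assumed desktop write volume per day

days = endurance_tbw * 1024 / daily_writes_gb
years = days / 365
print(round(years, 1))    # roughly a decade at this write rate
```

Lighter use stretches the same endurance budget proportionally, which is where the "upwards of 50 years, mathematically" claim comes from.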
On Mon, 10 Mar 2014 15:05:43 -0700
Thomas Taylor
Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks, Tom
Thanks to all for the responses! Will get the new SSD in a few days and am looking forward to much faster boot times.

Thanks, Tom
--
Kindness is a language which the deaf can hear and the blind can read. - Mark Twain
^^ --... ...-- / -.- --. --... -.-. ..-. -.-. ^^^^
Tom Taylor - retired penguin - KG7CFC
AMD Phenom II x4 955 -- 4GB RAM -- 2x1.5TB sata2
openSUSE 13.1_RC2-x86_64, KDE 4.12.1, FF 25.0, claws-mail 3.9.2
registered linux user 263467
On Tue, 11 Mar 2014 06:27, Thomas Taylor
On Mon, 10 Mar 2014 15:05:43 -0700 Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks to all for the responses! Will get the new SSD in a few days and am looking forward to much faster boot times.
Thanks, Tom
Please be aware that most of the performance boost will only start AFTER the kernel itself has been loaded. The kernel (+ initrd) are loaded by the BIOS / UEFI, and there the boost an SSD gives isn't that great a difference (< 100MB).

But hibernate, and resume after hibernate, is much nicer now with an SSD. (ca. 50% of RAM as transfer size, > 1GB)

- Yamaban.
On Tuesday, March 11, 2014 06:37:52 Yamaban wrote:
On Tue, 11 Mar 2014 06:27, Thomas Taylor
wrote: On Mon, 10 Mar 2014 15:05:43 -0700 Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks to all for the responses! Will get the new SSD in a few days and am looking forward to much faster boot times.
Thanks, Tom
Please be aware that most of the performance boost will only start AFTER the kernel itself has been loaded.
The kernel (+ initrd) are loaded by the BIOS / UEFI, and there the boost an SSD gives isn't that great a difference (< 100MB).
Seriously? Where are you pulling this from, I'd like to see.
But hibernate, and resume after hibernate, is much nicer now with an SSD. (ca. 50% of RAM as transfer size, > 1GB)
- Yamaban.
On Tue, 11 Mar 2014 07:04, Jason
On Tuesday, March 11, 2014 06:37:52 Yamaban wrote:
Please be aware, that most of the preformance boost will only start AFTER the kernel it self has been loaded.
The kernel (+ initrd) are loaded by the BIOS / UEFI and there the boost a SSD gives isn't that great difference (< 100MB)
Seriously? Where are you pulling this from, I'd like to see.
Q: Fit on history and some basic math?

Look up the original BIOS Boot-Loader routines (Hints: DOS, asm, int13h). Now look up what a recent BIOS does. Look up the UEFI Boot-Loader routines. See the differences?

Look into your /boot dir, add the size of your kernel and your initrd. Look at the specs of your disk, continuous transfer rate, take about 60% of that (some UEFI implementations are a little better and reach 80%). Divide the sum of kernel and initrd by the reduced transfer rate. Now you have the pure time it takes to load the kernel and the initrd into memory.

If you take an older laptop HDD (27 MB/s) and a newer SSD (150 MB/s) the difference looks big, but the size of the kernel+initrd is not so big that the difference will make more than one (1) second of boot time.

After the kernel is loaded, the BIOS / UEFI will give control to the kernel, which initializes the HW with its drivers. Here the kernel's own I/O routines come into play. Now the kernel loads the rest of the OS with its own (optimized) routines. And here most of the disk I/O of the boot / restore process happens. Now the higher transfer rates and near-marginal seek times of an SSD come fully into play.

My first PC was a i386 / 25MHz / 512kB-RAM / 40MB-ST506-HDD / VGA (640x480/256colors) in spring 1990. After some GW-Basic I learned Assembler. Somewhere I still have a complete BIOS in ASM laying around.

Prove yourself. Seek your own answers.

- Yamaban.

PS: Starting point for the uninformed: http://en.wikipedia.org/wiki/BIOS
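The recipe above works out to a few lines of arithmetic. A sketch using the thread's own example figures (4.5 MB kernel, 10.5 MB initrd, and a 60% firmware derating factor; the `load_time` helper is made up for illustration):

```python
# Estimated time for the firmware/boot loader to pull kernel + initrd
# into RAM, per the recipe above: sizes divided by a derated rate.
def load_time(kernel_mb, initrd_mb, rate_mbs, firmware_factor=0.6):
    # firmware_factor: fraction of the continuous transfer rate that
    # the generic BIOS/UEFI disk routines actually reach (60-80%)
    return (kernel_mb + initrd_mb) / (rate_mbs * firmware_factor)

hdd = load_time(4.5, 10.5, 27)     # old laptop HDD, ~27 MB/s
ssd = load_time(4.5, 10.5, 150)    # newer SSD, ~150 MB/s
print(round(hdd, 2), round(ssd, 2))
```

Even with the slow HDD the load takes well under a second, so the HDD-to-SSD difference at this stage is under a second too, which is the point being made.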
On 2014-03-11 08:17 (GMT+0100) Yamaban composed:
My first PC was a i386 / 25Mhz / 512kB-RAM / 40MB-ST506-HDD / VGA (640x480/256colors) in spring 1990.
Close to my first self-built. Whether in spring or January or February I can't recall. It might have been April, after tax day the 15th. I actually started with a 16MHz 386SX & 4MB RAM, but within mere weeks I sold it to my boss in order to upgrade to a 386DX-25 with 8MB @ about $42/MB. My 512KB Trident would do 16 color 800x600 on my 14" NEC Multisync to run Quattro Pro for DOS 30 rows by 132 columns.

Also I passed on IDE, instead going to an 80MB Seagate SCSI-II on an IN-2000 ISA HBA. The month after the Seagate's warranty expired it took to refusing to spin up without tapping on it to shake the heads loose from the platters. I mostly kept it running to avoid the spin up problem until a long power outage from March of 1993's no-name storm, after which no amount of coaxing would make it spin up any more. I bought a 100MB Quantum for $519 the month the spin up problem developed, so for a couple of years I had 180MB, with 3 partitions on each HD. When the Quantum finally died I got a 200MB Maxtor SCSI for $349, giving me a whopping 300MB total to last me until I got a Pentium 75, 32MB RAM and OS/2 to replace the aging DX, DesqView and DOS 5. :-)
--
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
Felix Miata wrote:
My first PC was a i386 / 25Mhz / 512kB-RAM / 40MB-ST506-HDD / VGA (640x480/256colors) in spring 1990.
Close to my first self-built.
My first computer was an IMSAI 8080 which I bought, in kit form, in Nov. 1976. I then got an XT clone in 1986, then a 386DX in 1991 and a few others since then. I currently have a system I bought a few years ago, with an AMD 64 bit CPU, and also an Edge 520 ThinkPad.

https://en.wikipedia.org/wiki/IMSAI_8080
Hi Yamaban, On Tuesday, March 11, 2014 08:17:47 Yamaban wrote:
On Tue, 11 Mar 2014 07:04, Jason
wrote: On Tuesday, March 11, 2014 06:37:52 Yamaban wrote:
Please be aware that most of the performance boost will only start AFTER the kernel itself has been loaded.
The kernel (+ initrd) are loaded by the BIOS / UEFI, and there the boost an SSD gives isn't that great a difference (< 100MB).
Seriously? Where are you pulling this from, I'd like to see.
Q: Fit on history and some basic math?
Look up the original BIOS Boot-Loader routines (Hints: DOS, asm, int13h)
Now look up what a recent BIOS does. Look up UEFI Boot-Loader routines.
See the differences?
Look into your /boot dir, add the size of your kernel and your initrd.
Look at the specs of your disk, continuous transfer rate, take about 60% of that (some UEFI implementations are a little better and reach 80%).
Insufficient data to perform calculation. Based on what should I take 60 or 80 percent?
Divide the sum of kernel and initrd by the reduced transfer rate.
Now you have the pure time it takes to load the kernel and the initrd into memory.
If you take an older laptop HDD (27 MB/s) and a newer SSD (150 MB/s) the difference looks big, but the size of the kernel+initrd is not so big that the difference will make more than one (1) second of boot time.
This is simply not true.
After the kernel is loaded, the BIOS / UEFI will give control to the kernel, which initializes the HW with its drivers. Here comes the kernels own IO routines to play.
Now the kernel loads the rest of the OS with its own (optimized) routines.
And here most of the Disk I/O of the Boot / Restore process happens.
No. BIOS/UEFI btl > secondary bootloader > Init > kernel > userspace.

BIOS/UEFI control is relinquished the moment it hands over to the secondary btl. The exception is some cases where thermal tables of the _hardware_ can be changed during run time by UEFI, and a few other bits, but nothing related to the actual boot.
Now the higher transfer-rates and near marginal seek times of a SSD comes fully into play.
My first PC was a i386 / 25Mhz / 512kB-RAM / 40MB-ST506-HDD / VGA (640x480/256colors) in spring 1990.
Mine was Amstrad PC 1512. What difference does it make?
After some GW-Basic I learned Assembler. Some where I still have a complete BIOS in ASM laying around.
Prove Yourself. Seek Your own answers.
- Yamaban.
PS: Starting point for the uninformed: http://en.wikipedia.org/wiki/BIOS
I appreciate you taking the time to follow up, and I'm sorry if I may have offended you previously. But what you're saying simply doesn't hold water, and it doesn't really matter who has what, as it doesn't pertain in any way to the subject. If we're going down to who is what, and what one has or had, to prove some point, we might as well leave it at let's agree to disagree :)

Kind regards, Jason
On 11 March 2014 11:20, Jason
Divide the sum of kernel and initrd by the reduced transferrate.
Now you have the pure time it takes to load the kernel and the initrd into memory.
If you take a older Laptop HDD (27 MB/s) and a newer SSD (150MB/s) the difference looks big, but the size of the kernel+initrd is not that big that the difference will make more than one (1) second of boot time.
This is simply not true.
What are you basing that assertion on? I don't have immediate access to an openSUSE machine to confirm, but on the Debian machine I've just checked, the entire contents of /boot comes to 25MB. Even assuming that all of it needs to be read, that will take less than a second, so there simply isn't a second to save. It therefore *must* be true.
On Tuesday, March 11, 2014 12:09:55 Aneurin Price wrote:
On 11 March 2014 11:20, Jason
wrote: Divide the sum of kernel and initrd by the reduced transferrate.
Now you have the pure time it takes to load the kernel and the initrd into memory.
If you take a older Laptop HDD (27 MB/s) and a newer SSD (150MB/s) the difference looks big, but the size of the kernel+initrd is not that big that the difference will make more than one (1) second of boot time.
This is simply not true.
What are you basing that assertion on? I don't have immediate access to an OS machine to confirm, but on the Debian machine I've just checked, the entire contents of /boot comes to 25MB. Even assuming that all of it needs to be read, that will take less than a second, so there simply isn't a second to save. It therefore *must* be true.
Yes, but /boot is in this case on the SSD.
On Tue, 11 Mar 2014 12:20, Jason
Hi Yamaban, <snip>
Look into your /boot dir, add the size of your kernel and your initrd.
Look at the specs of your disk, continous transfer-rate, take about 60% of that (some UEFI implemtations are a little better and reach 80%).
Insufficient data to perform calculation. Based on what should I take 60 or 80% percent?
Either your disc datasheet (manufacturer website -> disc model -> tech data) or by using "hdparm -t [/dev/your-disc]", e.g. /dev/sda; please see "man hdparm" beforehand.

"Continuous transfer rate" means transfers beyond the internal disc cache, e.g. more than 64MB. "hdparm -t" does that.

This transfer rate is measured with good drivers in a working OS. UEFI / BIOS use "barebones" essential generic drivers to access the disc and thus reach at max 60% (most BIOS, early UEFI) to 80% (best BIOS, some recent UEFI) of that maximum.
Divide the sum of kernel and initrd by the reduced transferrate.
Now you have the pure time it takes to load the kernel and the initrd into memory.
If you take a older Laptop HDD (27 MB/s) and a newer SSD (150MB/s) the difference looks big, but the size of the kernel+initrd is not that big that the difference will make more than one (1) second of boot time.
This is simply not true.
I've had a 3.5" HDD with 30 MB/s and now an SSD with 120 MB/s. The time from end-of-grub to begin-kernel-init didn't shrink to 25% (theoretical) but reached 30% (still good, I/O saturation).

At a kernel of 4.5 MB and an initrd of 10.5 MB that makes: 15 MB / 30 MB/s = 0.5 sec; 15 MB / 100 MB/s = 0.15 sec. "BIG" difference. This is the time it takes for the boot-loader (grub/grub2/lilo/gummiboot/syslinux/etc) to load the kernel and its initrd into memory. Then the boot-loader starts / executes the kernel.
After the kernel is loaded, the BIOS / UEFI will give control to the kernel, which initializes the HW with its drivers. Here comes the kernels own IO routines to play.
Now the kernel loads the rest of the OS with its own (optimized) routines.
And here most of the Disk I/O of the Boot / Restore process happens.
Restore after hibernate happens after the kernel initialises all the hardware; then the 'hibernate' image is loaded from the swap partition. In most cases the size of this image is about 1/4 to 1/2 of the size of the RAM. 4GB RAM => 1GB to 2GB to transfer from disc into memory. Sample for 1GB: 1000 MB / 30 MB/s = 33.33 sec; 1000 MB / 100 MB/s = 10 sec. Now, this is a really felt and experienced difference.

For a 'normal' / full boot the summed size of all transfers is about the same, but more spread out and mixed in between the start-up of all the different services needed. Still, the biggest 'win' an SSD brings in a full boot is the missing seek times due to having no moving parts.
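Worked out in code, the two transfer-time figures above are (using the thread's numbers, 1 GB image at HDD vs SSD rates):

```python
# Transfer time for a 1 GB hibernate image, per the figures above.
image_mb = 1000.0
for rate_mbs in (30.0, 100.0):     # HDD vs SSD rates from the thread
    print(f"{rate_mbs:.0f} MB/s -> {image_mb / rate_mbs:.2f} s")
```

At roughly 33 seconds versus 10 seconds, this is the part of the boot/resume path where the SSD's advantage is actually felt.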
No.
BIOS/UEFI btl> secondary bootloader> Init> kernel > userspace
Please correct this in your notes: BIOS/UEFI btl > secondary bootloader > (load kernel+initrd into mem) > start kernel > Init > daemons (userspace) > ready-to-use
BIOS/UEFI ctl is relinquished moment it hands over to secondary btl. Exception is in some cases where thermal tables of the _hardware_ can be changed during run time by UEFI and few other bits, but nothing related to actual boot.
This differs from model to model. For some models the BIOS / UEFI data is discarded, and later reloaded by the BIOS / UEFI kernel driver. For others the "secondary bootloader" has to hand through the data, or the kernel could not start correctly (later reload fails; mostly older models, or 'mobiles').

In the end what I'm trying to communicate is: the most speed-up an SSD vs a HDD gives is after the kernel and the initrd itself are loaded. And this is proven.

- Yamaban.
On Tuesday, March 11, 2014 13:27:25 Yamaban wrote:
On Tue, 11 Mar 2014 12:20, Jason
wrote: Hi Yamaban,
<snip>
Look into your /boot dir, add the size of your kernel and your initrd.
Look at the specs of your disk, continous transfer-rate, take about 60% of that (some UEFI implemtations are a little better and reach 80%).
Insufficient data to perform calculation. Based on what should I take 60 or 80% percent?
Either your disc datasheet (manufacturer website -> disc model -> tech data) or by using "hdparm -t [/dev/your-disc]", e.g. /dev/sda; please see "man hdparm" beforehand.
"Continuous transfer rate" means transfers beyond the internal disc cache, e.g. more than 64MB. "hdparm -t" does that.
This transfer rate is measured with good drivers in a working OS. UEFI / BIOS use "barebones" essential generic drivers to access the disc and thus reach at max 60% (most BIOS, early UEFI) to 80% (best BIOS, some recent UEFI) of that maximum.
The only thing I see here affecting the boot is which driver is in use, IDE, AHCI etc, but that would affect the whole of the system.
Divide the sum of kernel and initrd by the reduced transfer rate.
Now you have the pure time it takes to load the kernel and the initrd into memory.
If you take an older laptop HDD (27 MB/s) and a newer SSD (150 MB/s) the difference looks big, but the size of the kernel+initrd is not so big that the difference will make more than one (1) second of boot time.
This is simply not true.
I've had a 3.5" HDD with 30 MB/s and now an SSD with 120 MB/s. The time from end-of-grub to begin-kernel-init didn't shrink to 25% (theoretical) but reached 30% (still good, I/O saturation).
At a kernel of 4.5 MB and an initrd of 10.5 MB that makes:
15 MB / 30MB/s = 0.5 sec 15 MB / 100MB/s = 0.15 sec
"BIG" difference. This is the time it takes for the boot-loader (grub/grub2/lilo/gummiboot/syslinux/etc) to load the kernel and its initrd into memory.
Then the boot-loader starts / executes the kernel.
Ok, I think I see where the problem in this discussion is.
After the kernel is loaded, the BIOS / UEFI will give control to the kernel, which initializes the HW with its drivers. Here comes the kernels own IO routines to play.
Now the kernel loads the rest of the OS with its own (optimized) routines.
And here most of the Disk I/O of the Boot / Restore process happens.
Restore after hibernate happens after the kernel initialises all the hardware; then the 'hibernate' image is loaded from the swap partition. In most cases the size of this image is about 1/4 to 1/2 of the size of the RAM. 4GB RAM => 1GB to 2GB to transfer from disc into memory. Sample for 1GB:
1000MB / 30MB/s = 33.33 sec 1000MB / 100MB/s = 10 sec
Now, this is a really felt and experienced difference.
I have misunderstood.
For a 'normal' / full boot the summed size of all transfers is about the same, but more spread out and mixed in between the start-up of all the different services needed.
Still, the biggest 'win' an SSD brings in a full boot is the missing seek times due to having no moving parts.
No.
BIOS/UEFI btl> secondary bootloader> Init> kernel > userspace
Please correct this in Your notes:
BIOS/UEFI btl > secondary bootloader > (load kernel+initrd into mem) > start kernel > Init > daemons (userspace) > ready-to-use
Ok. I knew that but somehow misunderstood at some point.
BIOS/UEFI ctl is relinquished moment it hands over to secondary btl. Exception is in some cases where thermal tables of the _hardware_ can be changed during run time by UEFI and few other bits, but nothing related to actual boot.
This differs from model to model. For some models the BIOS / UEFI data is discarded, and later reloaded by the BIOS / UEFI kernel driver. For others the "secondary bootloader" has to hand through the data, or the kernel could not start correctly (later reload fails; mostly older models, or 'mobiles')
In the end what I'm trying to communicate is:
The most speed-up an SSD vs a HDD gives is after the kernel and the initrd itself are loaded. And this is proven.
Agreed, I took it as the actual full system boot somehow (up until userspace), not that you're talking about kernel unpacking/transfer to RAM. That said, what you pointed out is really negligible. I've been using SSDs for 5 years now and couldn't go back to an HDD at this point. The actual boot went from 45 sec to 15 sec to a usable desktop, depending on the machine. Hence my initial comment. I'm sorry for the confusion.

To extend, I might have approached this topic with you with prejudice that was based on the 20% non-partitioned space and the daily cron job for trimming :) So I'm sorry for that too.
- Yamaban.
Kind regards, Jason
Hi Tom, On Monday, March 10, 2014 22:27:52 Thomas Taylor wrote:
On Mon, 10 Mar 2014 15:05:43 -0700
Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks, Tom
Thanks to all for the responses! Will get the new SSD in a few days and am looking forward to much faster boot times.
Run them as you'd normally do, there's no need to complicate things. It's state-of-the-art technology and everything is basically done for you by the fw.

That said, you should read these few links[1] and balance it out basically. Alignment is what is most important when setting up the partitions, for life and performance. Other than that, ext4 mount flags and mindful use of high-I/O operations is enough.

[1] https://wiki.archlinux.org/index.php/Solid_State_Drives
    https://wiki.debian.org/SSDOptimization

I have units that are 5 yrs old now and still perform as new with few bad sectors (where that 20% of extra storage comes into play). Anecdotal, so take it with a grain of salt.
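The alignment point can be checked with a little arithmetic: a partition is conventionally well-aligned for SSDs when its start offset lands on a 1 MiB boundary. A minimal sketch, assuming 512-byte sectors (the sector values and the `is_aligned` helper are made-up examples):

```python
# A partition start is conventionally SSD-friendly when its byte
# offset falls on a 1 MiB boundary (512-byte sectors assumed here).
MIB = 1024 * 1024

def is_aligned(start_sector, sector_size=512):
    return (start_sector * sector_size) % MIB == 0

print(is_aligned(2048))   # common modern default start sector
print(is_aligned(63))     # old DOS-style start sector
```

Sector 2048 sits exactly at 1 MiB, while the old DOS-era start at sector 63 does not, which is why modern partitioners default to 2048.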
Thanks, Tom
Kind regards, Jason
On Tuesday, March 11, 2014 13:41:29 Jason wrote:
Hi Tom,
On Monday, March 10, 2014 22:27:52 Thomas Taylor wrote:
On Mon, 10 Mar 2014 15:05:43 -0700
Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks, Tom
Thanks to all for the responses! Will get the new SSD in a few days and am looking forward to much faster boot times.
Run them as you'd normally do, there's no need to complicate things. It's state of the art technology and everything is basically done for you by the fw.
That said, you should read these few links[1] and balance it out basically. Alignment is what is most important when setting up the partitions for life and performance. Other than that, ext4 mount flags and mindful use of high I/O operations is enough.
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
I have units that are 5 yrs old now and still perform as new with few bad sectors (where that 20% of extra storage comes into play). Anecdotal, so take it with a grain of salt.
Small correction: It isn't 20% but in the 5% ballpark.
Thanks, Tom
Kind regards, Jason
On Tue, 2014-03-11 at 13:41 +0800, Jason wrote:
Run them as you'd normally do, there's no need to complicate things. It's state of the art technology and everything is basically done for you by the fw.
That said, you should read these few links[1] and balance it out basically. Alignment is what is most important when setting up the partitions for life and performance. Other than that, ext4 mount flags and mindful use of high I/O operations is enough.
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
Has SSD quality improved that much?

A couple of years ago I replaced a normal HDD with a 30GB SSD, and installed the distro on it. However, the swap certainly killed the SSD.

Even reading the above links doesn't make me feel better, quoting:

"One can place a swap partition on an SSD. Most modern desktops with an excess of 2 Gigs of memory rarely use swap at all. The notable exception is systems which make use of the hibernate feature. The following is a recommended tweak for SSDs using a swap partition that will reduce the swappiness of the system thus avoiding writes to swap: # echo 1 > /proc/sys/vm/swappiness"

So what they write is: yes you can put swap on an SSD, but do not forget to disable swapping. Not very helpful.
On Wed, 12 Mar 2014 23:06, Hans Witvliet
Has ssd quality improved that much?
A couple of years ago I replaced a normal HDD with a 30GB SSD, and installed the distro on it. However, the swap certainly killed the SSD. <snip>
Well, let's dissect that. What makes an SSD?

1. The flash memory (mostly stacked dies per casing)
2. The controller (a microprocessor, some RAM, I/O amplifiers)

To 1., the flash: I would not talk about rising quality, but more about stable quality. Rising quality would mean: a.) a longer storage cycle without power (atm ca. 5-7 years), b.) more write cycles before failure (differs per type of flash, but not really rising in the last 5 years). Stable quality means: the failure rates are known, and can be taken into account by making (over-)provisions for the failures.

To 2., the controller: here great steps have been made, both in the controllers themselves and in the firmware they use. At the start of the SSDs, the controllers were little more than blown-up USB stick controllers with a SATA instead of a USB interface. Now the controllers are specially developed for the needs of an 'in the computer' storage device, with write-cycle spreading, silent failing-block exchange, and full SMART support.

Still, there are big differences in the SSDs on the market atm. Some are developed for the needs of mobile devices, others are optimised for superfast database use. The 'desktop / laptop' models are somewhere in the middle of that.

Failure in early model SSDs was more the rule than the exception. The quality before 2010 was not that good at all. 2010/2011 marks a change in that. The controllers made big leaps in that time, and have steadily improved since then. Similar with the flash itself, see above. With the big run on SSDs, more R&D was invested to fill the now better-known needs and requirements of flash for SSDs. Since about mid 2013 a stable phase has been reached, at least in terms of the technology itself.

Something similar happened to magnetic HDDs in 1998/1999 (GMR); still, in 2005/6, a change (PMR/TMR) made waves. We will see what happens.

- Yamaban.
Hi Hans, On Wednesday, March 12, 2014 23:06:09 Hans Witvliet wrote:
On Tue, 2014-03-11 at 13:41 +0800, Jason wrote:
Run them as you'd normally do, there's no need to complicate things. It's state of the art technology and everything is basically done for you by the fw.
That said, you should read these few links[1] and balance it out basically. Alignment is what is most important when setting up the partitions for life and performance. Other than that, ext4 mount flags and mindful use of high I/O operations is enough.
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
Has ssd quality improved that much?
Couple of years ago I replaced a normal HDD with a 30GB SSD, and installed the distro on it. However, the swap certainly killed the SSD.
What makes you think swap killed it, and what brand was it? Usually the way it fails tells you what happened. If you started to have lots of freezes, corrupted data, r/w errors, those are cells deteriorating. If it failed suddenly, it is most likely the controller itself that crapped out. Yamaban already gave a hardware review, but to add: you want to go with a name brand, same as with HDDs. It's anecdotal, but Intel's SSDs are reliable as long as the models are equipped with Intel's own controller. Models with (I think) the SandForce controller have a higher failure rate, but still nothing to worry about. Also, Samsung seems to be good these days; they probably learned their lesson the first time around. Toshiba is also good. Most SSDs use Toshiba's memory, so one shouldn't go wrong with it. At all costs avoid OEM solutions; you want a clear fw update path and proper support. Also, lower sizes, like 30GB or so, are much slower and usually used only as caching drives. You don't really want to go below 80GB.
Even reading the above links doesn't make me feel better. Quoting:
"One can place a swap partition on an SSD. Most modern desktops with an excess of 2 Gigs of memory rarely use swap at all. The notable exception is systems which make use of the hibernate feature. The following is a recommended tweak for SSDs using a swap partition that will reduce the swappiness of the system thus avoiding writes to swap: # echo 1 > /proc/sys/vm/swappiness"
So what they write is: yes, you can put swap on an SSD, but do not forget to disable swapping. Not very helpful.
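For what it's worth, the `echo` in that quoted tip only lasts until reboot; making it persistent goes through sysctl configuration. A minimal sketch (the value 1 is taken from the quote; whether you want swappiness that low is a separate question):

```shell
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/):
vm.swappiness = 1

# apply immediately without a reboot (as root):
#   sysctl -p
```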
As I said, pick a balanced approach; most of the writeups are overkill. In my case, starting from a max 4GB RAM limitation on laptops, swap is disabled on my machines, so I can't give you feedback on that, and I'm not adjusting anything from the system side beyond the initial setup: btrfs and the noop elevator. What I can tell you, though, is that I'm not treating them any differently than a regular drive, and I abuse the fact they're fast:) What I provided is all anecdotal evidence though. Kind regards, Jason
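For reference (this is not from the mail above, just the usual way a noop elevator is configured; 'sda' and the grub file location are assumptions about your setup):

```shell
# For all disks, persistently: add elevator=noop to the kernel command line,
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... elevator=noop"
# then regenerate the grub config.

# For one device, at runtime only (lost on reboot, as root):
#   echo noop > /sys/block/sda/queue/scheduler

# Verify -- the active scheduler is shown in brackets:
#   cat /sys/block/sda/queue/scheduler
```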
On Thu, Mar 13, 2014 at 4:37 PM, Jason wrote:
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
Has ssd quality improved that much?
[snip]
Also, Samsung seems to be good these days, they probably learned their lesson first time around. Toshiba is also good. Most of the SSDs use Toshiba's memory so one shouldn't go wrong with it.
I'm using Samsung 840 Pro 256GB SSDs, and they are really holding up nicely (great MTBF rating, high IOPS, high read/write rating). Very fast, and reliable. So far not a single failure or error report (in a laptop and two desktops) with normal/heavy use, but nothing approaching datacentre use. The two desktops are on 24x7 and have swap on the SSD (although the swap is never written to, because the desktops have 8 and 16GB RAM and never need swap). The only "tweak" to the SSD config is to add "discard" to the fstab lines.
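For anyone following along, the fstab tweak mentioned looks something like this (the device, mount point and remaining fields are placeholders for your own setup):

```shell
# /etc/fstab -- 'discard' enables realtime TRIM on file deletion
/dev/sda2   /   ext4   defaults,discard   1  1
```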
At all costs avoid OEM solutions, you want to have clear fw update path and proper support.
This is REALLY important in my experience. The OEM and no-name-brand/low cost budget SSDs that you can get aren't worth it. I've had those fail really fast... controllers stopped working, corrupted data etc.
As I said, pick out a balanced approach, most of the writeups are overkill.
And out of date, reflecting the facts as they were in 2008/2009.
What I can tell you though is that I'm not treating them any different than a regular drive and I abuse the fact they're fast:) What I provided is all anecdotal evidence though.
Same here. The SSD is treated as a normal drive in all systems I work with. I don't play conservative with read/writes... I install Linux distros regularly on the laptop (mainly testing, experimenting with oS Factory builds etc). Anecdotal evidence is the best kind... right? :-) C. -- openSUSE 13.1 x86_64, KDE 4.12
On Thursday, March 13, 2014 07:49:35 C wrote:
On Thu, Mar 13, 2014 at 4:37 PM, Jason wrote:
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
Has ssd quality improved that much?
[snip]
Also, Samsung seems to be good these days, they probably learned their lesson first time around. Toshiba is also good. Most of the SSDs use Toshiba's memory so one shouldn't go wrong with it.
I'm using Samsung 840 Pro 256GB SSDs, and they are really holding up nicely (great MTBF rating, high IOPS, high read/write rating). Very fast, and reliable. So far not a single failure or error report (in a laptop and two desktops) with normal/heavy use, but nothing approaching datacentre use. The two desktops are on 24x7, have swap on the SSD (although the swap is never written to because the desktops have 8 and 16GB RAM and never need swap). The only "tweaks" to the SSD config is to add "discard" to the fstab lines.
Thanks for letting me know; next time around I will include it. At the time I bought Intel drives, Samsung was in the middle of 'bad press': trim wasn't supported and there was no way to update the firmware. As for discard, a few people have told me in the past few days not to use it, and considering they know what they're talking about, I'm not questioning it, so I have personally removed it. Sensible flags for btrfs on SSD are noatime,autodefrag,compress=lzo, and that is coming from a person who works on btrfs.
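Put into fstab form, the flags listed above would look something like this (device and mount point are placeholders for your own setup):

```shell
# /etc/fstab -- btrfs root with the SSD-friendly flags mentioned above
/dev/sda2   /   btrfs   noatime,autodefrag,compress=lzo   0  0
```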
At all costs avoid OEM solutions, you want to have clear fw update path and proper support.
This is REALLY important in my experience. The OEM and no-name-brand/low cost budget SSDs that you can get aren't worth it. I've had those fail really fast... controllers stopped working, corrupted data etc.
As I said, pick out a balanced approach, most of the writeups are overkill.
And out of date, reflecting the facts as they were in 2008/2009.
Yes. But SSDs haven't really changed that much; the basics are still the same, and reliability went up and down and up again, depending on the time. I have to add here my 'rule of thumb' re: fw updates. I found it is best to update at least a month _after_ the latest release, and to follow up on every update without skipping. btw, these were the articles that 'sold' me on SSDs: http://www.anandtech.com/show/2738 http://www.anandtech.com/show/2829 Old, but still valid.
What I can tell you though is that I'm not treating them any different than a regular drive and I abuse the fact they're fast:) What I provided is all anecdotal evidence though.
Same here. The SSD is treated as a normal drive in all systems I work with. I don't play conservative with read/writes... I install Linux distros regularly on the laptop (mainly testing, experimenting with oS Factory builds etc).
Anecdotal evidence is the best kind... right? :-)
:)
C.
On Thu, Mar 13, 2014 at 9:26 PM, Jason
As for the discard, few people have told me in the past few days not to use it and considering they know what they're talking about I'm not questioning it so I have personally removed it.
The only reason I can think of that someone would suggest not using discard is that it can impact the speed of deleting content off the drive. https://patrick-nagel.net/blog/archives/337 Basically, whether discard is something you don't want depends on your use case for the data you're working with. For me, using "discard" has a negligible impact - I don't see the difference in my day-to-day use, and it's an easy solution vs setting up the cron job. Using discard... you need kernel 2.6.33 or higher and an SSD that supports trim (i.e. any modern drive should do this). Using fstrim in a cron job... is just as effective.
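Since "an SSD that supports trim" is a precondition, here is a quick way to check. On a real system you would run `lsblk --discard` (or `hdparm -I` and grep for TRIM); because that depends on the hardware present, the sketch below parses sample lsblk output (the values are made up) to show what to look for:

```shell
# Real-system commands:
#   lsblk --discard /dev/sda         # nonzero DISC-GRAN / DISC-MAX = TRIM supported
#   hdparm -I /dev/sda | grep TRIM
# Sample lsblk output, hypothetical values:
sample='NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda         0     512B       2G         0'
# Fields 3 and 4 are DISC-GRAN and DISC-MAX; "0B" in both means no TRIM.
verdict=$(echo "$sample" | awk 'NR == 2 { print (($3 != "0B" && $4 != "0B") ? $1 " supports TRIM" : $1 " does not support TRIM") }')
echo "$verdict"
```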
Sensible flags for btrfs on ssd are noatime,autodefrag,compress=lzo and that is coming from a person who works on btrfs.
There is disagreement about noatime... many suggest it's a lot better to use relatime. If you use noatime, you can negatively impact some applications that depend on the info - mutt is a prime example.
Yes. But SSDs haven't really changed that much, the basics are still the same and reliability went up and down and up again, depending on the time.
Ah, but they have. The firmware is vastly different, and the types and quantity of NAND chips in use have also changed (e.g. older MLC NAND vs newer TLC NAND). C.
C
On Thu, Mar 13, 2014 at 9:26 PM, Jason
wrote: As for the discard, few people have told me in the past few days not to use it and considering they know what they're talking about I'm not questioning it so I have personally removed it.
The only reason I can think of that someone would suggest not using discard is that it can impact the speed of deleting content off the drive. https://patrick-nagel.net/blog/archives/337
That blog post is directly on point for why I recommended fstrim over discard. As I put in an earlier email, until recently all SSDs implemented trim as a non-queue-able command, so every trim command triggered an internal drive cache flush. Per that blog post, that caused a 40x increase in the time to delete an unpacked kernel tree. I.e. it makes ext4 perform like the XFS of 3 years ago. Newer SSDs may have finally implemented trim as a queue-able command, and thus the cache flush is no longer done. For them discard should be a fine option, but I've seen no benchmarks. Unfortunately I don't know how to tell which way an SSD has the trim command implemented. Maybe do what the blog author did and test untarring and deleting a kernel tarball. Be sure to include the time for sync after rm -r. Greg -- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
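For anyone who wants to try the test described above, here is a scaled-down sketch. The sizes are tiny and illustrative; a real run would unpack a kernel tarball on the SSD and filesystem under test, once mounted with 'discard' and once without:

```shell
# Build a small file tree and a tarball of it in a scratch directory.
workdir=$(mktemp -d)
mkdir "$workdir/tree"
i=1
while [ "$i" -le 200 ]; do
    head -c 4096 /dev/zero > "$workdir/tree/f$i"
    i=$((i + 1))
done
tar -C "$workdir" -cf "$workdir/tree.tar" tree
# The number of interest: delete time *including* the sync, which is
# where synchronous per-file TRIMs show up on affected drives.
time sh -c "rm -rf '$workdir/tree' && sync"
remaining=$(ls "$workdir")   # only tree.tar should be left
rm -rf "$workdir"
echo "remaining after delete: $remaining"
```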
On Thu, Mar 13, 2014 at 1:10 PM, Greg Freemyer wrote:
C
wrote: On Thu, Mar 13, 2014 at 9:26 PM, Jason
wrote: As for the discard, few people have told me in the past few days not to use it and considering they know what they're talking about I'm not questioning it so I have personally removed it.
The only reason I can think of that someone would suggest not using discard is that it can impact the speed of deleting content off the drive. https://patrick-nagel.net/blog/archives/337
That blog post is directly on point for why I recommended fstrim over discard.
As I put in an earlier email, until recently all SSDs implemented trim as a non-queue-able command, so every trim command triggered a internal drive cache flush. Per that blog post that caused a 40x increase in time to delete an unpacked kernel tree. Ie. It makes ext4 perform like xfs of 3 years ago.
Newer SSDs may have finally implemented trim as a queue-able command and thus the cache flush is no longer done. For them discard should be a fine option, but I've seen no benchmarks.
Unfortunately I don't know how to know which way a SSD has the trim command implemented. Maybe do what the blog author did and test untar'ing and deleting a kernel tarball. Be sure to include the time for sync after rm -r.
I'm willing to time test the discard thing on my system (on the Samsung 840 Pro)... but it will have to wait for at least 3 weeks (travelling starting pretty much.. now). I haven't noticed any speed impact, but then again, I don't usually untar and then delete the kernel source (or do similar large tasks). C.
On 03/13/2014 01:10 PM, Greg Freemyer wrote:
That blog post is directly on point for why I recommended fstrim over discard.
BTW: Karel added an 'fstrim --all' option recently [0], which will go into v2.25 some day; it has been backported to v2.24.1 [1]. [0] http://git.kernel.org/cgit/utils/util-linux/util-linux.git/commit/?id=36c370... [1] http://marc.info/?l=util-linux-ng&m=139023224926976 Have a nice day, Berny
On Thursday, March 13, 2014 10:06:08 you wrote:
On Thu, Mar 13, 2014 at 9:26 PM, Jason
wrote: As for the discard, few people have told me in the past few days not to use it and considering they know what they're talking about I'm not questioning it so I have personally removed it.
The only reason I can think of that someone would suggest not using discard is that it can impact the speed of deleting content off the drive. https://patrick-nagel.net/blog/archives/337 Basically, it depends on your use case for the data you're working with if discard is something you don't want to use. For me, using "discard" has a negligible impact - I don't see the difference in my day-to-day use, and it's an easy solution vs setting up the cron job.
Using discard... you need kernel 2.6.33 or higher and an SSD that supports trim (ie any modern drive should do this).
Using fstrim on a cron job.. is just as effective.
Sensible flags for btrfs on ssd are noatime,autodefrag,compress=lzo and that is coming from a person who works on btrfs.
There is disagreement about noatime... many suggest it's a lot better to use relatime. If you use noatime, you can negatively impact some applications that depend on the info - mutt is a prime example.
Hmm, maybe I was too definitive. Wrt flags, this is on _my_ personal machines, relevant to their use cases. I do find writing access times unnecessary overhead, and it hasn't bitten me. (yet) But yes, relatime is a better solution in general. Thank you for expanding, it is useful info.
Yes. But SSDs haven't really changed that much, the basics are still the same and reliability went up and down and up again, depending on the time. Ah, but they have. The firmware is vastly different, and the types and quantity of NAND chips in use have also changed (e.g. older MLC NAND vs newer TLC NAND)
Not that _much_ different from the 2nd gen onward; the methodology is still the same, only the algorithms and operations have been honed further, so the articles do apply despite their age. Though I haven't dissected one, nor would I know how to:) As for TLC, I'm not _yet_ convinced. The finesse required to deal with so many levels (voltages) and the inherent latency is something I find hard to believe will _just work_ for, say, at least 5 years. There is a reason SLC is server grade and MLC is widely adopted for consumer drives; TLC is (currently) simply a method to drive the price down. Just IMHO.
C.
On Thu, Mar 13, 2014 at 11:25 PM, Jason
As for TLC, I'm not _yet_ convinced. The finesse required to deal with so many gates (voltages) and the inherent latency is something I find hard to believe it will _just work_ for say at least 5 years.
There is a reason SLC is server grade and MLC is widely adopted for consumer drives, TLC is (currently) simply a method to drive the price down. Just IMHO.
That's why I opted for the Samsung 840 Pro over the Evo. The Pro has MLC, and the Evo is TLC... or at least that's what I used to justify the slightly higher price (wasn't much more). Errr... this discussion has drifted quite far from Factory-related stuff. I'll stop here now. If we want to keep chattering about SSDs, we should probably take it to the main list... also it would be nice if there was some summary or update to the openSUSE info on SSDs to reflect the current situation... maybe that can be distilled from everyone's contributions here? C.
On Thursday, March 13, 2014 11:29:52 C wrote:
On Thu, Mar 13, 2014 at 11:25 PM, Jason
wrote: As for TLC, I'm not _yet_ convinced. The finesse required to deal with so many gates (voltages) and the inherent latency is something I find hard to believe it will _just work_ for say at least 5 years.
There is a reason SLC is server grade and MLC is widely adopted for consumer drives, TLC is (currently) simply a method to drive the price down. Just IMHO. That's why I opted for the Samsung 840 Pro over Evo. The Pro has MLC, and the Evo is TLC... or at least that's what I used to justify the slightly higher price (wasn't much more).
Errr... this discussion has drifted quite far from Factory-related stuff. I'll stop here now. If we want to keep chattering about SSDs, we should probably take it to the main list... also it would be nice if there was some summary or update to the openSUSE info on SSDs to reflect the current situation... maybe that can be distilled from everyone's contributions here?
True, interesting topic:) And lots of good info! AFAIK Greg mentioned he will be updating wiki pages, in case he's constrained I might do it as well. Is there any special authorization required for editing? Haven't done any edits there yet.
C.
Jason
On Thu, Mar 13, 2014 at 11:25 PM, Jason
wrote: As for TLC, I'm not _yet_ convinced. The finesse required to deal with so many gates (voltages) and the inherent latency is something I find hard to believe it will _just work_ for say at least 5 years.
There is a reason SLC is server grade and MLC is widely adopted for consumer drives, TLC is (currently) simply a method to drive the
On Thursday, March 13, 2014 11:29:52 C wrote: price
down. Just IMHO. That's why I opted for the Samsung 840 Pro over Evo. The Pro has MLC, and the Evo is TLC... or at least that's what I used to justify the slightly higher price (wasn't much more).
Errr... this discussion has drifted quite far from Factory related stuff. I'll stop here now. If we want to keep chattering about SSDs, we should prob take to to the main list... also it would be nice if there was some summary or update to the openSUSE info on SSDs to reflect the current situation... maybe that can be distilled from everyone's contributions here?
True, interesting topic:) And lots of good info! AFAIK Greg mentioned he will be updating wiki pages, in case he's constrained I might do it as well. Is there any special authorization required for editing? Haven't done any edits there yet.
No special authorization needed. The same account that lets you file bugzillas will let you edit most wiki pages. Your edits will be reviewed by someone on the wiki page before they are accepted. Please feel free to update the wiki page before me. As of 12 hours ago, I am swamped with my job for at least a week. Greg
On Thursday, March 13, 2014 07:44:21 Greg Freemyer wrote:
Jason
wrote: On Thursday, March 13, 2014 11:29:52 C wrote:
On Thu, Mar 13, 2014 at 11:25 PM, Jason
wrote: As for TLC, I'm not _yet_ convinced. The finesse required to deal
with so
many gates (voltages) and the inherent latency is something I find
hard
to believe it will _just work_ for say at least 5 years.
There is a reason SLC is server grade and MLC is widely adopted for consumer drives, TLC is (currently) simply a method to drive the
price
down. Just IMHO.
That's why I opted for the Samsung 840 Pro over Evo. The Pro has MLC, and the Evo is TLC... or at least that's what I used to justify the slightly higher price (wasn't much more).
Errr... this discussion has drifted quite far from Factory related stuff. I'll stop here now. If we want to keep chattering about
SSDs,
we should prob take to to the main list... also it would be nice if there was some summary or update to the openSUSE info on SSDs to reflect the current situation... maybe that can be distilled from everyone's contributions here?
True, interesting topic:) And lots of good info! AFAIK Greg mentioned he will be updating wiki pages, in case he's constrained I might do it as well. Is there any special authorization required for editing? Haven't done any edits there yet.
No special authorization needed. The same account that lets you file bugzillas will let you edit most wiki pages. Your edits will be reviewed by someone on the wiki page before they are accepted.
Please feel free to update the wiki page before me. As of 12 hours ago, I am swamped with my job for at least a week.
Ok. Will update status here when applicable. Let's keep in touch so there's no potential overlap.
Greg
On Thu, 2014-03-13 at 11:37 -0400, Jason wrote:
Hi Hans,
On Wednesday, March 12, 2014 23:06:09 Hans Witvliet wrote:
On Tue, 2014-03-11 at 13:41 +0800, Jason wrote:
Run them as you'd normally do, there's no need to complicate things. It's state of the art technology and everything is basically done for you by the fw.
That said, you should read these few links[1] and balance it out basically. Alignment is what is most important when setting up the partitions for life and performance. Other than that, ext4 mount flags and mindful use of high I/O operations is enough.
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
Has ssd quality improved that much?
Couple of years ago I replaced a normal HDD with a 30GB SSD, and installed the distro on it. However, the swap certainly killed the SSD.
What makes you think swap killed it, and what brand was it? Usually the way it fails tells you what happened. If you started to have lots of freezes, corrupted data, r/w errors, those are cells deteriorating. If it failed suddenly, it is most likely the controller itself that crapped out.
No, it wasn't that. Seen that behaviour on too many USB drives. The system used to have little mem (just 2GB). It started with read errors on the device, first once a week (I noticed but gave it little attention). Later on they became more and more frequent, until a daily log-rotate was filled up with them. I guess one of the last things I did was a system update, followed by a complete distro upgrade. After that failed, I tried a fresh (from ISO) install, but I could not put a filesystem on the disk anymore.... So, I guess with modern SSDs the best choice is to get the biggest you can, so that the wear can be spread among a far greater number of cells. (And I would still keep the "safety-lane" swap area on a dedicated traditional HDD.) btw, I just noticed that I paid more for my 30GB drive than I would have to pay now for a 1TB SSD drive. Time flies, just like money.
On Thursday, March 13, 2014 09:01:13 Hans Witvliet wrote:
On Thu, 2014-03-13 at 11:37 -0400, Jason wrote:
Hi Hans,
On Wednesday, March 12, 2014 23:06:09 Hans Witvliet wrote:
On Tue, 2014-03-11 at 13:41 +0800, Jason wrote:
Run them as you'd normally do, there's no need to complicate things. It's state of the art technology and everything is basically done for you by the fw.
That said, you should read these few links[1] and balance it out basically. Alignment is what is most important when setting up the partitions for life and performance. Other than that, ext4 mount flags and mindful use of high I/O operations is enough.
https://wiki.archlinux.org/index.php/Solid_State_Drives https://wiki.debian.org/SSDOptimization
Has ssd quality improved that much?
Couple of years ago I replaced a normal HDD with a 30GB SSD, and installed the distro on it. However, the swap certainly killed the SSD.
What makes you think swap killed it, and what brand was it? Usually the way it fails tells you what happened. If you started to have lots of freezes, corrupted data, r/w errors, those are cells deteriorating. If it failed suddenly, it is most likely the controller itself that crapped out. No, it wasn't that. Seen that behaviour on too many USB drives.
The system used to have little mem (just 2GB). It started with read errors on the device, first once a week (I noticed but gave it little attention). Later on they became more and more frequent, until a daily log-rotate was filled up with them. I guess one of the last things I did was a system update, followed by a complete distro upgrade. After that failed, I tried a fresh (from ISO) install, but I could not put a filesystem on the disk anymore....
Cells dying off. This is actually the most favourable failure mode. Intel drives will, for example, enter a read-only mode if that starts to happen, so you can clone the drive if needed.
So, I guess with modern SSDs the best choice is to get the biggest you can, so that the wear can be spread among a far greater number of cells. (And I would still keep the "safety-lane" swap area on a dedicated traditional HDD.)
Not necessarily the biggest, but at minimum 80GB if used as a main drive. If there is a lot of swapping going on, it defeats the purpose of an SSD. It really isn't an issue to have it on the SSD, but that is just my 2c.
btw, I just noticed that I paid more for my 30GB drive than I would have to pay now for a 1TB SSD drive. Time flies, just like money.
Indeed. The Intel X25-M was a kidney and then some:) That said, there is really no comparison with a 'classic' HDD as a main drive.
On Mon, Mar 10, 2014 at 6:05 PM, Thomas Taylor
Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks, Tom
The recommendations about using discard did not provide any background to let you make the right choice. Here's some background info that should help:

============

openSUSE has been ready for SSDs since 11.4, so no features from factory are needed.

This background wiki page is mostly from 2010. It would be great if knowledgeable people would update it, as it has gotten a little out of date:

https://en.opensuse.org/SDB:SSD_discard_(trim)_support

To the best of my knowledge it is still a good SSD "trim" primer, so please take a minute to read it and understand the trim technology.

Note that it says "As of 11.4, fstrim is part of the linux-util package and is the recommended choice for invoking trim for most users." The incorporation of fstrim into 11.4 was the point at which I claim openSUSE was fully SSD ready.

Also note that it is not stated on that page, but "trim" is an ignorable command: if the SSD chooses, for whatever reason, it can ignore trim commands. Thus trim is a highly unreliable command.

== ext4 only below, because I don't know the facts for XFS or BTRFS ==

Realtime discards issued by ext4 when files are deleted may be a performance boost for some drives, but that does not mean you should not also schedule batched discards on a regular basis. fstrim in particular will walk an ext4 filesystem and send out a trim command for all unallocated blocks in the filesystem. Calling fstrim nightly will ensure any ignored trims eventually get acted on by the SSD.

As a user you have two choices with ext4: add a discard arg to mount (via fstab), which will cause realtime trims, and/or call fstrim via cron on a regular basis.

It is only in the last year or so that "trim" is an asynchronous / queue-able command to some drives. For all others, it is a synchronous, non-queue-able command. For the majority of the installed SSDs, that means it is synchronous.
I don't know how to tell which way a given drive handles trim commands, so I would by default assume any drive I was installing was synchronous.

For SSDs that implement trim as a synchronous command: realtime discards by the filesystem driver mean every trim command causes a cache flush on the drive itself. This has proven to mean a loss of performance, not a gain. Thus for the majority of SSDs, batched discard via fstrim is highly preferred, as it is a schedulable event and only takes 30 seconds or so to complete in most circumstances.

For SSDs that implement trim as an asynchronous command: this means the drives have finally reached the maturity level assumed by the kernel filesystem devs 5 years ago when they wrote the ext4 realtime discard support. That means if you are lucky enough to own one of these drives, you can finally use the discard mount option without a loss of performance. I haven't seen any benchmarks of using discard as a mount option vs. calling fstrim routinely (e.g. daily). For me, I would continue to use fstrim, since it is a well-tested and known commodity, and I've yet to see a benchmark saying realtime discards improve performance over nightly fstrim calls. If I did decide to use the "discard" feature of ext4, I would still use a cron entry to call fstrim on a regular basis (e.g. daily).

===

Once you resolve the "trim" issue, this wiki page has a bunch of additional good info about other performance issues:

http://en.opensuse.org/SDB:SSD_performance

This page was mostly written in 2011, and the first one was from the 2010 era. Neither makes reference to the newer SSDs that implement trim as an asynchronous command, so neither recommends the use of realtime trims. Also, I don't think either talks about trim as an unreliable command. Again, it would be nice if someone would update them to cover some of what I wrote above.
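The "cron entry to call fstrim" mentioned above can be as simple as a drop-in script; a sketch (the paths and schedule are illustrative, fstrim needs root and a trim-capable drive):

```shell
#!/bin/sh
# /etc/cron.daily/fstrim -- batched TRIM of the root filesystem, once a day.
# Add further mount points (e.g. /home) as extra lines if needed.
/usr/sbin/fstrim -v / >> /var/log/fstrim.log 2>&1
```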
On Tuesday, March 11, 2014 08:39:05 Greg Freemyer wrote:
On Mon, Mar 10, 2014 at 6:05 PM, Thomas Taylor
wrote: Are SSDs (solid state drives) well supported in factory versions? Anybody have any hints or gotchas?
Thanks, Tom
The recommendations about using discard did not provide any background to let you make the right choice.
Here's some background info that should help: ============
openSUSE has been ready for SSDs since 11.4, so no features from factory are needed.
This background wiki page is mostly from 2010. It would be great if knowledgeable people would update it, as it has gotten a little out of date:
https://en.opensuse.org/SDB:SSD_discard_(trim)_support
To the best of my knowledge it is still a good SSD "trim" primer, so please take a minute to read it and understand the trim technology.
Note that it says "As of 11.4, fstrim is part of the linux-util package and is the recommended choice for invoking trim for most users." The incorporation of fstrim into 11.4 was the point I claim opensuse was fully SSD ready.
Also note that it is not stated on that page, but "trim" is an ignorable command by the SSD. If it chooses, for whatever reason, it can ignore trim commands. Thus trim is a highly unreliable command.
== ext4 only below, because I don't know the facts for XFS or BTRFS ==
Realtime discards issued by ext4 when files are deleted may be a performance boost for some drives, but that does not mean you should not also schedule batched discards on a regular basis. fstrim in particular will walk an ext4 filesystem and send out a trim command for all unallocated blocks in the filesystem. Calling fstrim nightly will ensure any ignored trims eventually get acted on by the SSD.
As a user you have two choices with ext4: add a discard arg to mount (via fstab), which will cause realtime trims, and/or call fstrim via cron on a regular basis.
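The two choices above can be sketched as config fragments (the UUID, mount point, and schedule are placeholders, not recommendations):

```shell
# Option 1: realtime discards -- an /etc/fstab entry with the
# "discard" mount option (UUID is a placeholder):
#   UUID=xxxx-xxxx  /  ext4  defaults,discard  0  1

# Option 2 (and/or): batched discards -- a root crontab entry that
# runs fstrim on / every night at 03:30:
#   30 3 * * *  /usr/sbin/fstrim /
```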
It is only in the last year or so that "trim" has become an asynchronous / queueable command on some drives. For all others it is a synchronous, non-queueable command, which means it is synchronous for the majority of installed SSDs. I don't know how to tell which way a given drive handles trim commands, so by default I would assume any drive I was installing was synchronous.
For SSDs that implement trim as a synchronous command:
- Realtime discards by the filesystem driver mean every trim command causes a cache flush on the drive itself. This has proven to mean a loss of performance, not a gain. Thus for the majority of SSDs, batched discard via fstrim is highly preferred, as it is a schedulable event and only takes 30 seconds or so to complete in most circumstances.
For SSDs that implement trim as an asynchronous command:
- This means the drives have finally reached the maturity level assumed by the kernel filesystem devs 5 years ago when they wrote the ext4 realtime discard support. That means if you are lucky enough to own one of these drives, you can finally use the discard mount option without a loss of performance. I haven't seen any benchmarks of using discard as a mount option vs. calling fstrim routinely (e.g. daily). For me, I would continue to use fstrim, since it is a well-tested and known commodity, and I've yet to see a benchmark saying realtime discards improve performance over nightly fstrim calls.
If I did decide to use the "discard" feature of ext4, I would still use a cron entry to call fstrim on a regular basis (e.g. daily).
=== Once you resolve the "trim" issue, this wiki page has a bunch of additional good info about other performance issues.
http://en.opensuse.org/SDB:SSD_performance
This page was mostly written in 2011 and the first one was from the 2010 era. Neither makes reference to the newer SSDs that implement trim as an asynchronous command, so neither recommends the use of realtime trims. Also, I don't think either talks about trim being an unreliable command.
Again, it would be nice if someone would update them to cover some of what I wrote above.
Hey, not the OP, but thanks for the info! AFAIK btrfs has an SSD mode, but that only applies to how it writes the data; fstrim is still applicable, though I wouldn't recommend it for a daily cron. Weekly should suffice if you're not running high-performance databases off of it. Someone feel free to correct me :P
Tangentially related: during 12.3 installation I couldn't format / as btrfs, so I went with ext4. I know of the tool btrfs-convert, but is it applicable for the root system partition? Or does someone have some input on how to do it without reformatting/reinstalling?
Kind regards, Jason
On Tue, Mar 11, 2014 at 9:11 AM, Jason
Hey, not the OP, but thanks for the info!
AFAIK btrfs has SSD mode but that only applies to how it writes the data, fstrim is still applicable though I wouldn't recommend it for daily chron.
fstrim is a fairly lightweight process that adds no wear and tear to the SSD. It does impact everything else trying to write to the disk at the same time, but for a home PC/laptop it should be easy enough to schedule fstrim during downtime. (For laptops, maybe it could be called at either hibernate or wake-up time.)
Since fstrim does not know what the SSD's view of any of the data blocks/pages is, all it does is:
- Walk the filesystem allocation bitmap and send trim commands to the SSD for all unallocated block ranges.
- The SSD then evaluates each range: if it is already tagged as trimmed, there is nothing to do; if it is currently tagged as in use, it tags it stale/unallocated. At that point, the garbage collector can start to do its thing.
Since all that is happening is that flags are being updated for the benefit of the garbage collector, it is extremely fast, and it really doesn't add any wear and tear to run fstrim nightly.
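One way to schedule that downtime run on a systemd-based install is a timer unit. This is only a sketch (the unit names and paths are assumptions, not files shipped by openSUSE at the time of this thread):

```shell
# /etc/systemd/system/fstrim.service (sketch)
#   [Unit]
#   Description=Batched discard of unused filesystem blocks
#   [Service]
#   Type=oneshot
#   ExecStart=/usr/sbin/fstrim /

# /etc/systemd/system/fstrim.timer (sketch)
#   [Unit]
#   Description=Nightly fstrim run
#   [Timer]
#   OnCalendar=daily
#   [Install]
#   WantedBy=timers.target

# Enable with:
#   systemctl enable fstrim.timer && systemctl start fstrim.timer
```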
Weekly should suffice if you're not running hp databases off of it.
Database writes to existing tables replace the valid data in general; they don't transition any filesystem data blocks/pages from allocated to unallocated. Therefore fstrim would have no effect at all on an SSD hosting only rapidly updated database tables.
Note that the standard SSD garbage collection and wear-levelling gets a heavy workout from a heavy database update load. It is just the trim functionality that adds little or no value. The exception is if the database is creating and deleting tables at a high rate and each table is stored in its own file. Then the table/file deletes need to eventually trigger a trim.
Greg
On Tue, Mar 11, 2014 at 2:18 PM, Greg Freemyer
- The SSD then evaluates each range, if it is already tagged as trimmed, there is nothing to do. If it is currently tagged as in use, it tags it stale/unallocated. At that point, the garbage collector can start to do it's thing.
Since all that is happening is flags are being updated for the benefit of the garbage collector, it is extremely fast and it really doesn't add any wear and tear to run fstrim nightly.
It in fact prevents wear, since it lets the firmware spread the writes better. That's the whole reason I always include the discard option whenever dealing with flash-like storage.
Hi Greg, On Tuesday, March 11, 2014 13:18:24 Greg Freemyer wrote:
On Tue, Mar 11, 2014 at 9:11 AM, Jason
wrote: Hey, not the OP, but thanks for the info!
AFAIK btrfs has SSD mode but that only applies to how it writes the data, fstrim is still applicable though I wouldn't recommend it for daily chron.
fstrim is a fairly lightweight process that adds no wear and tear to the SSD. It does impact everything else trying to write to the disk at the same time, but for a home PC/laptop it should be easy enough to schedule fstrim during downtime. (For laptops, maybe it could be called at either hibernate or wake-up time.)
Since fstrim does not know what the SSDs view of any of the data blocks/pages is all it does is:
- Walk the filesystem allocation bitmap and send trim commands to the SSD for all unallocated block ranges.
- The SSD then evaluates each range, if it is already tagged as trimmed, there is nothing to do. If it is currently tagged as in use, it tags it stale/unallocated. At that point, the garbage collector can start to do it's thing.
Since all that is happening is flags are being updated for the benefit of the garbage collector, it is extremely fast and it really doesn't add any wear and tear to run fstrim nightly.
Weekly should suffice if you're not running hp databases off of it.
Database writes to existing tables replace the valid data in general, they don't transition any filesystem data blocks/pages from allocated to unallocated. Therefore fstrim would have no effect at all on SSD hosting only rapidly updated database tables.
Note that the standard SSD garbage collection and wear-levelling gets a heavy workout from a heavy database update load. It is just the trim functionality that adds little or no value. The exception is if the database is creating and deleting tables at a high-rate and each table is stored in its own file. Then the table/file deletes need to eventually trigger a trim.
Thank you for your insight, it's very valuable and thorough! btw, your mails might as well be copy/pasted to the wiki:)
Greg
Kind regards, Jason
--
Greg Freemyer
On Thu, Mar 13, 2014 at 1:11 AM, Jason
Hi Greg,
On Tuesday, March 11, 2014 13:18:24 Greg Freemyer wrote: <snip>
btw, your mails might as well be copy/pasted to the wiki:)
I wrote most of the discard/trim wiki page I pointed out. I guess it is time for me to update it. Things have changed some in the last 4 years. Greg
On Wed, Mar 12, 2014 at 2:20 PM, Greg Freemyer
On Thu, Mar 13, 2014 at 1:11 AM, Jason
wrote: Hi Greg,
On Tuesday, March 11, 2014 13:18:24 Greg Freemyer wrote: <snip>
btw, your mails might as well be copy/pasted to the wiki:)
I wrote most of discard / trim wiki page I pointed out. I guess it is time for me to update it. Things have changed some in the last 4 years.
Would it be sensible to add fstrimming to cron.d, since it's this stable and recommendable?
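For reference, such a distribution-shipped entry might look like this hypothetical /etc/cron.d fragment (the schedule, path, and filename are illustrative only; note that cron.d entries, unlike crontab entries, need a user field):

```shell
# /etc/cron.d/fstrim (hypothetical) -- weekly batched discard of /
# as root, Sunday at 03:30:
#   30 3 * * 0  root  /usr/sbin/fstrim / >/dev/null 2>&1
```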
On Wed, 12 Mar 2014 18:33, Claudio Freire
On Wed, Mar 12, 2014 at 2:20 PM, Greg Freemyer
wrote: On Thu, Mar 13, 2014 at 1:11 AM, Jason
wrote: On Tuesday, March 11, 2014 13:18:24 Greg Freemyer wrote: <snip>
btw, your mails might as well be copy/pasted to the wiki:)
I wrote most of discard / trim wiki page I pointed out. I guess it is time for me to update it. Things have changed some in the last 4 years.
Would it be sensible to add fstrimming to cron.d since it's this stable and recommendable ?
Out of curiosity: what will happen if one does use "fstrim" on a rotating-disc type HDD? Before fstrimming gets added to cron.d, that should be cleared up. If it's not already sorted and harmless, either fstrim should get an SSD-only test, or some other safeguard should be added. Otherwise, thumbs up for Greg and his good work. - Yamaban.
On Wed, Mar 12, 2014 at 2:50 PM, Yamaban
On Wed, 12 Mar 2014 18:33, Claudio Freire
wrote: On Wed, Mar 12, 2014 at 2:20 PM, Greg Freemyer
wrote: On Thu, Mar 13, 2014 at 1:11 AM, Jason
wrote: On Tuesday, March 11, 2014 13:18:24 Greg Freemyer wrote:
<snip>
btw, your mails might as well be copy/pasted to the wiki:)
I wrote most of discard / trim wiki page I pointed out. I guess it is time for me to update it. Things have changed some in the last 4 years.
Would it be sensible to add fstrimming to cron.d since it's this stable and recommendable ?
Out of curiosity: what will happen if one does use "fstrim" on a rotating disc type HDD ??
This: fstrim: /srv: FITRIM ioctl failed: Operation not supported
Though, the real danger would be bad firmware. If fstrim were run against firmware with trim-related bugs, that could do something bad. I don't know what the status of firmware is in this respect.
On Wednesday, March 12, 2014 18:50:05 Yamaban wrote:
On Wed, 12 Mar 2014 18:33, Claudio Freire
wrote: On Wed, Mar 12, 2014 at 2:20 PM, Greg Freemyer
wrote: On Thu, Mar 13, 2014 at 1:11 AM, Jason
wrote: On Tuesday, March 11, 2014 13:18:24 Greg Freemyer wrote: <snip>
btw, your mails might as well be copy/pasted to the wiki:)
I wrote most of discard / trim wiki page I pointed out. I guess it is time for me to update it. Things have changed some in the last 4 years.
Would it be sensible to add fstrimming to cron.d since it's this stable and recommendable ?
Out of curiosity: what will happen if one does use "fstrim" on a rotating disc type HDD ??
Before fstrimming gets added to cron.d, that should be cleared.
If not already sorted and harmless, either fstrim should get a test for SSD only, or other securing should be done.
Other wise, thumbs up for Greg and his good work.
- Yamaban.
Query /sys/block/xxx/queue/rotational?
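A minimal sketch of that check, assuming the standard sysfs layout (the attribute is named `rotational`: 1 for a spinning disk, 0 for SSD/flash; the `SYSFS` variable and the helper name are my own, introduced only so the logic can be exercised against a fake sysfs tree):

```shell
#!/bin/sh
# Root of the sysfs tree; overridable so the helper can be tested
# against a fake directory layout.
SYSFS=${SYSFS:-/sys}

# is_rotational DEV -- succeed (exit 0) if block device DEV is a
# rotating disk, fail (exit 1) if it is non-rotational, exit 2 if
# the attribute cannot be read.
is_rotational() {
    # $1 = device name without /dev/, e.g. "sda"
    flag=$(cat "$SYSFS/block/$1/queue/rotational" 2>/dev/null) || return 2
    [ "$flag" = "1" ]
}

# Usage sketch: only trim when the backing device is non-rotational.
#   is_rotational sda || /usr/sbin/fstrim /
```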
participants (13)
- Aneurin Price
- Bernhard Voelker
- C
- Claudio Freire
- Felix Miata
- Greg Freemyer
- Hans Witvliet
- James Knott
- Jason
- Thomas Taylor
- Thomas Taylor
- Trent Hawkins
- Yamaban