[opensuse] Migrating from HDD to SSD
Hi all. A question for those with a little more experience with SSDs. I want to move oS 13.1 from conventional HDDs to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt). I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first HDD (currently /dev/sda) and expect the system to boot? Would it be better to dd the boot sector, then create the new partitions and rsync them across? Yes, I will need to adjust fstab to point to the relocated partitions, but I'd rather not do a full re-install if I don't need to.

Thanks in advance, & Happy New Year.

Rodney.
--
==============================================================
Rodney Baker VK5ZTV rodney.baker@iinet.net.au
==============================================================
On 12/30/2014 10:25 PM, Rodney Baker wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Ultimately migrating is just ... migrating. That this is HD->SSD is no different from migrating from HD(IDE) to HD(SATA), for example. What you do need to consider is your underlying strategy.

First, the system as a whole (distribution plus either KDE or Gnome, /boot, ROOT, /usr, /usr/share, which I have partitioned off, /usr/local, /opt) can all fit in less than 20G. Yes, you can put all of that on the SSD. But what else? How much parallelism do you want?

A lot gets back to your application set. I do a fair bit of photo processing, and in my case there is little advantage to having /boot, root, or any binaries on ultrafast storage. Once the program is loaded, that's it. Neither do I need a fast ~/Photographs, since I only ever load and save and speed isn't that critical there. But I do need huge buffers and scratchpad, and if I direct them to /tmp and put /tmp on fast storage, since I don't have the RAM for a tmpfs ... Oh wait, if I make the swap fast storage and use /tmp as tmpfs overflowing onto swap ... and paging to swap ... and swap is SSD. You see what I mean about strategy?

But then again, some people run a web site, some people run a database-backed web site, and others run a database server or network file server ... Some might want to put all of /srv and/or /var on the SSD. What you put on the SSD depends on your context, 'cos Context is Everything.

To me, the idea of putting all of /boot on fast storage, storage I could instead use to accelerate an application that I spend most of my time in, seems like an ${EXPLETIVE} waste. I boot only when I have new hardware or a new kernel (or an extended power failure). I can make another good strategy case for having /mnt/ssd/{bin,lib}, setting PATH and LD_LIBRARY_PATH (and of course LD_PRELOAD), and moving over the binaries applicable to the particular application set you are using that day. I have a 20G /tmp on the hard drive, and think of using an overlay tmpfs over that when I run some applications. Focus, Focus, Focus.

Of course if you have a big enough SSD all this is much softened. As I say, with your 120G SSD the basic system isn't going to be more than 20G, leaving you 100G to play with. 10G swap, 10G /tmp. Who needs a 120G device? Oh right, it's a web server or SAMBA master. But even so, stop and think about WHERE you will need the speed.

* How often do you reboot your system?
* How often do you log out and log in?
* What applications do you run that consume /tmp or swap?

It may be that having an SSD will not speed up your work and workflow. For me and the applications I run, moving from a 2-core CPU to a 4-core CPU produced a dramatic speedup, because the application I use for photo processing has code that can make use of that. Right now, more memory would be the cheapest possible upgrade for speed.

However, I'm more concerned about backup and archiving. I know the cloud is the right answer to part of that, but my cable provider has an odd sales plan: to get more total bytes per month I have to upgrade speed, which I don't need, and that means upgrading equipment etc., at their price sheet. So while a larger-capacity SSD might be a nice toy to play with, it doesn't fit in with my workflow and my strategy. Until ...

http://ark.intel.com/products/82930
http://www.techpowerup.com/176640/eurocom-ships-first-8-core-xeon-e5-2690-ba...

Or 12 core:

http://www.notebookcheck.net/Eurocom-s-Panther-5SE-is-world-s-first-12-core-...
<quote> Now this powerful heavyweight laptop becomes the world's first one to be equipped with the 12-cores/24-threads Intel Xeon E5-2697 v2 processor designed specifically for mobile servers. The Eurocom Panther 5SE can be configured as needed and the top processors available are the Intel Xeon E5-2695 v2 or E5-2697 v2. In addition to these, the Panther 5SE offers support for the entire E5-2600 v2 family, from E5-2620 v2 to E5-2697 v2. The quad channel memory setups can go up to 32 GB of DDR3 1600 MHz, while storage available on this mobile server can reach 6 TB (four internal 1.5 TB drives) with RAID 0,1,5,10 capability. </quote>

Dream on, Anton!
--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
On Wed, Dec 31, 2014 at 12:09 AM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 12/30/2014 10:25 PM, Rodney Baker wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Ultimately migrating is just ... Migrating. That this is HD->SSD is no different from migrating from HD(IDE) to HD(SATA) for example.
False, false, false.
==
HDs don't treat unallocated space specially; SSDs do. dd doesn't treat unallocated space specially; rsync ignores it. Thus in general dd is not a great tool to use when initializing SSDs.
==
Thus, using dd to go from a HD to another HD is great. Neither one pays any attention to the allocated / unallocated bifurcation.
==
Any time the destination is a SSD, I would use rsync so that only allocated data is copied to it.
==
It "could be" that running dd to copy all the data over, followed by running fstrim to inform the SSD about where all the unallocated space is, will work out well. On the other hand, modern mkfs will inform the SSD that all the space in the filesystem is unallocated. Then rsync will only allocate the space needed. Thus mkfs followed by rsync is pretty much guaranteed not to over-allocate space on the SSD, but using dd depends on fstrim doing its job well.
==
Even with the mkfs / rsync process, pay attention to "discard": "mkfs discard" for ext3/ext4 based filesystems. It should be default, but I would pass it in just to be sure. For XFS, mkfs defaults to using discard at mkfs time. You have to pass in -K to prevent it from happening.
==
Then for rsync, you want to ensure sparse files on the HDD stay sparse files on the SSD. You use the --sparse argument for that. If you screw that up, I don't know of any routine maintenance tool that will search a file for sections of nulls and convert them to sparse. You need to get it right the first time.

Greg
--
Greg Freemyer
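A minimal sketch of the mkfs-plus-rsync approach described above; the device names, label and mount point are illustrative assumptions, not details from the thread:

    # New SSD root partition assumed to be /dev/sdb2, old root mounted at /
    mkfs.ext4 -L ssdroot /dev/sdb2    # mkfs.ext4 discards (TRIMs) the device by default

    mount /dev/sdb2 /mnt/ssdroot
    # Copy only allocated data; keep sparse files sparse, preserve ACLs/xattrs/hardlinks
    rsync -aAXH --sparse --one-file-system / /mnt/ssdroot/

    # Optional batched trim afterwards, to be sure unused blocks are marked free
    fstrim -v /mnt/ssdroot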
Greg Freemyer composed on 2014-12-31 18:06 (UTC-0500):
Even with the mkfs / rsync process pay attention to the "discard":
"mkfs discard" for ext3/ext4 based filesystems. It should be default, but I would pass it in just to be sure.
13.1's and 13.2's mkfs.ext4 man pages say discard is set as default. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
On Wed, 31 Dec 2014 18:06:38 Greg Freemyer wrote:
On Wed, Dec 31, 2014 at 12:09 AM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 12/30/2014 10:25 PM, Rodney Baker wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Ultimately migrating is just ... Migrating. That this is HD->SSD is no different from migrating from HD(IDE) to HD(SATA) for example.
False, false, false ==
HD's don't treat unallocated space special; SSDs do.
dd doesn't treat unallocated special; rsync ignores it.
Thus in general dd is not a great tool to use when initializing SSDs.
== Thus, using dd to go from a HD to another HD is great. Neither one pays any attention to the allocated / unallocated bifurcation.
== Any time the destination is a SSD, I would use rsync so that only allocated data is copied to it.
== It "could be" that running dd to copy all the data over followed by running fstrim to inform the SSD about where all the unallocated space is will work out well.
On the other hand, modern mkfs will inform the SSD that all the space in the filesystem is unallocated. Then rsync will only allocate the space needed.
Thus mkfs followed by rsync is pretty much guaranteed to not over allocate space on the SSD, but using dd depends on fstrim doing its job well.
== Even with the mkfs / rsync process pay attention to the "discard":
"mkfs discard" for ext3/ext4 based filesystems. It should be default, but I would pass it in just to be sure.
For XFS, mkfs defaults to using discard at mkfs time. You have to pass in -K to prevent it from happening.
== Then for rsync, you want to ensure sparse files on the HDD stay sparse files on the SSD. You use the --sparse argument for that. If you screw that up, I don't know of any routine maintenance tool that will search a file for sections of nulls and convert them to sparse. You need to get it right the first time.
Greg -- Greg Freemyer
Thanks, Anton, Greg and all others who replied. This is exactly the info I was after.

Anton, your comments re use-cases are noted. I don't leave this system running 24/7 - I boot it as needed, so moving the OS to the SSD will benefit by reducing boot times (yes, OK, so I'm impatient). :) Reducing application load times won't hurt either. I'll consider carefully what else I'll put on the SSD. I have 16GB RAM and a quad-core i5-2400, so neither of those things should represent performance bottlenecks. /tmp, /sys, /dev/shm, /var/run and /var/lock already exist as tmpfs (50% of system RAM is currently allocated, but I could comfortably reduce that if needed, as usage normally sits around 1% unless I'm creating a DVD ISO image). Swap could comfortably live on the SSD, but I was more concerned about the number of write cycles that would use. Mind you, the system spends little to no time swapping under normal usage - I can fairly comfortably run with swap turned off most of the time.

Greg, thanks for the tips - knowing your experience with storage systems I was hoping you'd have some useful hints, and you came through. :) The disk is a Samsung 850 PRO and Samsung actually provides a native Linux CLI tool for management. They do say, though, that (for their tool) trim is only supported for ext4. I understand that other filesystems have native trim support built in, but I'm most comfortable with ext3/ext4 at this point in time anyway (although I do use xfs on at least one data storage partition).

I think I now have a way forward, anyway, with a backout strategy if I stuff it up along the way.

Regards and Happy New Year to all.

Rodney.
--
==============================================================
Rodney Baker VK5ZTV rodney.baker@iinet.net.au
==============================================================
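For the fstab adjustment mentioned above, a minimal sketch; the UUIDs, device names and sizes are made-up placeholders (read the real UUIDs from blkid), and noatime is a common SSD tweak rather than something discussed in the thread:

    # Find the UUIDs of the new SSD filesystems
    blkid /dev/sdb1 /dev/sdb2

    # /etc/fstab entries pointing at the SSD by UUID (placeholder values)
    # UUID=aaaaaaaa-....   /      ext4    defaults,noatime    0 1
    # UUID=bbbbbbbb-....   swap   swap    defaults            0 0
    # tmpfs                /tmp   tmpfs   size=8G,mode=1777   0 0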
On Thu, Jan 1, 2015 at 6:10 AM, Rodney Baker <rodney.baker@iinet.net.au> wrote:
The disk is a Samsung 850 PRO and Samsung actually provide a native linux CLI tool for management. They do say, though, that (for their tool) trim is only supported for ext4. I understand that other filesystems have native trim support built in, but I'm most comfortable with ext3/ext4 at this point in time anyway (although I do use xfs on at least one data storage partition).
Beware of filesystems with built-in trim. Basically, if you pass in the "discard" option at mount time, you are engaging realtime trim. If you instead schedule a nightly (or weekly) trim command, that is batched trim.
==
Batched trim is basically safe and a performance benefit for any SSD.

Realtime trim is a different matter. For almost all SSDs sold (or designed) in 2013 or before, it is a performance hit. Newer SSDs support asynchronous trim. That is the feature you need to have before realtime trim is a net benefit.

For me, I still stick to batched mode exclusively.

FYI: Windows runs batched mode only as far as I know, so realtime trim has not been an important feature for SSD manufacturers.

Greg
--
Greg Freemyer
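A minimal illustration of the two trim modes just described; the mount options and cron entry are examples, not taken from the thread:

    # Batched trim: run fstrim against a mounted filesystem on a schedule
    fstrim -v /

    # Example crontab entry for a weekly batched trim
    # 0 3 * * 0   /usr/sbin/fstrim /

    # Realtime trim: the "discard" mount option in /etc/fstab
    # UUID=...   /   ext4   defaults,discard   0 1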
On Fri, 2 Jan 2015 17:35:43 Greg Freemyer wrote:
On Thu, Jan 1, 2015 at 6:10 AM, Rodney Baker <rodney.baker@iinet.net.au> wrote:
The disk is a Samsung 850 PRO and Samsung actually provide a native linux CLI tool for management. They do say, though, that (for their tool) trim is only supported for ext4. I understand that other filesystems have native trim support built in, but I'm most comfortable with ext3/ext4 at this point in time anyway (although I do use xfs on at least one data storage partition).
Beware of filesystems with built-in trim.
Basically if you pass in an argument at mount time, you are engaging realtime trim.
If you have to schedule a nightly (or weekly) trim command, then it is batched trim.
== Batched trim is basically safe and a performance benefit for any SSD.
Realtime trim is a different matter. For almost all SSDs sold (or designed) in 2013 or before, it is a performance hit. Newer SSDs support asynchronous trim. That is the feature you need to have before realtime trim is a net benefit.
For me, I still stick to batched mode exclusively.
FYI: Windows runs batched mode only as far as I know, so realtime trim has not been an important feature for SSD manufacturers.
Greg -- Greg Freemyer
Thanks again for everyone's input. Successful migration completed a few days ago with no real problems (a couple of minor fights with grub2 to stop it mounting the old / from the soon-to-be-decommissioned HDD instead of from the new SSD...). Boot times and program load times are dramatically improved. :)

Tonight I've made some more changes, converting 2x 1TB RAID 1 arrays (one held /home, the other a data storage partition) into 2x 1TB RAID 10 arrays, with a corresponding improvement in read/write speed for /home (noticeably speeding up KDE4 startup times as well). Not as big a jump as an SSD, but redundancy plus improved I/O performance is a plus in my book. :)

All of this just goes to prove that you can throw fast processors and lots of RAM at the system (16GB in my case), but then the performance-limiting factor becomes the disk I/O system. A lot of processes on oS 13.1/KDE4 seem to be I/O bound (probably due to the modularity achieved through lots of dependencies on shared libraries). Still - it's finally feeling like it's getting closer to its performance potential. Once 2TB SSD prices come down a bit more, I'll have to replace some more HDDs with SSDs.

Regards to all,

Rodney.
--
==============================================================
Rodney Baker VK5ZTV rodney.baker@iinet.net.au
==============================================================
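A minimal sketch of building the kind of RAID 10 array described above with mdadm; the device names and array name are assumptions, and this shows creating a fresh array rather than converting an existing RAID 1 in place:

    # Four partitions combined into one RAID 10 array
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    # Record the array so it is assembled at boot (config path used on openSUSE)
    mdadm --detail --scan >> /etc/mdadm.conf

    mkfs.ext4 /dev/md0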
On Fri, 16 Jan 2015 01:50:15 +1030 Rodney Baker wrote:
All of this just goes to prove that you can throw fast processors and lots of RAM at the system (16GB in my case), but then the performance limiting factor becomes the disk I/O system.
Thanks, Rodney ... I was curious to know your impressions :-) I performed a nearly identical migration last May of the sole drive in my 2.0 GHz Core2Duo based laptop ... to a Samsung 840 PRO. This upgrade boosted the system's performance so dramatically that I've indefinitely postponed replacing it. And it's not just faster, it also runs much cooler and quieter. regards, Carl
On Wed, 31 Dec 2014 13:55:08 +1030, Rodney Baker <rodney.baker@iinet.net.au> wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
It may work if the partition layout remains exactly the same. Do not forget that you also need to copy the post-MBR space, where the bootloader is likely installed. I usually prefer to simply copy the files and reinstall the bootloader in this case - it makes sure everything is set up properly and nothing is forgotten. It also takes less time than dd of a raw partition. But it does require more adjustments (because the filesystem UUIDs change).
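A minimal sketch of the copy-files-then-reinstall-bootloader approach suggested above; the device names and mount points are illustrative, and it assumes a legacy-BIOS GRUB2 setup such as openSUSE 13.1's default:

    # With the SSD's root (and /boot, if separate) mounted under /mnt
    mount /dev/sdb2 /mnt
    mount /dev/sdb1 /mnt/boot
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done

    # Reinstall GRUB2 onto the SSD and regenerate its config with the new UUIDs
    chroot /mnt grub2-install /dev/sdb
    chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg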
Rodney Baker composed on 2014-12-31 13:55 (UTC+1030):
I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
You could, but...
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Creating the partitions first and then rsyncing definitely would be better if your HD partitioning was done long ago using an anachronistic partitioning scheme's logical geometry of 255 (or 240) heads and 63 sectors, which costs performance on 4k-sector devices. (See the alignment sketch after this message.)
Yes, I will need to adjust fstab to point to the relocated partitions, but I'd rather not do a full re-install if I don't need to.
I do a lot of cloning, and in fact am waiting on a clone operation as I write this, the 2nd since midnight. I rarely use dd for cloning though. Apps made specifically for the purpose are more convenient, and help avoid mistakes that cost data and time. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
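A minimal sketch of creating 1 MiB-aligned partitions on the SSD, avoiding the old 255/63 CHS-style alignment mentioned above; the device name and layout are placeholders:

    # Check existing alignment: partition start sectors should be multiples of 2048 (1 MiB)
    fdisk -l /dev/sdb

    # Create an aligned layout with parted (example sizes)
    parted -a optimal /dev/sdb mklabel msdos
    parted -a optimal /dev/sdb mkpart primary ext4 1MiB 512MiB    # /boot
    parted -a optimal /dev/sdb mkpart primary ext4 512MiB 100%    # /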
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
Check out Clonezilla.
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Yes, I will need to adjust fstab to point to the relocated partitions, but I'd rather not do a full re-install if I don't need to.
Thanks in advance, & Happy New Year. Rodney.
-- ============================================================== Rodney Baker VK5ZTV rodney.baker@iinet.net.au ==============================================================
-- L. de Braal BraHa Systems NL - Terneuzen T +31 115 649333
On 12/31/2014 04:25 AM, Rodney Baker wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Yes, I will need to adjust fstab to point to the relocated partitions, but I'd rather not do a full re-install if I don't need to.
Thanks in advance, & Happy New Year. Rodney.
This is why I like LVM2: you can do a live migration - add your disk, configure it in LVM2 and "pvmove" your content to the new disk. No need to alter fstab. Only the bootloader needs to be altered, if you want to remove the old disk.

If you don't use LVM2, I would boot from CD/USB in rescue mode, create filesystems on the SSD and copy the data with rsync. Take care to preserve metadata (permissions, symbolic links, extended attributes ...) - if you don't want to care, use dd. Alter the fstab and try to boot. If it works, you can remove the old filesystems; if not, rescue-boot and try again.

I would use grub2-install or yast2 after booting with the SSD, if you want to move /boot or the boot sector.
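A minimal sketch of the LVM2 live migration described above; the volume group name and device names are assumptions:

    # Make the SSD partition a physical volume and add it to the existing volume group
    pvcreate /dev/sdb1
    vgextend system /dev/sdb1        # "system" is a placeholder VG name

    # Move all extents off the old HDD PV onto the SSD, while the system keeps running
    pvmove /dev/sda2 /dev/sdb1

    # Drop the old disk from the VG once the move has finished
    vgreduce system /dev/sda2
    pvremove /dev/sda2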
Florian Gleixner wrote:
On 12/31/2014 04:25 AM, Rodney Baker wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
I have these already as separate partitions - can I use dd to copy the boot sector and relevant partitions to the SSD and then connect it up in place of the first hdd (currently /dev/sda) and expect the system to boot?
Would it be better to dd the boot sector, then create the new partitions and rsync them across?
Yes, I will need to adjust fstab to point to the relocated partitions, but I'd rather not do a full re-install if I don't need to.
Thanks in advance, & Happy New Year. Rodney.
This is why I like LVM2, you can do a live migration - add your disk, configure it in LVM2 and "pvmove" your content to the new disk. No need to alter fstab. Only bootloader needs to be altered, if you want to remove the old disk.
If you don't use LVM2, I would boot from CD/USB in rescue mode, create filesystems on the ssd and copy the data with rsync. Take care to preserve metadata (permissions, symbolic links, extended attributes ...) - if you don't want to care - use dd. Alter the fstab and try to boot. If it works, you can remove the old filesystems, if not, rescue boot and try again.
I would use grub2-install or yast2 after booting with ssd, if you want to move the /boot or the boot sector.
When an LVM works (Linux LVM or any other LVM), it works great. When an LVM fails, it makes the mess 3x worse to clean up and get your system back up and running. I use LVM *only* in circumstances in which an LVM is needed. Otherwise, all it does is create more points of failure.
On 01/05/2015 03:55 PM, Joe Zappa wrote:
When an LVM works (Linux LVM or any other LVM), it works great.
When an LVM fails, it makes the mess 3x worse to clean up and get your system back up and running.
Long before we had a functioning LVM with Linux I was using the "same thing" under AIX. That was way back in the mid 1990s, though I believe that the Vx system, the "Veritas manager", was IBM's from the beginning of the 1990s, with a shoo-in to OSF. HP later licensed it. No discussion of LVM history should take place without mentioning Heinz Mauelshagen, who did some of the earliest LVM on Linux work. There was some variance with Linux's LVM1, but the command set we see in LVM2 is the same as I was using all the way up to AIX 5.2, when I switched over to Red Hat, Mandrake and eventually SUSE.

I found LVM2 to be stable, though I worked with grub-orig and a /boot and ROOT on real partitions. When grub2 came along I moved ROOT to LVM quite successfully and with no problems at all. Then disaster struck! LVM was "optimized" and "lvmetad" was introduced. Go back through the archives and you can see the problems we had if the initrd had a mismatch over that, if the 'daemon' was expected but wasn't there. So we turned lvmetad off, but something keeps turning it back on, and then the system won't reboot and Rescue Mode is required.

https://forums.opensuse.org/showthread.php/495141-boot-problem-after-lvm2-up...
https://bugzilla.redhat.com/show_bug.cgi?id=989607
https://bugzilla.redhat.com/show_bug.cgi?id=813766

Peter Rajnoha's suggestion makes sense. Sadly this was resolved with an "errata" notice! And this has NOTHING to do with systemd, don't start on about that!
I use LVM *only* in circumstances in which an LVM is needed.
Otherwise, all it does is create more points of failure.
The "only when" is a YMMV. By the same logic, using BtrFS (or could that be any FS?) creates more points of failure. Yet the logic of BtrFS is that it should subsume all disk space in order to balance/optimize. YMMV. As things stand I see no advantage to using lvmetad in my use case. I don't have a LVM that spans multiple spindles. Enabling it adds a risk that your boot may be screwed. It certainly has been the case with me, so I have it disabled. http://www.redbooks.ibm.com/abstracts/redp0107.html -- STATUS QUO is Latin for "the mess we're in." -- /"\ \ / ASCII Ribbon Campaign X Against HTML Mail / \ -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
Anton Aylward wrote:
On 01/05/2015 03:55 PM, Joe Zappa wrote:
When an LVM works (Linux LVM or any other LVM), it works great.
When an LVM fails, it makes the mess 3x worse to clean up and get your system back up and running.
Long before we had a functioning LVM with Linux I was using the "same thing" under AIX. That was way back in the mid 1990s, though I believe that the Vx system, the "Veritas manager", was IBM's from the beginning of the 1990s, with a shoo-in to OSF. HP later licensed it.
No discussion of LVM history should take place without mentioning Heinz Mauelshagen, who did some of the earliest LVM on Linux work.
There was some variance with Linux's LVM1, but the command set we see in LVM2 is the same as I was using all the way up to AIX5.2 when I switched over to Redhat, mandrake and eventually Suse.
I found LVM2 to be stable, though I worked with grub-orig and a /boot and ROOT on real partitions. When grub2 came along I moved ROOT to LVM quite successfully and with no problems at all.
Then disaster struck! LVM was "optimized" and "lvmetad" was introduced. Go back through the archives and you can see the problems we had if the initrd had a mismatch over that, if the 'daemon' was expected but wasn't there. So we turned lvmetad off, but something keeps turning it back on and the system won't reboot and Rescue Mode is required. https://forums.opensuse.org/showthread.php/495141-boot-problem-after-lvm2-up... https://bugzilla.redhat.com/show_bug.cgi?id=989607 https://bugzilla.redhat.com/show_bug.cgi?id=813766 Peter Rajnoha's suggestion makes sense. Sadly this was resolved with an "errata" notice!
And this has NOTHING to do with systemd, don't start on about that!
I use LVM *only* in circumstances in which an LVM is needed.
Otherwise, all it does is create more points of failure.
The "only when" is a YMMV. By the same logic, using BtrFS (or could that be any FS?) creates more points of failure. Yet the logic of BtrFS is that it should subsume all disk space in order to balance/optimize.
YMMV.
As things stand I see no advantage to using lvmetad in my use case. I don't have a LVM that spans multiple spindles. Enabling it adds a risk that your boot may be screwed. It certainly has been the case with me, so I have it disabled.
Here is a typical use-case where LVM is a good idea:

Need for mirroring:
Site -- Stock broker firm, national corporate office
Host use -- order entry system (buy, sell, etc.)
Host configuration: 2 16-cpu Sequent hosts running Dynix in a cluster as shown below:

  +--------+                  +----------+                  +--------+
  |        |  ==============  |   SCSI   |  ==============  |        |
  | Host 1 |  ==============  |   DISK   |  ==============  | Host 2 |
  |        |  ==============  |  CABINET |  ==============  |        |
  +--------+   SCSI cables    +----------+   SCSI cables    +--------+
  Host with OS                 database                      Host with OS
  on local disks               disks                         on local disks
  (mirrored)                                                 (mirrored)

LVM justification:
1. Database is mirrored for redundancy (fault tolerance).
2. THIRD mirror for backup purposes. To minimize the amount of time the database is offline, at market closing the database would be shut down for a short period of time, during which the third mirror would be broken off. The database tables were stored in the vendor's optimized format, and backups could be made by referencing the partition name (/dev/sc[controller]d[disk]p[partition] or something like that).

Backups done as follows:
1. database shutdown
2. 3rd mirrors of partitions disassociated from 1st & 2nd mirrors
3. database restarted
4. OS & local disks backed up [Level 0 on Saturday]
5. 3rd mirror used as data source for database backup
6. backup tape images compared to 3rd-mirror images for verification of flawless backup
7. once the backup is verified, reassociate the 3rd mirror with the 1st & 2nd mirrors

Why the LVM? Because steps 5 & 6 would take several hours. By using the 3rd mirror, this gave us up to 24 hours to complete a backup, because the database could still be used during business hours.

Another use-case that justifies LVM: General Motors Tech Center, High Performance Computing Group. Supercomputers running high-end analysis & engineering apps, such as fine-grained finite-element analysis of collision scenarios. These computers are expensive to even OWN -- just sitting on the floor, they depreciate at a rate of many dollars/day. Hosts contain several dozen CPUs, and most jobs are disk I/O bound. Solution -- disk striping, with 3 disks per stripe. No mirroring -- loss of data is not a catastrophic loss, and the money is better spent on striping (speed) than mirroring (security).

RAID 5 and RAID 6 are good for online medium-term archiving (i.e. one step above tape archives): data loss is expensive, so you want to protect it, but the slow write speed (due to recomputing the checksums) is acceptable, so mirroring is not needed.
http://www.redbooks.ibm.com/abstracts/redp0107.html -- STATUS QUO is Latin for "the mess we're in."
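A minimal sketch of the two LVM techniques in the use-cases above, expressed in current LVM2 commands rather than the Sequent/AIX-era tooling; the volume group, LV names and sizes are made-up:

    # Three-way mirrored volume: two redundant copies plus a third leg for backups
    lvcreate --type raid1 -m 2 -L 500G -n dbdata vg0

    # At market close: split the third leg off as a standalone volume and back it up
    lvconvert --splitmirrors 1 --name dbdata_snap vg0/dbdata

    # Striping for throughput (no redundancy): spread one volume across 3 disks
    lvcreate -i 3 -I 64 -L 900G -n scratch vg0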
On 07/01/2015 17:41, Joe Zappa wrote:

If somebody could give me even a small part of such a system, I would be pretty excited :-) But, you know, people do (when they think it's obsolete for them)... but I don't want it. I have, for example, somewhere a SCSI dual-Xeon NEC server with hardware RAID (two hot-swap power supplies...) and I played with it for some time, but:

* extremely power hungry
* 18 kg
* 16" wide, twice that long (only 10" high)

as loud as a grass cutter... funny, but not for me.

thanks
jdd
jdd wrote:
Le 07/01/2015 17:41, Joe Zappa a écrit :
if somebody could give me let alone a small part of such system, I would be pretty excited :-)
Yeah. But corporations don't think in such ways. Computers are just another tool, and automakers want results as quickly as they can afford to pay for. You or I would keep a 96-CPU system for 15 years, or more, if we could keep it running. In the business sector in the U.S., it's obsolete and fully depreciated in 7 years, and will be replaced at that time.
but, you know, people do (when they think it's obsolete for them)... but I don't want it.
I have for example somewhere a SCSI dual-Xeon NEC server with hardware RAID (two hot-swap power supplies...) and I played with it sometime, but
* extremely power hungry * 18 kg * 16" wide, twice that long (only 10" high)
as loud as a grass cutter...
funny, but not for me.
thanks jdd
On 2014-12-31 04:25, Rodney Baker wrote:
Hi all. A question for those with a little more experience with SSD's. I want to move oS 13.1 from conventional HDD's to a 128GB SSD, everything except /home, /var and some other general data storage partitions (so basically /boot, /, /usr, /usr/local and /opt).
You could also consider using dcache instead. Or place /boot on the SSD entirely, for fast booting, then dedicate the rest to dcaches of the hard disk partitions. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
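Carlos's "dcache" presumably refers to a block-layer SSD cache in front of the hard disks; a minimal sketch assuming bcache, one of the common options at the time (dm-cache being another), with made-up device names:

    # One-time setup: the HDD partition becomes the backing device,
    # the spare SSD partition becomes the cache device
    make-bcache -B /dev/sda3
    make-bcache -C /dev/sdb3

    # Attach the cache set to the backing device (UUID is printed by make-bcache -C)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

    # The cached device appears as /dev/bcache0 and is formatted and mounted as usual
    mkfs.ext4 /dev/bcache0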
Participants (11): Andrei Borzenkov, Anton Aylward, Carl Hartung, Carlos E. R., Felix Miata, Florian Gleixner, Greg Freemyer, jdd, Joe Zappa, Leen de Braal, Rodney Baker