[opensuse] vmware and fake scsi devs
It seems there is one major downside to all the disks being "called" SCSI devices in 10.3. There are applications that take /dev/sdx literally as SCSI disk x. One such application is VMware. This ended up being a major problem when I tried to use a raw partition as a physical VMware disk on my laptop. The actual disk is an IDE device, but the first partition in my SuSE 10.3 install is called /dev/sda1, and that is what it passes to VMware as the actual physical disk. VMware has for a long time stated that it does not boot from physical SCSI devices, and thus it keels over when it sees that the raw device is SCSI. Is there an easy way to pass the actual partition type to VMware (Workstation 4.x or Server 1.x) in 10.3, or do I go back to 10.2? thanks, d.
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3.
There is no downside, error or annoyance, it is just a well-known change. Did you read the release notes? http://www.suse.com/relnotes/i386/openSUSE/10.3/RELEASE-NOTES.en.html#09 -- "The only thing that interferes with my learning is my education." - Albert Einstein Cristian Rodríguez R. Platform/OpenSUSE - Core Services SUSE LINUX Products GmbH Research & Development http://www.opensuse.org/
On Monday 03 December 2007 10:09:38 pm Cristian Rodríguez wrote:
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3.
There is no downside, error or annoyance, it is just a well known change.. did you read the release notes ?
http://www.suse.com/relnotes/i386/openSUSE/10.3/RELEASE-NOTES.en.html#09
I'm sorry Cristian, but there is a downside, and a big annoyance: being limited in the number of partitions available. Especially with the huge drives that are available today, that is a very big downside. Not only that, when I installed 10.3 it renamed my other two IDE drives and changed their order. Really!!! Why? I know, I know!! There was a big discussion on this list about that a while back. We patiently await fixing of the partition limitation. Soon, I hope? Bob S
On Dec 3, 2007 9:44 PM, Bob S <911@sanctum.com> wrote:
On Monday 03 December 2007 10:09:38 pm Cristian Rodríguez wrote:
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3.
There is no downside, error or annoyance, it is just a well known change.. did you read the release notes ?
http://www.suse.com/relnotes/i386/openSUSE/10.3/RELEASE-NOTES.en.html#09
I'm sorry Cristian, but there is a downside, and a big annoyance. By being limited in the partitions available. Especially with the huge drives that are available today, That is a very big downside. Not only that, when I installed 10.3, it renamed my other two IDE drives, and changed their order. Really !!! Why ?
I know, I know !! There was a big discussion on this list about that awhile back. We patiently await fixing the partition limitation. Soon? I hope ?
If you need lots of partitions (/dev/hdx naming) use the workaround from the release notes. I seriously doubt 10.3 will ever support increased partitions for /dev/sdx devices. I have read nothing on the kernel ata list that shows anyone is even working on it yet. Of course Novell could be doing so on their own, but I would be very surprised. Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
Bob S wrote:
By being limited in the partitions available. Especially with the huge drives that are available today.
LVM exists in order to preserve mental sanity in such setups. -- "The only thing that interferes with my learning is my education." - Albert Einstein Cristian Rodríguez R. Platform/OpenSUSE - Core Services SUSE LINUX Products GmbH Research & Development http://www.opensuse.org/
Cristian Rodríguez wrote:
Bob S wrote:
By being limited in the partitions available. Especially with the huge drives that are available today.
LVM exists in order to preserve mental sanity in such setups.
How does this permit using more sdx devices than libata allows? regards Eberhard
The Tuesday 2007-12-04 at 08:02 +0100, Eberhard Roloff wrote:
Cristian Rodríguez wrote:
Bob S wrote:
By being limited in the partitions available. Especially with the huge drives that are available today.
LVM exists in order to preserve mental sanity in such setups.
How does this permit to use more sdx devices than libata allows?
I guess they propose we reformat everything as a huge LVM thing, which we then break up into smaller partitions inside the LVM. That does not solve my problem and poses new problems. -- Cheers, Carlos E. R.
The Tuesday 2007-12-04 at 01:51 -0300, Cristian Rodríguez wrote:
LVM exists in order to preserve mental sanity in such setups.
Per your own documentation:

Warning
] Using LVM might be associated with increased risk, such as data loss.
] Risks also include application crashes, power failures, and faulty
] commands. Save your data before implementing LVM or reconfiguring
] volumes. Never work without a backup.

http://localhost/usr/share/doc/manual/opensuse-manual_en/manual/sec.yast2.sy...

Nice... proposing LVM as a workaround for multiple partitions, when multiple partitions are often used as a safety precaution to limit damage. :-/ -- Cheers, Carlos E. R.
I've been following this thread and have a question. Why are you running your VMware instances on raw partitions versus image files on a filesystem? We use VMware a bit over here and found that using image files gives us more flexibility with our VMs (portable, easily duplicatable). Also, we use LVM on top of md RAID1 volumes (using libata) here and have found them to be quite stable and have good performance. Plus you then also get the flexibility to resize partitions as needed (with some restrictions, of course). -jc -- ******************************** J.C. Polanycia Information Technology Services University of Colorado
Jc Polanycia wrote:
I've been following this thread and have a question. Why are you running your VMWare instances on raw partitions versus image files on a filesystem?
- to use existing installations?
- to use an already existing windows without the need to either install again or do a p2v?
- to drastically increase vmware machines' "harddisk" performance?
- because it is possible to do so?
We use VMWare a bit over here and found that using image files gives us more flexibility in with our VMs(portable, easily duplicatable).
sure. But there is also the reverse side of the coin. regards EbR
On Tuesday 04 December 2007 07:32, Eberhard Roloff wrote:
Jc Polanycia wrote:
I've been following this thread and have a question. Why are you running your VMWare instances on raw partitions versus image files on a filesystem?
-to use existing installations? -to use an already existing windows without the need to either install again or do a p2v ? -to drastically increase vmware machines' "harddisk" performance? -because it is possible to do so?
I concur. I have a separate physical drive for my VMware guest (Windows XP) and don't want to impose the non-negligible overhead of using virtual (host file-backed) disk drives. Depending on circumstances, I would make different choices, of course, but when raw performance is a concern (and face it Windows ain't generally snappy to begin with), physical drive access is called for.
We use VMWare a bit over here and found that using image files gives us more flexibility in with our VMs(portable, easily duplicatable).
sure. But there is also the reverse side of the coin.
Exactly. It's a tradeoff. As with most engineering / technical decisions, it can't be made mindlessly, unless you like to live on luck.
regards EbR
Randall Schulz
On Dec 4, 2007 6:28 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
I've been following this thread and have a question. Why are you running your VMWare instances on raw partitions versus image files on a filesystem? We use VMWare a bit over here and found that using image files gives us more flexibility in with our VMs(portable, easily duplicatable). Also, we use LVM on top of md RAID1 volumes(using libata) here and have found them to be quite stable and have good performance. Plus you then also get the flexibility to resize partitions as needed (with some some restrictions, of course).
Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks). Chris
-jc
-- ******************************** J.C. Polanycia Information Technology Services University of Colorado
Chris Worley wrote:
On Dec 4, 2007 6:28 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
I've been following this thread and have a question. Why are you running your VMWare instances on raw partitions versus image files on a filesystem? We use VMWare a bit over here and found that using image files gives us more flexibility in with our VMs(portable, easily duplicatable). Also, we use LVM on top of md RAID1 volumes(using libata) here and have found them to be quite stable and have good performance. Plus you then also get the flexibility to resize partitions as needed (with some some restrictions, of course).
Off topic, as I seldom partition anything (unpartitioned drives perform best),
But huge filesystems perform poorly.
but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
This is why I do NOT use LVM for home use. Basically, I'm not diligent enough in my backups to run the additional risk of LVM corruption.
On Dec 4, 2007 10:16 AM, Aaron Kulkis <akulkis00@hotpop.com> wrote:
Chris Worley wrote:
On Dec 4, 2007 6:28 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
I've been following this thread and have a question. Why are you running your VMWare instances on raw partitions versus image files on a filesystem? We use VMWare a bit over here and found that using image files gives us more flexibility in with our VMs(portable, easily duplicatable). Also, we use LVM on top of md RAID1 volumes(using libata) here and have found them to be quite stable and have good performance. Plus you then also get the flexibility to resize partitions as needed (with some some restrictions, of course).
Off topic, as I seldom partition anything (unpartitioned drives perform best),
But huge filesystem perform poorly.
Qualify that with "on small files". Chris
but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
This is why I do NOT used LVM for home use. Basically, I'm not diligent enough in my backups to run the additional risk of LVM corruption.
On Tuesday 04 December 2007 09:08, Chris Worley wrote:
...
Off topic, as I seldom partition anything (unpartitioned drives perform best),
What impact do you believe partitioning has on disk performance? I can think of none.
...
Chris
Randall Schulz
On Tue, 4 Dec 2007 09:20:54 -0800 Randall R Schulz <rschulz@sonic.net> wrote:
On Tuesday 04 December 2007 09:08, Chris Worley wrote:
...
Off topic, as I seldom partition anything (unpartitioned drives perform best),
What impact do you believe partitioning has on disk performance? I can think of none.
Actually, there are some things you can do to improve performance. A small partition limits the amount of head movement, but only when you are accessing files from that partition. On the other side of the coin, a small partition can become fragmented. While the Linux file systems generally reduce fragmentation, when a partition starts to become full, you get more fragmentation. -- Jerry Feldman <gaf@blu.org> Boston Linux and Unix user group http://www.blu.org PGP key id:C5061EA9 PGP Key fingerprint:053C 73EC 3AC1 5C44 3E14 9245 FB00 3ED5 C506 1EA9
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input, because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it. Could you please provide me with additional information about the circumstances under which you had the LVM layer corrupt? Thanks. -jc
On Dec 4, 2007 10:22 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid. "Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion. While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs up to 12 drives, and see performance added w/ each new member.
Could you please provide me additional information about the circumstances under which you had the LVM layer corrupt? Thanks.
It's been too long, so, no, I can't. I vaguely remember something simple being overwritten in the LVM metadata, and there being no method to recover it (except from backup). I had similar issues with ReiserFS. While there were some provisions for recovery, they were insufficient. As FS'es go, Ext3 has the most bullet-proof recovery mechanisms when the metadata has been compromised. Realize that all the layers above the MD device, LVM and whatever FS, add more complexity and therefore decrease performance and reliability. Note that the metadata corruption I'm referring to is not a spindle failure issue: you can corrupt the metadata, and your RAID preserves the corruption! MD device metadata can be recreated if you know the incantation used to initially create the RAID (and it's usually simple enough that you can recreate the RAID on the fly from memory). So, I stick w/ Ext3 and MD. Chris
-jc
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU verhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDS up to 12 drives, and see performance added w/ each new member.
You're hallucinating. That defies basic information theory. Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
On Dec 4, 2007 11:49 AM, Aaron Kulkis <akulkis00@hotpop.com> wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU verhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDS up to 12 drives, and see performance added w/ each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Amdahl's law defies "Information theory"? How so? If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more buses w/ more drives... more scalability... of course, not linear. Add caching effects, and get superlinear scalability (but that doesn't count). Chris
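For what it's worth, a minimal sketch of that bus-saturation arithmetic (the 70 MB/s and 320 MB/s figures are the ones assumed above; real arrays lose a little more to CPU and parity overhead):

# rough back-of-the-envelope model, not a benchmark
disk_mb_s = 70.0    # assumed per-disk sequential throughput (figure from the post above)
bus_mb_s = 320.0    # assumed shared bus bandwidth (figure from the post above)

print(bus_mb_s / disk_mb_s)    # ~4.6 drives before the bus, not the disks, is the bottleneck

# ideal aggregate throughput, ignoring per-drive CPU overhead
for m in range(1, 7):
    print(m, "drives ->", min(m * disk_mb_s, bus_mb_s), "MB/s")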
On Dec 4, 2007 1:49 PM, Aaron Kulkis <akulkis00@hotpop.com> wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU verhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDS up to 12 drives, and see performance added w/ each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Single-threaded access to a raid array may not be helped by adding drives. Drive access can end up being sequential and you're not really buying anything.

Multi-threaded storage performance is definitely positively affected by adding disks to an array. For multi-threaded, effectively each disk can do N IOPS (IOs per second). So if you have M drives, you can do M*N IOPS.

The trouble with RAID 5 is that it typically requires 4 IOs to update a single sector, i.e. read checksum, read original sector (so you can remove it from the checksum), write updated sector, write new checksum. So it ends up being M*N / 4 IOPS. So from a performance perspective on _writes_ you need at least a 4-drive array just to be as fast as a single disk.

Reads OTOH just need to read the sector they want (unless you have a failed drive). So _read_ performance is M*N, or always faster than a single drive.

Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
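To make that model concrete, here is a minimal Python sketch of the M*N read / M*N/4 small-write rule of thumb described above (the per-disk IOPS figure is an illustrative assumption, not a number from this thread):

# Idealized RAID5 throughput model: reads cost 1 IO each, small writes cost
# ~4 IOs (read data, read checksum, write data, write checksum).
def raid5_iops(m_drives, n_iops_per_disk):
    """Return (read_iops, small_write_iops) for a busy, non-degraded array."""
    read_iops = m_drives * n_iops_per_disk        # M*N: every spindle can serve reads
    write_iops = m_drives * n_iops_per_disk / 4   # M*N/4: read-modify-write penalty
    return read_iops, write_iops

reads, writes = raid5_iops(5, 100)   # 5 disks at ~100 random IOPS each (assumed)
print(reads, writes)                 # 500 reads/s vs 125.0 small writes/s
# so roughly a 4-drive array is needed before small-write throughput
# matches one plain disk, as argued above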
On Dec 5, 2007 1:50 PM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Dec 4, 2007 1:49 PM, Aaron Kulkis <akulkis00@hotpop.com> wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU verhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDS up to 12 drives, and see performance added w/ each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Single threaded access to a raid array may not be helped by adding drives. Drive access can end up being sequential and your not really buying anything.
Multi-threaded storage performance is definitely positively affected by adding disks to an array.
For multi-threaded, effectively each disk can do N IOPS (IOs per Second.)
So if you have M drives, you can do M*N IOPS.
The trouble with Raid 5 is that it typically requires 4 IOs to update a single sector.
ie. Read checksum, Read original sector, (so you can remove it from the checksum) write updated sector write new checksum.
So it ends up being M*N / 4 IOPS.
Greg, Doesn't that assume a sector/block mismatch? If your sectors and blocks are aligned (sectors are some multiple of blocks), then no read-mask-write is necessary. Even if there is a misalignment, if the amount of data being written is large, the read-mask-write operation is only at the beginning and tail ends of the entire operation. Also, the writes are all in parallel. The above makes it sound like the writes of updated stripes, and the write of the checksum are serial... they should all be posted nearly simultaneously (some serialization introduced by the CPU).
So from a performance perspective on _writes_ you need at least a 4 drive array just to be as fast as a single disk.
Reads OTOH just need to read the sector they want (unless you have a failed drive).
So _read_ performance is M*N. Or always faster than a single drive.
On a RAID5 you only need M-1 (or M-2 for RAID6) completions of parallel operations... you can discard the slowest disk's results, as that can be recreated without all the data. Chris
Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
Chris Worley wrote:
On Dec 5, 2007 1:50 PM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Dec 4, 2007 1:49 PM, Aaron Kulkis <akulkis00@hotpop.com> wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia <JC.Polanycia@colorado.edu> wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but, you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll loose everything... been there done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it. I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU verhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDS up to 12 drives, and see performance added w/ each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics). Single threaded access to a raid array may not be helped by adding drives. Drive access can end up being sequential and your not really buying anything.
Multi-threaded storage performance is definitely positively affected by adding disks to an array.
For multi-threaded, effectively each disk can do N IOPS (IOs per Second.)
So if you have M drives, you can do M*N IOPS.
The trouble with Raid 5 is that it typically requires 4 IOs to update a single sector.
ie. Read checksum, Read original sector, (so you can remove it from the checksum) write updated sector write new checksum.
So it ends up being M*N / 4 IOPS. Greg,
Doesn't that assume a sector/block mismatch? If your sectors and blocks are aligned (sectors are some multiple of blocks), then no read-mask-write is necessary.
Even if there is a misalignment, if the amount of data being written is large, the read-mask-write operation is only at the beginning and tail ends of the entire operation.
Also, the writes are all in parallel. The above makes it sound like the writes of updated stripes, and the write of the checksum are serial... they should all be posted nearly simultaneously (some serialization introduced by the CPU).
No, they are NOT in parallel. They are issued sequentially. They may overlap (only if you have SCSI, or if all of the disks are on different controllers), but they are not in parallel. You're obviously a CS major, not an engineer.
Aaron Kulkis wrote:
No, they are NOT in parallel. The are issued sequentially. They may overlap (only if you have SCSI, or if all of the disks are on different controllers), but they are not in parallel.
You're obviously a CS major, not an engineer.
In my server, the RAID array is SCSI. My main system, while currently using IDE drives, has several SATA ports on the motherboard. RAID can be configured in the BIOS. -- Use OpenOffice.org <http://www.openoffice.org>
On Dec 6, 2007 2:44 PM, Aaron Kulkis <akulkis00@hotpop.com> wrote:
<snip>
Also, the writes are all in parallel. The above makes it sound like the writes of updated stripes, and the write of the checksum are serial... they should all be posted nearly simultaneously (some serialization introduced by the CPU).
No, they are NOT in parallel. The are issued sequentially. They may overlap (only if you have SCSI, or if all of the disks are on different controllers), but they are not in parallel.
I never use slave disk drives. Pure masters. As you say, Master & Slave access is always sequential. Alternatively, I've set up older systems with 10 IDE channels to test (I used 5 PCI cards). It fell apart speed-wise on PCI. I have not tried it on PCIe. OTOH, you can set up a 4-IDE-channel configuration that really pushes the drives hard. Basically it goes as fast as the drives will run. And in modern machines we have MBs with lots of SATA ports. Those are typically one-to-one connections. And if you are using a PMP (multiplexer) people are reporting pretty good speeds. Far better than sequential. I'm not sure they are as good as SCSI. Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
Aaron Kulkis wrote:
You must be assuming that striping across a RAID is somehow a serial operation.
That doesn't matter. Your assumption that the data blocks from several disks can be XOR'ed together, and written to one of those disks, and the parity partition on yet another disk is FASTER than not doing so is just patently ridiculous. So you think CPUs operate more slowly than disks, Aaron? Or do you think you need to read data from a disk every time you write it?
That doesn't even count the matter of increasing the bandwidth usage by a factor of N for N disks in the RAID 5 configuration. Aaron said they are "NOT in parallel". It seems reasonable to infer that he thinks they are in serial.
No, I'm assuming that you're using RAID 5, which is what you said. So what did you mean by this: No, they are NOT in parallel. The are issued sequentially. They may overlap (only if you have SCSI, or if all of the disks are on different controllers), but they are not in parallel. If they occur at the same time, they are happening in parallel. Not synchronised, true, but simultaneous, i.e. parallel, even if the data is sent to the disks serially. All that's required is for the bus speed to be faster than the write speed and for the drives to have a write cache. You're obviously a CS major, not an engineer. And you're a psychology major? ;-)
On Dec 6, 2007 1:26 PM, Chris Worley <worleys@gmail.com> wrote:
On Dec 5, 2007 1:50 PM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
<snip>
Single threaded access to a raid array may not be helped by adding drives. Drive access can end up being sequential and your not really buying anything.
Multi-threaded storage performance is definitely positively affected by adding disks to an array.
For multi-threaded, effectively each disk can do N IOPS (IOs per Second.)
So if you have M drives, you can do M*N IOPS.
The trouble with Raid 5 is that it typically requires 4 IOs to update a single sector.
ie. Read checksum, Read original sector, (so you can remove it from the checksum) write updated sector write new checksum.
So it ends up being M*N / 4 IOPS.
Greg,
Doesn't that assume a sector/block mismatch? If your sectors and blocks are aligned (sectors are some multiple of blocks), then no read-mask-write is necessary.
Even if there is a misalignment, if the amount of data being written is large, the read-mask-write operation is only at the beginning and tail ends of the entire operation.
The above does not assume misalignment. I think what you're talking about is that if you are doing a large write that spans the entire raid5 stripe, then the existing parity data can be ignored. Linux is smart enough to do this, but raid5 stripes are pretty large: typically 64K * (M - 1), I believe. So if you have a 5-disk raid5, your entire stripe is 256KB. And that ignores the alignment issues you mention; accounting for those, to guarantee a full stripe is written you need to write 512KB at a time. Not many programs do that from user space. I'm not sure how efficient the Linux kernel is at coalescing individual sequential writes to a raid5 array and trying to create full-stripe updates.
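A quick sketch of that stripe arithmetic (the 64K chunk size is the figure assumed above; other chunk sizes change the numbers proportionally):

# data payload of one RAID5 stripe: chunk size times (M - 1) data disks
def full_stripe_kib(total_disks, chunk_kib=64):
    return chunk_kib * (total_disks - 1)

stripe = full_stripe_kib(5)   # 5-disk RAID5 -> 256 KiB of data per stripe
print(stripe)                 # 256
# an arbitrarily aligned write must cover about twice that to guarantee
# at least one full-stripe update, hence the 512 KiB mentioned above
print(2 * stripe)             # 512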
Also, the writes are all in parallel. The above makes it sound like the writes of updated stripes, and the write of the checksum are serial... they should all be posted nearly simultaneously (some serialization introduced by the CPU).
That above is a max-throughput calculation, not an individual-write calculation. I.e., which is faster, a sports car or a semi-truck? The semi-truck, if you have lots to move; it effectively has a higher throughput than a sports car (but is nowhere near as fast for small loads). So the above assumes a busy server with lots going on, i.e. every disk in the array is running at full capacity. The IOPS is obviously affected by the workload and the seeking, but once the workload is set, the IOPS per disk can be characterized and used to feed the equation.
So from a performance perspective on _writes_ you need at least a 4 drive array just to be as fast as a single disk.
Reads OTOH just need to read the sector they want (unless you have a failed drive).
So _read_ performance is M*N. Or always faster than a single drive.
On a RAID5 you only need M-1 (or M-2 for RAID6) completions of parallel operations... you can discard the slowest disks results, as that can be recreated without all the data.
No idea what you meant there. In a non-degraded raid5 every drive has valid, non-parity data on it. If you have a heavy multi-threaded read load, all disks can be actively providing valid data at one time, i.e. M * N IOPS. Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
On Dec 7, 2007 7:09 AM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Dec 6, 2007 1:26 PM, Chris Worley <worleys@gmail.com> wrote:
On Dec 5, 2007 1:50 PM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
<snip>
Single threaded access to a raid array may not be helped by adding drives. Drive access can end up being sequential and your not really buying anything.
Multi-threaded storage performance is definitely positively affected by adding disks to an array.
For multi-threaded, effectively each disk can do N IOPS (IOs per Second.)
So if you have M drives, you can do M*N IOPS.
The trouble with Raid 5 is that it typically requires 4 IOs to update a single sector.
ie. Read checksum, Read original sector, (so you can remove it from the checksum) write updated sector write new checksum.
So it ends up being M*N / 4 IOPS.
Greg,
Doesn't that assume a sector/block mismatch? If your sectors and blocks are aligned (sectors are some multiple of blocks), then no read-mask-write is necessary.
Even if there is a misalignment, if the amount of data being written is large, the read-mask-write operation is only at the beginning and tail ends of the entire operation.
The above does not assume misalignment. It think what your talking about is if your are doing a large write that spans the entire raid5 stripe, then the existing parity data can be ignored. Linux is smart enough to do this, but raid5 stripes are pretty large. Typically 64K * (M - 1) I believe. So if you have a 5-disk raid 5, your entire stripe is 256KB. And that ignores alignment issues you mention, that means to guarantee a full stripe is written you need to write 512KB at a time. Not many programs do that from user space. I'm not sure how efficient the Linux kernel is a coalescing individual sequential writes to a raid5 array and trying to create full stripe updates.
Granted: in my line of work, an app doing a single 1MB read/write call is small; anything smaller would be too trivial to mention.
Also, the writes are all in parallel. The above makes it sound like the writes of updated stripes, and the write of the checksum are serial... they should all be posted nearly simultaneously (some serialization introduced by the CPU).
That above is a max throughput calculation, not an individual write calculation. ie. Which is faster a sports car or a sem-itruck. The semi-truck is if you have lots to move, so it effectively has a higher throughput than a sports car. (but nowhere near as fast for small loads).
So the above assumes a busy server with lots going on. ie every disk in the array is running at full capacity. The IOPS is obviously effected by the workload and the seeking, but once the workload is set, the IOPS per disk can be characterized and used to feed the equation.
So from a performance perspective on _writes_ you need at least a 4 drive array just to be as fast as a single disk.
Reads OTOH just need to read the sector they want (unless you have a failed drive).
So _read_ performance is M*N. Or always faster than a single drive.
On a RAID5 you only need M-1 (or M-2 for RAID6) completions of parallel operations... you can discard the slowest disks results, as that can be recreated without all the data.
No idea what you meant there. In a non-degraded raid5 every drive has valid, non-parity data on it. If you have a heavy multi-threaded read load, all disks can be actively providing valid data at one time. i.e M * IOPS
If "M" is the number of disks, and you are, for example, reading 1 stride, then, in a RAID5, you only need to get the stripes from M-1 disks, and you can complete the single stride I/O w/o having yet received the Mth stripe, which you can discard when it shows up. Chris
Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
On Dec 7, 2007 10:12 AM, Chris Worley <worleys@gmail.com> wrote:
On Dec 7, 2007 7:09 AM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
<snip>
Reads OTOH just need to read the sector they want (unless you have a failed drive).
So _read_ performance is M*N. Or always faster than a single drive.
On a RAID5 you only need M-1 (or M-2 for RAID6) completions of parallel operations... you can discard the slowest disks results, as that can be recreated without all the data.
No idea what you meant there. In a non-degraded raid5 every drive has valid, non-parity data on it. If you have a heavy multi-threaded read load, all disks can be actively providing valid data at one time. i.e M * IOPS
If "M" is the number of disks, and you are, for example, reading 1 stride, then, in a RAID5, you only need to get the stripes from M-1 disks, and you can complete the single stride I/O w/o having yet received the Mth stripe, which you can discard when it shows up.
But what if you are reading 10 strides, or more typically a whole bunch of small random reads? By default the raid logic will NOT read the parity stripe. For file _read_ operations, parity is only read if the array is degraded. In normal operation, only valid data is read, so you get to have valid data coming from all the drives in parallel. But yes, at least one of the drives has to be working on a different stride than the others, so in your case try to have your reads be multiple strides long as well as your writes. Greg -- Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
On Monday 03 December 2007 04:44:43 pm Bob S wrote:
On Monday 03 December 2007 10:09:38 pm Cristian Rodríguez wrote:
kanenas@hawaii.rr.com escribió:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3.
There is no downside, error or annoyance, it is just a well known change.. did you read the release notes ?
http://www.suse.com/relnotes/i386/openSUSE/10.3/RELEASE-NOTES.en.html#09
After re-reading the release notes I can see that libata can be disabled, thanks :) That would imply that all fake sdx references would have to be changed to hdx, no? But what comes next? I can manually change the grub and fstab entries, also some samba and nfs shares in YaST, perhaps a few more configs, but there are many other references in many, many other files; how can that be dealt with after the fact? Would a symlink or three do it? Should I reinstall? If yes, when/where should one proceed with the workaround in a fresh install?
I'm sorry Cristian, but there is a downside, and a big annoyance. By being limited in the partitions available. Especially with the huge drives that are available today, That is a very big downside. Not only that, when I installed 10.3, it renamed my other two IDE drives, and changed their order. Really !!! Why ?
Well, I could add my problem to the downside as well...
I know, I know !! There was a big discussion on this list about that awhile back. We patiently await fixing the partition limitation. Soon? I hope ?
Bob S
In 10.2 we had usbfs and smbfs as sources of similar anguish. Is SuSE/Linux big enough to dictate changes like that almost arbitrarily? Please note, I am not talking about technical merits... d.
The Monday 2007-12-03 at 21:53 -1000, kanenas@hawaii.rr.com wrote:
after re-reading the release notes i can see that libata can be disabled, thanks:)
Only for 10.3, in version 11 this workaround might disappear. That's their intention :-(
That would imply that all fake sdx references would have to be changed to hdx , no? But what comes next? I can manually change the grub and fstab entries, also some samba and nfs shares in yast, perhaps a few more configs, but there are many other references in many many other files, how can that be dealt with after the fact? would a symlink or three do it?
No, a symlink would not work. You should change all the references to device independent ones, like label, id, or uuid (not sure if all are valid). Nowhere should you have references like "hda" or "sda". In vmware... no idea.
in 10.2 we had usbfs and smbfs as sources of similar anguish. is SuSe/linux big enough to dictate changes like that almost arbitrarily? Please note, i am not talking about technical merits...
Unfortunately, it's not suse alone, it's all of them (developers). -- Cheers, Carlos E. R.
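To make the "device-independent references" advice above concrete, fstab entries can use labels, UUIDs, or /dev/disk/by-* links instead of kernel device nodes; a minimal sketch (all identifier values below are placeholders, not from any real system):

# /etc/fstab using stable identifiers instead of /dev/sdXN or /dev/hdXN
UUID=00000000-1111-2222-3333-444444444444  /      ext3  acl,user_xattr  1 1
LABEL=HOME                                 /home  ext3  acl,user_xattr  1 2
/dev/disk/by-id/ata-EXAMPLE_DISK-part3     swap   swap  defaults        0 0

UUIDs and labels stay the same whether the kernel ends up calling the disk hda or sda.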
On Tuesday 04 December 2007 02:02:46 am Carlos E. R. wrote:
The Monday 2007-12-03 at 21:53 -1000, kanenas@hawaii.rr.com wrote:
after re-reading the release notes i can see that libata can be disabled, thanks:)
Only for 10.3, in version 11 this workaround might disappear. That's their intention :-(
That would imply that all fake sdx references would have to be changed to hdx , no? But what comes next? I can manually change the grub and fstab entries, also some samba and nfs shares in yast, perhaps a few more configs, but there are many other references in many many other files, how can that be dealt with after the fact? would a symlink or three do it?
No, a symlink would not work.
You should change all the references to device independent ones, like label, id, or uuid (not sure if all are valid). Nowhere should you have references like "hda" or "sda".
Well, I looked at menu.lst and fstab in my 10.3 for the first time and saw a bunch of brave new things... I guess entropy *must* increase... I will try the "hwprobe=-modules.pata" in a day or two, but I somehow doubt it will solve my problem and then I will have to decide if I will live with it or revert to 10.2. BTW, how does grub know about libata before the kernel is loaded, and how/does it get modified?
In vmware... no idea.
VMware has had this scsi/sata problem; now it became an ide/scsi/sata one! Maybe they will adapt the disk/by-id thing, who knows?
in 10.2 we had usbfs and smbfs as sources of similar anguish. is SuSe/linux big enough to dictate changes like that almost arbitrarily? Please note, i am not talking about technical merits...
Unfortunately, it's not suse alone, it's all of them (developers).
Are we being asked to operate in a vacuum? Is new stuff coming out just for the sake of newness?
-- Cheers, Carlos E. R.
Thanks for the insight. Now I know more than I wanted! d.
The Tuesday 2007-12-04 at 08:16 -1000, kanenas@hawaii.rr.com wrote:
No, a symlink would not work.
You should change all the references to device independent ones, like label, id, or uuid (not sure if all are valid). Nowhere should you have references like "hda" or "sda".
well, i looked at menu.lst and fstab in my 10.3 for the first time and saw a bunch of brave new things...I guess entropy *must* increase... i will try the "hwprobe=-modules.pata" in a day or two,
I was forced to do that, as I have way more than 15 partitions.
but i somehow doubt it will solve my problem and then i will have to decide if i will live w. it or revert to 10.2 . btw, how does grub know about libata before the kernel is loaded and how/does it get modified?
No, grub doesn't know a word about it. If grub uses references like /dev/sda or /dev/hda, you must be sure that you use the same flavour as the kernel. The same thing applies to fstab. And if you use references like hd0, then they must also be the correct ones and in the correct order (they can change if you mix pata/sata)... so that's where the new naming scheme comes in handy. -- Cheers, Carlos E. R.
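For reference, a rough sketch of what a menu.lst stanza might look like with the option from the release notes added and the device references kept consistent (kernel/initrd paths, partition numbers and the resume device are placeholders; the release notes remain the authoritative description of the workaround):

title openSUSE 10.3 (old IDE naming)
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/hda1 hwprobe=-modules.pata resume=/dev/hda2 splash=silent
    initrd /boot/initrd

If the pata modules are disabled and the disk shows up as hda again, then root= in menu.lst and the entries in /etc/fstab must both use the hda flavour, exactly as described above.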
Bob S wrote:
On Monday 03 December 2007 10:09:38 pm Cristian Rodríguez wrote:
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3.
There is no downside, error or annoyance, it is just a well known change.. did you read the release notes ?
http://www.suse.com/relnotes/i386/openSUSE/10.3/RELEASE-NOTES.en.html#09
I'm sorry Cristian, but there is a downside, and a big annoyance. By being limited in the partitions available. Especially with the huge drives that are available today, That is a very big downside. Not only that, when I installed 10.3, it renamed my other two IDE drives, and changed their order. Really !!! Why ?
I know, I know !! There was a big discussion on this list about that awhile back. We patiently await fixing the partition limitation. Soon? I hope ?
Bob S
What happens if you use LVM? Are you still limited to 15 partitions? -- Use OpenOffice.org <http://www.openoffice.org>
The Tuesday 2007-12-04 at 06:21 -0500, James Knott wrote:
Bob S wrote:
back. We patiently await fixing the partition limitation. Soon? I hope ?
What happens if you use LVM? Are you still limited to 15 partitions?
I guess not. By the way: it is not 15, it is 14: remember that the extended partition (the container) counts as one. But using LVM implies repartitioning everything. I don't see it as a solution. Developers are very keen on pushing the scsi thing as a fit-for-everybody solution, regardless of the problems it causes :-( -- Cheers, Carlos E. R.
The Tuesday 2007-12-04 at 00:09 -0300, Cristian Rodríguez wrote:
kanenas@... wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3.
There is no downside, error or annoyance, it is just a well known change.. did you read the release notes ?
http://www.suse.com/relnotes/i386/openSUSE/10.3/RELEASE-NOTES.en.html#09
Nothing about vmware in there. -- Cheers, Carlos E. R.
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3. There are applications that take /dev/scx literally as a scsi disk x. One such application is vmware. This ended being a major problem when i tried to use a raw partition as a physical vmware disk on my laptop. The actual disk is an ide device, but the first partition in my Suse 10.3 install is called /dev/sda1 and that is what it passes to vmware as the actual physical disk. Vmware has for a long time stated that it does not boot from physical scsi devices and thus it keels over when it sees that the raw device is scsi. Is there an easy way to pass the actual partition type to vmware (workstation 4.x or server 1.x) in 10.3 or do i go back to 10.2? thanks, d.
Hello, I have used VMware with SuSE for years; now it is VMware 6 with openSUSE 10.3. I always boot VMware with physical SCSI drives and it works! Furthermore, with VMware you can configure it so that the first IDE unit is your sdx drive and it will work. So no problem at all. Michel.
On Tuesday 04 December 2007 06:33:33 am Catimimi wrote:
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3. There are applications that take /dev/scx literally as a scsi disk x. One such application is vmware. This ended being a major problem when i tried to use a raw partition as a physical vmware disk on my laptop. The actual disk is an ide device, but the first partition in my Suse 10.3 install is called /dev/sda1 and that is what it passes to vmware as the actual physical disk. Vmware has for a long time stated that it does not boot from physical scsi devices and thus it keels over when it sees that the raw device is scsi. Is there an easy way to pass the actual partition type to vmware (workstation 4.x or server 1.x) in 10.3 or do i go back to 10.2? thanks, d.
Hello,
I use VMware with SuSE for years, now it is VMware 6 with openSUSE 10.3 . I always boot VMware with physical SCSI drives and it works ! Furthermore, with VMware you can configure so that the first IDE unit is your sdx drive and it will work. So no problem at all.
Michel.
Michel, in my case I have an IDE disk that is being called a SCSI device by libata, so VMware does not know how to deal with it, I think... But I would be very interested in your setup. How do you use the physical drives? Are they dual boot (direct boot and as VMs through VMware)? Are the partitions fully functioning operating systems when they are selected as VMs, or are they created after a physical disk is picked in a VM setup and then the OS is installed? Any other info would also be appreciated. If you think it goes off topic, please email directly. thanks, d.
kanenas@hawaii.rr.com wrote:
On Tuesday 04 December 2007 06:33:33 am Catimimi wrote:
kanenas@hawaii.rr.com wrote:
It seems there is one major downside to all the disks being "called" scsi devices in 10.3. There are applications that take /dev/scx literally as a scsi disk x. One such application is vmware. This ended being a major problem when i tried to use a raw partition as a physical vmware disk on my laptop. The actual disk is an ide device, but the first partition in my Suse 10.3 install is called /dev/sda1 and that is what it passes to vmware as the actual physical disk. Vmware has for a long time stated that it does not boot from physical scsi devices and thus it keels over when it sees that the raw device is scsi. Is there an easy way to pass the actual partition type to vmware (workstation 4.x or server 1.x) in 10.3 or do i go back to 10.2? thanks, d.
Hello,
I use VMware with SuSE for years, now it is VMware 6 with openSUSE 10.3 . I always boot VMware with physical SCSI drives and it works ! Furthermore, with VMware you can configure so that the first IDE unit is your sdx drive and it will work. So no problem at all.
Michel.
Michael, in my case i have an ide disk that is being called an scsi device by libata, so vmware does not know how to deal with it, i think... But I i would be very interested in your setup. How do you use the physical drives? Are they dual boot (direct boot and as vms thru vmware)? are the partitions full functioning operating systems when they are selected as vms or are they created after a physical disk is picked in a vm setup and then the os is installed? any other info would also be appreciated. if you think it goes off topic, please email directly. thanks, d.
Hello,

In my config I have 3 drives:
- the first one is SATA, called sda; on it you find a multiboot with Vista native and openSUSE native.
- the second one is SATA, called sdb; on it you find Vista for VMware and data.
- the third one is IDE, called sdc.

First, I don't use the same partition for Vista native and Vista VMware: since the two machines have different hardware, the activation will be broken each time you switch between the two machines. My VMware partition was installed from scratch on the SCSI disk; in order to do that you have to load the VMware SCSI driver at install.

Second, as I think that the geometry of IDE disks and SATA could be different, I always declare my IDE disks as IDE even if they are called sdx by Linux. In order to do that I have the following lines in my config file:

______________________________________
floppy0.present = "TRUE"
floppy0.fileName = "/home/mgarnier/vmware/Vista/vmscsi-1.2.0.4.flp"
floppy0.fileName = "/dev/fd0" # this is in order to be able to load the vmware scsi driver at install
#floppy0.fileType = "file"
#floppy0.fileType = "device"

ide0:0.present = "TRUE"
ide0:0.fileName = "/dev/sr0" # DVDRW
ide0:0.deviceType = "cdrom-raw"
ide0:0.startConnected = "TRUE"

ide0:1.present = "TRUE"
ide0:1.fileName = "/dev/sr1" # DVD
ide0:1.deviceType = "cdrom-raw"
ide0:1.startConnected = "TRUE"

ide1:0.present = "TRUE"
ide1:0.fileName = "sdcpart" # IDE disk
ide1:0.mode = "independent-persistent"
ide1:0.deviceType = "rawDisk"

scsi0.present = "TRUE"
scsi0.virtualdev = "lsilogic"

scsi0:0.present = "TRUE"
scsi0:0.fileName = "/dev/sg7" # SCSI CDROM
scsi0:0.writeThrough = "TRUE"
scsi0:0.deviceType = "scsi-passthru"

scsi0:1.present = "TRUE"
scsi0:1.fileName = "sdbpart" # SATA disk 2, note that it is declared first since the active VMware partition is on it
scsi0:1.mode = "independent-persistent"
scsi0:1.deviceType = "rawDisk"
scsi0:1.redo = ""

scsi0:2.present = "TRUE"
scsi0:2.fileName = "sdapart" # SATA disk 1
scsi0:2.mode = "independent-persistent"
scsi0:2.deviceType = "rawDisk"
scsi0:2.redo = ""

I boot on a SATA disk, but you can choose to install the system on the IDE disk as well.

Hoping it'll help.

Regards
Michel.
participants (14)
- Aaron Kulkis
- Bob S
- Carlos E. R.
- Catimimi
- Chris Worley
- Cristian Rodríguez
- Eberhard Roloff
- Greg Freemyer
- James Knott
- Jc Polanycia
- Jerry Feldman
- kanenas@hawaii.rr.com
- Randall R Schulz
- Russell Jones