[opensuse] Hard disk layout advice
I have inherited a couple of computer systems with 6 or 7 2TB disks each. I will use one (the one with 7 disks) as my work computer. In addition to running openSUSE 12.3 (64-bit), I plan on installing the 32-bit version. I also would like to have the ability to add a couple more future OS versions. All this I plan on doing on one of the disks. That leaves me with 6 2TB disks. I honestly do not need that amount of storage, so I am thinking that I can perhaps set up some redundancy (probably via software - not sure yet what the hardware supports). At the same time, I would like to have the disks in a single logical volume.

I have used LVM and RAID separately. I have never used them together.

So, how to proceed? Can I achieve this setup via YaST? Which gets made first? The LV, or the RAID? Or can this combination even be done? Or is there a better way?

Roger Oberholtzer
Roger Oberholtzer wrote:
I have used LVM and RAID separately. I have never used them together.
So, how to proceed? Can I achieve this setup via YaST? Which gets made first? The LV, or the RAID? Or can this combination even be done? Or is there a better way?
LVM on top of RAID.
Per Jessen wrote:
LVM on top of RAID.

More specifically, I do LVM on top of RAID10 (4*3TB) (flexibility + speed + availability). Though in your case, you might consider RAID6.
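For illustration, a minimal sketch of such a stack built with mdadm and LVM (device names and sizes are made-up examples, not a recipe for your actual disks; YaST's partitioner can build the same thing):

# RAID first, then LVM on top of the resulting md device
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate vgdata /dev/md0
lvcreate -L 1T -n home vgdata
mkfs.ext4 /dev/vgdata/home

That also answers the ordering question: the RAID array is made first, and LVM treats it as a single physical volume.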
On Mon, 2013-04-08 at 08:09 +0200, Per Jessen wrote:
LVM on top of RAID.
Figures. There is some RAID stuff in the BIOS, but I am not sure what it is about. It is a Gigabyte motherboard with an AMD CPU. The SATA hardware is:

00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
03:00.0 IDE interface: Marvell Technology Group Ltd. 88SE9172 SATA III 6Gb/s RAID Controller (rev 11)
04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9120 SATA 6Gb/s Controller (rev 12)

It is currently in AHCI mode. Among the things I am uncertain about if I do RAID via the hardware/BIOS: do all the disks on the SATA controller have to be part of the RAID? My plan was to have the system disk be a single disk, and have home be LVM+RAID. Might I be okay with doing software RAID? I have a system set up that way that seems to work fine. But I will be doing things like compiling that put a disk through its paces.
Roger Oberholtzer wrote:
On Mon, 2013-04-08 at 08:09 +0200, Per Jessen wrote:
Roger Oberholtzer wrote:
So, how to proceed? Can I achieve this setup via YaST? Which gets made first? The LV, or the RAID? Or can this combination even be done? Or is there a better way?
LVM on top of RAID.
Figures. There is some RAID stuff in the BIOS, but I am not sure what it is about. It is a Gigabyte MB with an AMD CPU. The SATA stuff is
00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
03:00.0 IDE interface: Marvell Technology Group Ltd. 88SE9172 SATA III 6Gb/s RAID Controller (rev 11)
04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9120 SATA 6Gb/s Controller (rev 12)
Those Marvell chips probably have RAID functionality.
It is currently in AHCI mode. Among the things I am uncertain about if I do RAID via the hardware/BIOS: do all the disks on the SATA controller have to be part of the RAID?
You would need to consult the manual.
My plan was to have the system disk be a single disk, and have home be LVM+RAID. Might I be okay with doing software RAID?
Absolutely.
I have a system set up that way that seems to work fine. But I will be doing things like compiling that put a disk through its paces.
Compiling is largely CPU-bound; if you want to stress-test your disks, you'll want more IO, and in parallel. I've got a script somewhere; I'll see if I dig it out.

/Per
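In the meantime, something crude along these lines generates parallel I/O (a sketch only; the target directory is hypothetical, and oflag=direct makes dd bypass the page cache so the disks really get hit):

for i in 1 2 3 4; do
    dd if=/dev/zero of=/mnt/test/stress.$i bs=1M count=4096 oflag=direct &
done
wait
rm -f /mnt/test/stress.*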
Compiling is largely CPU-bound; if you want to stress-test your disks, you'll want more IO, and in parallel. I've got a script somewhere; I'll see if I dig it out.
/Per
Per,

I haven't been following this thread but fio was added to 12.3 and is a great stress tester. It requires you have a job control file. Example files should be in /usr/share/doc/packages/fio.

If you would find fio useful and examples aren't included, please open a bugzilla and assign it to me.

Greg
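For a first taste, a minimal job file might look like this (a sketch only - the job name and target directory are invented; the shipped examples are the authoritative reference):

# cat > randrw.fio <<'EOF'
[global]
directory=/mnt/test
size=1g
direct=1

[rand-readwrite]
rw=randrw
bs=4k
numjobs=4
EOF
# fio randrw.fio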
Greg Freemyer wrote:
Per,
I haven't been following this thread but fio was added to 12.3 and is a great stress tester. It requires you have a job control file. Example files should be in /usr/share/doc/packages/fio.
Thanks Greg

very interesting, but a bit of a _big_ gun for me :-) I wanted to install it on a test system:

# zypper in fio
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following NEW packages are going to be installed:
bundle-lang-gnome-en bundle-lang-gnome-extras-en cantarell-fonts cups-libs fio fontconfig fuse gcr-data gcr-prompter gcr-viewer gd gdk-pixbuf-query-loaders glib-networking gnome-icon-theme gnome-icon-theme-extras gnome-icon-theme-symbolic gnome-keyring gnome-keyring-pam gnuplot gptfdisk gsettings-desktop-schemas gtk2-branding-openSUSE gtk2-data gtk2-immodule-amharic gtk2-immodule-inuktitut gtk2-immodule-thai gtk2-immodule-vietnamese gtk2-metatheme-adwaita gtk2-theming-engine-adwaita gtk2-tools gtk3-branding-openSUSE gtk3-data gtk3-immodule-amharic gtk3-immodule-inuktitut gtk3-immodule-thai gtk3-immodule-vietnamese gtk3-metatheme-adwaita gtk3-theming-engine-adwaita gtk3-tools gvfs gvfs-backend-afc gvfs-backends gvfs-fuse hicolor-icon-theme hicolor-icon-theme-branding-openSUSE libarchive12 libasound2 libatasmart4 libatk-1_0-0 libatk-bridge-2_0-0 libatspi0 libavahi-client3 libavahi-common3 libavahi-glib1 libbluetooth3 libbluray1 libcairo2 libcairo-gobject2 libcdio14 libcdio_cdda1 libcdio_paranoia1 libcolord1 libdrm2 libdrm_intel1 libdrm_nouveau2 libdrm_radeon1 libexif12 libfreetype6 libfuse2 libgck-1-0 libgck-modules-gnome-keyring libgcr-3-1 libgdk_pixbuf-2_0-0 libgnome-keyring0 libgphoto2-6 libgthread-2_0-0 libgtk-2_0-0 libgtk-3-0 libgvfscommon0 libharfbuzz0 libICE6 libicu49 libjasper1 libjbig2 libjpeg8 liblcms1 liblcms2-2 liblockdev1 libltdl7 liblua5_2 libmng1 libmysqlclient18 libnscd libopenobex1 libpango-1_0-0 libpciaccess0 libpixman-1-0 libpng15-15 libqt4 libqt4-sql libqt4-sql-mysql libqt4-x11 libsecret-1-0 libSM6 libsmbclient0 libsoup-2_4-1 libsqlite3-0 libtalloc2 libtdb1 libtiff5 libudisks2-0 libwbclient0 libwx_baseu-2_8-0-stl libwx_gtk2u_core-2_8-0-stl libX11-xcb1 libxcb-glx0 libxcb-render0 libxcb-shm0 libXcomposite1 libXcursor1 libXdamage1 libXext6 libXfixes3 libXft2 libXi6 libXinerama1 libXpm4 libXrandr2 libXrender1 libXxf86vm1 lockdev Mesa Mesa-libGL1 Mesa-libglapi0 metatheme-adwaita-common obex-data-server pango-tools udisks2 wxWidgets-lang

The following recommended packages were automatically selected:
bundle-lang-gnome-en cantarell-fonts gcr-viewer gnome-keyring gnome-keyring-pam gnuplot gtk2-branding-openSUSE gtk2-data gtk2-immodule-amharic gtk2-immodule-inuktitut gtk2-immodule-thai gtk2-immodule-vietnamese gtk3-branding-openSUSE gtk3-immodule-amharic gtk3-immodule-inuktitut gtk3-immodule-thai gtk3-immodule-vietnamese gvfs gvfs-backend-afc gvfs-backends gvfs-fuse libqt4-sql-mysql obex-data-server udisks2 wxWidgets-lang

139 new packages to install. Overall download size: 76.0 MiB. After the operation, additional 315.0 MiB will be used.
If you would find fio useful and examples aren't included, please open a bugzilla and assign it to me.
Is there a CLI-only version? I'll try building it from source and see how far I get.
Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
Per,
I haven't been following this thread but fio was added to 12.3 and is a great stress tester. It requires you have a job control file. Example files should be in /usr/share/doc/packages/fio.
Thanks Greg
very interesting, but a bit of a _big_ gun for me :-) I wanted to install it on a test system:
# zypper in fio Loading repository data... Reading installed packages... Resolving package dependencies...
[...]
139 new packages to install. Overall download size: 76.0 MiB. After the operation, additional 315.0 MiB will be used.
If you would find fio useful and examples aren't included, please open a bugzilla and assign it to me.
Is there a CLI-only version? I'll try building it from source and see how far I get.
Something really weird is going on. As far as I know it doesn't have all those dependencies. Give me 30 min to take a look.

FYI: I'm the maintainer for fio.

Greg
On Mon, Apr 8, 2013 at 9:40 AM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
Something really weird is going on. As far as I know it doesn't have all those dependencies. Give me 30 min to take a look.
Per,

I think the issue is that fio "recommends gnuplot". Try to install it with

zypper in --no-recommends fio

(I've got gnuplot installed, so I can't easily verify that's the issue.)

FYI: this is the requires list, which doesn't seem to have any graphics libs in it to me:

libc.so.6 libc.so.6(GLIBC_2.0) libc.so.6(GLIBC_2.1.3) libc.so.6(GLIBC_2.4) /bin/sh libc.so.6(GLIBC_2.3.4) libc.so.6(GLIBC_2.1) libc.so.6(GLIBC_2.3) libc.so.6(GLIBC_2.2) libpthread.so.0 libpthread.so.0(GLIBC_2.0) libm.so.6 libc.so.6(GLIBC_2.7) libm.so.6(GLIBC_2.0) libdl.so.2 libm.so.6(GLIBC_2.1) libpthread.so.0(GLIBC_2.2) libdl.so.2(GLIBC_2.0) libc.so.6(GLIBC_2.11) libdl.so.2(GLIBC_2.1) libpthread.so.0(GLIBC_2.1) libpthread.so.0(GLIBC_2.3.2) librt.so.1 librt.so.1(GLIBC_2.2) libc.so.6(GLIBC_2.3.3) libc.so.6(GLIBC_2.6) libc.so.6(GLIBC_2.5) libaio.so.1 libaio.so.1(LIBAIO_0.1) libaio.so.1(LIBAIO_0.4) librt.so.1(GLIBC_2.1) rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsLzma) <= 4.4.6-1

Greg
Greg Freemyer wrote:
Per,
I think the issue is that fio "recommends gnuplot". Try to install it with
zypper in --no-recommends fio
(I've got gnuplot installed, so I can't easily verify that's the issue.)
That did it, much better.

For you as the packager - is that Recommends really right? I mean, assuming fio is what I think it is, won't people be gathering data on one machine and analyse them on another?
Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
Per,
I think the issue is that fio "recommends gnuplot". Try to install it with
zypper in --no-recommends fio
(I've got gnuplot installed, so I can't easily verify that's the issue.)
That did it, much better.
For you as the packager - is that Recommends really right? I mean, assuming fio is what I think it is, won't people be gathering data on one machine and analyse them on another?
Good question. I added it to factory/12.3 because xfstests uses it as of 6 months ago and xfstests is used by the opensuse automated testing during factory development (afaik). Xfstests does not use gnuplot.

I may drop the recommends for factory/13.1. For 12.3 I will leave it as is; I doubt there are enough users to bother with an update.

Greg
On Tue, Apr 9, 2013 at 8:58 AM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
For you as the packager - is that Recommends really right? I mean, assuming fio is what I think it is, won't people be gathering data on one machine and analyse them on another?
Good question. I added it to factory/12.3 because xfstests uses it as of 6 months ago and xfstests is used by the opensuse automated testing during factory development (afaik).
Xfstests does not use gnuplot.
I may drop the recommends for factory/13.1. For 12.3 I will leave it as is; I doubt there are enough users to bother with an update.
Per,

I asked on the packaging lists what people thought. I got a couple responses that I should change it from "Recommends" to "Suggests".

Suggests is actually new to me and I only found it displayed if I manually use the "dependencies" tab of YaST. That is a very, very weak "suggestion" in my mind, but it is apparently the openSUSE way.

OTOH, I got this response you might want to think about:

===
On a server, the thorough admin has "installRecommends = no" in /etc/zypper.conf or he/she/* uses "zypper in --no-recommends fio". I wouldn't change the recommendation in this case. Many of the default distro packages/patterns pull in packages that aren't needed on servers (hundreds in 12.1 and before; there seemed to be a lot less in 12.2; for 12.3, I haven't created my private list yet).
===

I think I will change it to a "Suggests", but as I said that is effectively removing it from the specfile.

Greg
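For anyone who hasn't met the tags, the difference in the spec file is a single keyword (an illustrative fragment, not the actual fio.spec):

# weak dependency: pulled in by default, skipped with --no-recommends
Recommends:     gnuplot

# hint only: visible in YaST's dependency view, never installed automatically
Suggests:       gnuplot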
Greg Freemyer wrote:
Per,
I asked on the packaging lists what people thought.
I got a couple responses that I should change it from "Recommends" to "Suggests".
Suggests is actually new to me and I only found it displayed if I manually use the "dependencies" tab of YaST. That is a very, very weak "suggestion" in my mind, but it is apparently the openSUSE way.
Recommends/Suggests - a pretty subtle difference. I'll have to look for it in YaST; I don't think I've ever seen it.
OTOH, I got this response you might want to think about:
=== On a server, the thorough admin has "installRecommends = no" in /etc/zypper.conf or he/she/* uses "zypper in --no-recommends fio".
Interesting, I didn't know about that one either. I guess Recommends isn't really used a lot - or perhaps I just don't hit it very often.
I wouldn't change the recommendation in this case. Many of the default distro packages/patterns pull in packages that aren't needed on servers (hundreds in 12.1 and before; there seemed to be a lot less in 12.2; for 12.3, I haven't created my private list yet). ===
For a server I always install the minimal pattern, it really is a tiny install. Too small for my taste even, I also have my private list.
I think I will change it to a "Suggests", but as I said that is effectively removing it from the specfile.
Without having thought much about it, "Suggests" for gnuplot sounds like the better idea - after all, for charting, you could also use open-/libreOffice and you wouldn't want to recommend that for fio.
On 04/08/2013 06:33 AM, Per Jessen pecked at the keyboard and wrote:
00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
03:00.0 IDE interface: Marvell Technology Group Ltd. 88SE9172 SATA III 6Gb/s RAID Controller (rev 11)
04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9120 SATA 6Gb/s Controller (rev 12)
Those Marvell chips probably have RAID functionality.
According to the data sheet:

http://pdf1.alldatasheet.com/datasheet-pdf/view/317088/MARVELL/88SE9120.html

it does not do hardware RAID, bummer. That would be the way to go for easier setup (my opinion).
On Mon, 08 Apr 2013, Roger Oberholtzer wrote:
So, how to proceed? Can I achieve this setup via YaST? Which gets made first? The LV, or the RAID? Or can this combination even be done? Or is there a better way?
Consider whether you might want to allocate some of the space to an online backup. I use one disk as a daily rsync-based backup. I do this because most often data loss is due to something I've done, such as accidentally deleting files or painting myself into a corner while coding. I can quickly consult/compare/revert-to yesterday's version of a file or set of directories. I use a command similar to the following:

datestamp=`date +%Y%m%d-%H%M%S`
rsync -a --one-file-system --delete -b -HA -X --sparse --backup-dir=pastDir/$datestamp /home/ /mnt/backup/home/

By using --backup-dir I achieve a history of changes. There are now more full-blown Apple Time Machine-like backup systems available, but the simplicity of rsync suits me for now.

I also separately back up the OS in a similar fashion, so that I can back out any updates or distribution upgrades should they go horribly wrong. Plus I can see what has changed if I think there is a problem.

I used to use RAID, but I now think losing a day is OK. Instead I've used spare disks for a weekly rsync to drives that I move off site - fire or flood protection.

As you've suggested, I keep a few spare OS partitions around - although I'm using these less often now that I've started using vboxes. But I would think a couple of spares would be useful.

I used to use logical volumes, but the extra complexity was a bit of a pain. Because changes to the logical volumes were so infrequent, I found I was always having to relearn what I had forgotten since last time - I did keep notes, but not in sufficient detail. Now disks are so big I just keep it simple: no more multi-disk logical volumes (I once had one disk in a logical volume go bad without warning - nasty).

Your needs might be different, but I thought I'd offer some food for thought.
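Wrapped in a small script that also prunes old history, the above might look like this (a sketch only; the paths are examples and the pruning assumes GNU find):

#!/bin/bash
# daily snapshot: a current mirror, plus a dated directory holding
# the files that were changed or deleted since the last run
datestamp=`date +%Y%m%d-%H%M%S`
rsync -a --one-file-system --delete -b -HA -X --sparse \
    --backup-dir=pastDir/$datestamp /home/ /mnt/backup/home/
# pastDir is relative to the destination, so the history lives under
# /mnt/backup/home/pastDir; drop anything older than 30 days
find /mnt/backup/home/pastDir -mindepth 1 -maxdepth 1 -type d -mtime +30 \
    -exec rm -rf {} +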
Roger Oberholtzer wrote:
I have used LVM and RAID separately. I have never used them together.
So, how to proceed? Can I achieve this setup via YaST? Which gets made first? The LV, or the RAID? Or can this combination even be done? Or is there a better way?
I have done that a couple of times. Make one RAID 5 array and then use LVM to split it up into the desired partitions. One thing to bear in mind is that /boot cannot be on any RAID other than RAID 1. This can be done in YaST, but it's not exactly obvious how, so you may want to make a couple of practice attempts.
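In mdadm terms, the /boot constraint just means a small RAID 1 pair alongside the big array (a sketch with example partition names; YaST sets up the equivalent):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0    # this becomes /boot; the RAID 5 array carries the LVM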
On 04/08/2013 08:13 AM, James Knott pecked at the keyboard and wrote:
I have done that a couple of times. Make one RAID 5 array and then use LVM to split it up into the desired partitions. One thing to bear in mind is that /boot cannot be on any RAID other than RAID 1. This can be done in Yast, but it's not exactly obvious how, so you may want to make a couple of practice attempts.
This is where hardware RAID has an advantage.
Ken Schneider - openSUSE wrote:
This is where hardware RAID has an advantage.
Actually, the advantages of a hardware RAID controller are more about performance and write-caching. The configuration effort for hardware or software RAID is probably about the same.
On Monday, 2013-04-08 at 07:55 +0200, Roger Oberholtzer wrote:
I have inherited a couple of computer systems with 6 or 7 2TB disks each.
Nice! Check the age of the disks (thousands of hours). You can see it in smartctl output. I would also run the long test on all of them.
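With smartmontools installed, that amounts to something like (the device name is an example):

smartctl -A /dev/sda | grep Power_On_Hours    # age in hours (attribute 9)
smartctl -t long /dev/sda                     # start the long self-test
smartctl -l selftest /dev/sda                 # read the results when it finishes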
On Mon, 2013-04-08 at 14:13 +0200, Carlos E. R. wrote:
smartctl
Thanks for the pointer to this. On one machine Power_On_Time are between 6000 and 8000 hours. So about a year of being powered on. The disk they will be replacing has been on 41263 hours, or 4.7 years.

I have opted for RAID level 1 + LVM, both done in software. I will play with this setup a bit and see how it feels. Doing the RAID in the BIOS was a bit tricky, as it seemed limited to 4 disks internally and 2 externally - unless I misunderstood the motherboard manual.

I am running the long tests on all disks. We will see if there are any errors...
On Monday, 2013-04-08 at 14:27 +0200, Roger Oberholtzer wrote:
On Mon, 2013-04-08 at 14:13 +0200, Carlos E. R. wrote:
smartctl
Thanks for the pointer to this. On one machine Power_On_Time are between 6000 and 8000 hours. So about a year of being powered on. The disk they will be replacing has been on 41263 hours, or 4.7 years.
Wow! Are they enterprise class?
On Mon, 2013-04-08 at 16:16 +0200, Carlos E. R. wrote:
Wow! Are they enterprise class?
The 'new' ones are Seagate (ST2000DL003-9VT166) and Western Digital (WDC WD20EARX-00PASB0). The WD drives are going in the RAID+LVM; the Seagate will be the system disk. The main thing I do not like about the RAID setup is that the disks all look like they came off the assembly line together.

The older disk is a Seagate ST3250624AS. Just keeps running. These systems are never off.
On Monday, 2013-04-08 at 16:26 +0200, Roger Oberholtzer wrote:
The older disk is a Seagate ST3250624AS. Just keeps running.
That one is enterprise class. 163€ currently, 2TB.

<http://www.alternate.es/html/search.html?searchCriteria=ST3250624AS&x=0&y=0>
Carlos E. R. wrote:
On Monday, 2013-04-08 at 16:26 +0200, Roger Oberholtzer wrote:
The older disk is a Seagate ST3250624AS. Just keeps running.
That one is enterprise class. 163€ currently, 2TB.
When I google it, it looks like it's a 250GB drive - also, I only see a 3-year warranty mentioned, which doesn't suggest enterprise-class to me.

http://knowledge.seagate.com/articles/de/FAQ/206571de?language=de
On Tue, 2013-04-09 at 09:07 +0200, Per Jessen wrote:
Carlos E. R. wrote:
On Monday, 2013-04-08 at 16:26 +0200, Roger Oberholtzer wrote:
The older disk is a Seagate ST3250624AS. Just keeps running.
That one is enterprise class. 163€ currently, 2TB.
When I google it, it looks like it's a 250Gb drive - also, I only see a 3-year warranty mentioned, which doesn't suggest enterprise-class to me.
http://knowledge.seagate.com/articles/de/FAQ/206571de?language=de
Well, the disk is soon to be retired. It has served me well.
On Tuesday, 2013-04-09 at 09:07 +0200, Per Jessen wrote:
Carlos E. R. wrote:
The older disk is a Seagate ST3250624AS. Just keeps running.
That one is enterprise class. 163€ currently, 2TB.
When I google it, it looks like it's a 250Gb drive - also, I only see a 3-year warranty mentioned, which doesn't suggest enterprise-class to me.
http://knowledge.seagate.com/articles/de/FAQ/206571de?language=de
Then my search found the wrong string :-(
Roger Oberholtzer wrote:
The 'new' ones are Seagate (ST2000DL003-9VT166) and Western Digital (WDC WD20EARX-00PASB0). The WD are going in the RAID+LVM. The Seagate will be the system disk. The main thing I do not like about the RAID setup is that the disks all look like they came off the assembly line together.
But you'll be using the systems for yourself as your work computer - you probably don't really need that kind of reliability? Besides, the drives are WDC Green, not RE4.
On Tue, 2013-04-09 at 08:20 +0200, Per Jessen wrote:
But you'll be using the systems for yourself as your work computer - you probably don't really need that kind of reliability? Besides, the drives are WDC Green, not RE4.
I agree that I have had very few disk failures. And really important things like source code and all are saved in a remote repository (also RAID, with a nightly mirror to yet another remote location in another city...) But I have always felt I was tempting fate.

What is the significance of WDC Green vs RE4?
Roger Oberholtzer wrote:
What is the significance of WDC Green vs RE4?
The latter are enterprise-class drives, meant for full duty-cycle (RE = RAID edition). 1.2 million hours MTBF.

I have some WDC Green 3TB drives somewhere, only maybe two years old; at least one has already been replaced.
Per Jessen wrote:
What is the significance of WDC Green vs RE4?
The latter are enterprise class drives, meant for full duty-cycle. (RE = RAID edition). 1.2 million hours MTBF.
I have some WDC Green 3TB drives somewhere, only maybe two years old, at least one was already replaced.
Also be aware of the timeout issue with WD drives when used in a RAID. If they are not enterprise class they have a nasty habit of detaching from the RAID, IIRC. WD broke the ATA spec to implement that gotcha, which is why I no longer buy WD products.
Dave Howorth wrote:
Also be aware of the timeout issue with WD drives when used in a RAID. If they are not enterprise class they have a nasty habit of detaching from the RAID, IIRC.
Yes, that is correct. I think it can be fixed though; Google can probably help with that.
On Tue, 2013-04-09 at 11:00 +0100, Dave Howorth wrote:
Also be aware of the timeout issue with WD drives when used in a RAID. If they are not enterprise class they have a nasty habit of detaching from the RAID, IIRC. WD broke the ATA spec to implement that gotcha, which is why I no longer buy WD products.
I know that, left to our own devices, we tend towards Seagate. Same with choosing NVIDIA over ATI, or Intel over 3Com or AMD. But the computer that landed here somehow chose just the components we tend not to choose:

CPU = AMD
Graphics = ATI
Disks = WD

So, part of my task is, I guess, getting to know the devil as we define him/her.
Roger Oberholtzer <roger@opq.se> wrote:
What is the significance of WDC Green vs RE4?
Green drives save electricity by parking the heads, but at least with the drives made a couple of years ago, they would do it in the middle of real work at times. Thus green drives typically have the "frequent head unload" issue described at:

https://ata.wiki.kernel.org/index.php/Known_issues

The storage-fixup package should resolve the issue by disabling the power saving feature (zypper in storage-fixup). If it doesn't, make sure your drives are listed in the storage-fixup config files as needing a tweak.

Greg
On Tue, 2013-04-09 at 09:09 -0400, Greg Freemyer wrote:
The storage-fixup package should resolve the issue by disabling the power saving feature. (zypper in storage-fixup).
If it doesn't, make sure your drives are listed in the storage-fixup config files as needing a tweak.
Message filed under important. Thanks for the pointer.
On Tue, 2013-04-09 at 09:09 -0400, Greg Freemyer wrote:
zypper in storage-fixup
I have added storage-fixup and see that my disks are not in /etc/storage-fixup.conf. I can of course add them. But I am unclear what the entry would be. I am guessing the options are either -B 254 or -B 255. Any suggestions?

The disk is: WDC WD20EARX-00P
On Tue, Apr 9, 2013 at 11:26 AM, Roger Oberholtzer <roger@opq.se> wrote:
On Tue, 2013-04-09 at 09:09 -0400, Greg Freemyer wrote:
zypper in storage-fixup
I have added storage-fixup and see that my disks are not in /etc/storage-fixup.conf. I can of course add them. But I am unclear what the entry would be. I am guessing the options are either -B 254 or -B 255. Any suggestions?
The disk is: WDC WD20EARX-00P
Man hdparm shows:

-B  Get/set Advanced Power Management feature, if the drive supports it. A low value means aggressive power management and a high value means better performance. Possible settings range from values 1 through 127 (which permit spin-down), and values 128 through 254 (which do not permit spin-down). The highest degree of power management is attained with a setting of 1, and the highest I/O performance with a setting of 254. A value of 255 tells hdparm to disable Advanced Power Management altogether on the drive (not all drives support disabling it, but most do).

I would just disable APM altogether with 255, but if you want the drive to spin down overnight, for example, you can try a value of 127 or below (per the ranges above) and see if you have any problems.

Greg
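Concretely (the device name is an example):

hdparm -B /dev/sdb        # query the current APM setting
hdparm -B 255 /dev/sdb    # disable APM altogether, if the drive allows it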
On Tue, 2013-04-09 at 09:09 -0400, Greg Freemyer wrote:
I see the following for one of the drives (without storage-fixup):

ata10: SATA link down (SStatus 0 SControl 310)
ata10.00: link offline, clearing class 1 to NONE
ata10: EH complete
ata10: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
ata10: irq_stat 0x80000040, connection status changed
ata10: SError: { CommWake DevExch }
ata10: limiting SATA link speed to 1.5 Gbps
ata10: hard resetting link

or

ata10: hard resetting link
ata10: link is slow to respond, please be patient (ready=0)
ata10: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
ata10.00: failed to IDENTIFY (I/O error, err_mask=0x100)
ata10: hard resetting link

Could this perhaps be what happens? The thing is, there is no device node for the drive, so I am not sure how hdparm will be able to run. Or maybe this is a bad disk.
Roger Oberholtzer <roger@opq.se> wrote:
Could this perhaps be what happens? The thing is, there is no device node for the drive, so I am not sure how hdparm will be able to run.
Or maybe this is a bad disk.
An easy way to see if your drives are misbehaving with head unloads is to check the SMART data. There is a field/counter for unloads. IIRC, the green drives are warranted for 15 million head unloads, which should be plenty. The trouble is, with this issue there can be as many as 100,000 per day, so in 6 months you've worn out the drive.

I don't recall it causing errors like you are seeing. Before suspecting a bad drive I always suspect a bad SATA cable. I've seen dozens (if not hundreds) of weird SATA communication issues resolved by replacing the cables.

Greg
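That counter is SMART attribute 193, so a quick check is something like (the device name is an example):

smartctl -A /dev/sdb | grep Load_Cycle_Count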
Greg Freemyer said the following on 04/10/2013 07:32 AM:
I don't recall it causing errors like you are seeing. Before suspecting a bad drive I always suspect a bad sata cable. I've seen dozens (if not hundreds) of weird sata comunications issues resolved by replacing the cables.
Very much so. I had a series of machines (a bulk purchase of desktops) where the cable would allow the DVD drive to play movies, burn DVDs, everything we tried EXCEPT install certain OSs from DVD. Live CDs would boot. The cable was a high-speed cable. Eventually we found a cable that "worked". We never did figure out this oddity.

Oh, and the one OS that would boot/install: KNOPPIX. For some reason KNOPPIX seems to work everywhere I've tried under the most wacky hardware and conditions.
Anton Aylward wrote:
For some reason KNOPPIX seems to work everywhere I've tried under the most wacky hardware and conditions.
Because Klaus Knopper puts so much effort into making it do that? It's the raison d'etre of his distro, after all.
On 2013-04-10 14:04 (GMT+0100) Dave Howorth composed:
Anton Aylward wrote:
For some reason KNOPPIX seems to work everywhere I've tried under the most wacky hardware and conditions.
Because Klaus Knopper puts so much effort into making it do that? It's the raison d'etre of his distro, after all.
+∞
On Tue, 2013-04-09 at 09:09 -0400, Greg Freemyer wrote:
The storage-fixup package should resolve the issue by disabling the power saving feature. (zypper in storage-fixup).
If it doesn't make sure your drives are listed in the storage-fixup config files as needing a tweak.
Seems this is a moot point, as I get this when I try -B 255 or -B 254:

APM_level = not supported
* Carlos E. R. <robin.listas@telefonica.net> [04-08-13 10:17]:
Wow! Are they enterprise class?
jfyi ( :^) )

Model Family: IBM Deskstar 25GP and 22GXP family
Device Model: IBM-DJNA-352030
Serial Number: GQ0GQFD8088
Firmware Version: J58OA30K

14:37 wahoo:~ # smartctl --all /dev/sdd | grep Power_On_Hours
9 Power_On_Hours 0x0012 086 086 000 Old_age Always 102836

drive has a few days on it :^)
Patrick Shanahan wrote:
jfyi ( :^) )
Model Family: IBM Deskstar 25GP and 22GXP family
Device Model: IBM-DJNA-352030
Serial Number: GQ0GQFD8088
Firmware Version: J58OA30K
14:37 wahoo:~ # smartctl --all /dev/sdd |grep Power_On_Hours 9 Power_On_Hours 0x0012 086 086 000 Old_age Always 102836
drive has a few days on it :^)
Almost 12 years. Either that number has to be read differently or that drive is way past its due date :-)
Roger Oberholtzer wrote:
On Mon, 2013-04-08 at 14:13 +0200, Carlos E. R. wrote:
smartctl
Thanks for the pointer to this. On one machine Power_On_Time are between 6000 and 8000 hours. So about a year of being powered on. The disk they will be replacing has been on 41263 hours, or 4.7 years.
I have opted for RAID level 1 + LVM, both done in software. I will play with this setup a bit and see how it feels.
Since you said that you have excess capacity (is that even theoretically possible? :) and that you wanted good compilation performance, have you considered RAID 10?
Doing the RAID in the BIOS was a bit tricky, as it seemed limited to 4 disks internally and 2 externally - unless I misunderstood the motherboard manual.
For RAID 1 or RAID 10, mdadm is as good as the hardware versions, AFAIK. For RAID 5 or RAID 6 it would load the CPU, but then those modes are scary with big disks.

Cheers, Dave
participants (12)
- Anton Aylward
- Carlos E. R.
- Dave Howorth
- Felix Miata
- Greg Freemyer
- Hans Witvliet
- James Knott
- Ken Schneider - openSUSE
- Michael Hamilton
- Patrick Shanahan
- Per Jessen
- Roger Oberholtzer