[opensuse] 13.1 - error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
The subject says it all. For example, rpm -qa | grep whatever works fine, but throws the error related to header #3822:

error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD

17:20 alchemy:~> rpm -q --querybynumber 3822
error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
<stuck in loop forever>

22:44 alchemy:~> rpm -qi --querybynumber 3823
Name        : kdemultimedia3-jukebox
Version     : 3.5.10.1
Release     : 49.7
Architecture: x86_64
Install Date: Tue 15 Sep 2015 01:06:40 AM CDT

22:44 alchemy:~> rpm -qi --querybynumber 3821
Name        : util-linux
Version     : 2.23.2
Release     : 31.1
Architecture: x86_64
Install Date: Sun 19 Jul 2015 01:31:21 PM CDT
Group       : System/Base
Size        : 3218862
License     : GPL-2.0+

Hmm, how to fix?

# rpmdb --rebuildb     (didn't work)

# rpm -qi --querybynumber 3822
error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
error: rpmdbNextIterator: skipping h# 3822 Header V3 RSA/SHA256 Signature, key ID 3dbdc284: BAD
<snip continual loop>

# rpm -Uvh --replacefiles --replacepkgs \
    /var/cache/zypp/packages/kde3/x86_64/kdemultimedia3-jukebox-3.5.10.1-49.7.x86_64.rpm

That didn't work, either...

# rpm -Uvh --oldpackage \
    /var/cache/zypp/packages/download.opensuse.org-update/x86_64/util-linux-2.23.2-16.1.x86_64.rpm
<snip>
# rpm -Uvh \
    /var/cache/zypp/packages/download.opensuse.org-update/x86_64/util-linux-2.23.2-31.1.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:util-linux-2.23.2-31.1           ################################# [ 50%]
setting /usr/bin/wall to root:tty 2755. (wrong permissions 0755)
setting /usr/bin/write to root:tty 2755. (wrong permissions 0755)
setting /usr/bin/eject to root:audio 4755. (wrong permissions 4750)
Cleaning up / removing...
   2:util-linux-2.23.2-16.1           ################################# [100%]

That didn't work, either...

# rpmdb --rebuilddb
error: rpmdbNextIterator: skipping h# 3822 Header SHA1 digest: BAD
Expected(183996560b9f1b18475f66d3d5c3d7f496f4112d) != (febd610bb8d39c1a315562f61c122c74f99faf2d)

Ahh... Fixed!

Question -- Why wasn't this problem solved with the first 'rpmdb --rebuilddb'?

--
David C. Rankin, J.D.,P.E.
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
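[Editor's aside: the "Header SHA1 digest: BAD ... Expected(x) != (y)" message comes from rpm recomputing the SHA-1 digest stored with each installed package header as it iterates the Packages database, and skipping any header whose digest no longer matches. A minimal conceptual sketch of that check, assuming a fake header payload; this is not rpm's actual on-disk layout or API:]

```python
import hashlib

def header_ok(header_bytes: bytes, expected_hex: str) -> bool:
    """Conceptual sketch: rpm stores a SHA-1 digest alongside each
    installed package header and recomputes it on read; a mismatch
    produces the 'Header SHA1 digest: BAD ... Expected(x) != (y)'
    error seen above, and the header is skipped."""
    return hashlib.sha1(header_bytes).hexdigest() == expected_hex

# A hypothetical intact header, and the same header with one corrupted byte:
header = b"name=util-linux;version=2.23.2"
stored = hashlib.sha1(header).hexdigest()
print(header_ok(header, stored))                 # intact -> header readable
print(header_ok(header[:-1] + b"\x00", stored))  # corrupt -> skipped as BAD
```

A corrupted header record therefore cannot be "repaired" by reinstalling the package it happens to describe; the rebuild has to drop or rewrite the bad record itself.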
David C. Rankin composed on 2015-09-22 23:10 (UTC-0500): ...
# rpmdb --rebuilddb
error: rpmdbNextIterator: skipping h# 3822 Header SHA1 digest: BAD
Expected(183996560b9f1b18475f66d3d5c3d7f496f4112d) != (febd610bb8d39c1a315562f61c122c74f99faf2d)
Ahh... Fixed!
Question -- Why wasn't this problem solved with the first 'rpmdb --rebuilddb'?
This isn't on a filesystem with a small blocksize, is it?

--
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
On 09/22/2015 11:23 PM, Felix Miata wrote:
# rpmdb --rebuilddb
error: rpmdbNextIterator: skipping h# 3822 Header SHA1 digest: BAD
Expected(183996560b9f1b18475f66d3d5c3d7f496f4112d) != (febd610bb8d39c1a315562f61c122c74f99faf2d)
Ahh... Fixed!
Question -- Why wasn't this problem solved with the first 'rpmdb --rebuilddb'?
This isn't on a filesystem with a small blocksize, is it?
Nope, this was a fresh install on one of the space-wasting 4k new-fangled 7200 rpm platters :-)
On Wed, Sep 23, 2015 at 2:55 PM, David C. Rankin <drankinatty@suddenlinkmail.com> wrote:
On 09/22/2015 11:23 PM, Felix Miata wrote:
# rpmdb --rebuilddb
error: rpmdbNextIterator: skipping h# 3822 Header SHA1 digest: BAD
Expected(183996560b9f1b18475f66d3d5c3d7f496f4112d) != (febd610bb8d39c1a315562f61c122c74f99faf2d)
Ahh... Fixed!
Question -- Why wasn't this problem solved with the first 'rpmdb --rebuilddb'?
This isn't on a filesystem with a small blocksize, is it?
Nope, this was a fresh install on one of the space-wasting 4k new-fangled 7200 rpm platters :-)
Uhh....Space Wasting?

Linux went to a 4KB page for almost all filesystems over a decade ago. So the decision to waste space was made long ago.

As far as I know, all 4K drives are 1TB or larger. Have you ever created a 1TB filesystem with 1KB pages? Can it even be done?

Greg
On 09/23/2015 02:38 PM, Greg Freemyer wrote:
Nope, this was a fresh install on one of the space-wasting 4k new-fangled
7200 rpm platters :-) Uhh....Space Wasting?
Linux went to a 4KB page for almost all filesystems over a decade ago.
So the decision to waste space was made long ago. As far as I know, all 4K drives are 1TB or larger. Have you ever created a 1TB filesystem with 1KB pages? Can it even be done?
Greg
All my 500G drives object ;-)
Greg Freemyer composed on 2015-09-23 15:38 (UTC-0400):
David C. Rankin wrote:
Felix Miata wrote:
This isn't on a filesystem with a small blocksize, is it?
Nope, this was a fresh install on one of the space-wasting 4k new-fangled 7200 rpm platters :-)
Uhh....Space Wasting?
On a TW/KDE installation, a search for *.png in the /usr/ tree produced 15,766 hits. A tiny sample, /usr/share/emoticons/Breeze, has 29,027 bytes in 35 files. Moving those 35 files off a filesystem with 1k blocksize frees 43,008 bytes (42 1k blocks), a wasted-space ratio of 32.5%. Moving them onto a filesystem with 4k blocksize consumes 36 4k blocks (147,456 bytes), a wasted-space ratio of 80.3%.

On the same installation, /usr/share/kf5/locale/countries has 241 directories and 655 bytes in 20 files. Each of those 241 directories contains two small files, about half of which are less than 512 bytes, and few of which are more than 1024 bytes. du -s says this group uses 743 blocks on a 1k-block filesystem, so 246,784 bytes. Moving them off the 1k-block filesystem freed 760,832 bytes; on a 4k-block filesystem they use 734 4k blocks, 3,006,464 bytes, a 3.95:1 ratio between consumption on the two filesystems.

IIRC, the last time I compared free space, rsyncing a root partition on a 4k-block 4.8G filesystem onto a 1k-block 4.8G filesystem saved somewhere in the neighborhood of 15% in the free-space difference between the two.

Based on these observations, it's easy to see that the potential loss of space using 4k instead of a smaller blocksize is a lot bigger than inconsequential. It's a seriously good thing storage densities continue to rise and unit cost continues downward, but they don't necessarily interplay nicely with smaller sizing paradigms WRT backup strategies, or size limitations of backup media.
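[Editor's aside: the arithmetic behind those ratios is simple block rounding; each file occupies whole blocks, minimum one. A quick sketch with hypothetical small-file sizes (not the actual Breeze icons) shows how waste grows with blocksize:]

```python
import math

def allocated(sizes, block_size):
    """Bytes consumed on disk when every file occupies whole blocks
    of block_size bytes (minimum one block per file)."""
    return sum(max(1, math.ceil(s / block_size)) * block_size for s in sizes)

# Hypothetical file sizes, in bytes (10,000 bytes of actual data):
sizes = [300, 700, 1500, 2500, 5000]
for bs in (1024, 4096):
    used = allocated(sizes, bs)
    print(f"{bs}B blocks: {used} bytes on disk, "
          f"{1 - sum(sizes) / used:.1%} wasted")
```

For these five files, 1k blocks consume 12,288 bytes (about 19% waste) while 4k blocks consume 24,576 bytes (about 59% waste), the same effect Felix measured on real directories.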
Linux went to a 4KB page for almost all filesystems over a decade ago.
That's where the defaults went. Not everyone uses defaults. I have far more filesystems using 1k blocks than those using larger. I use larger blocks only where A/V media and iso files go, if on 512b/s disks, of which I have far more than 4k/s disks. So, yes, space wasting!
On 2015-09-25 at 03:27 -0400, Felix Miata wrote:
A tiny sample, /usr/share/emoticons/Breeze, has 29,027 bytes in 35 files. Moving those 35 files off a filesystem with 1k blocksize frees 43,008 bytes, 42 1k blocks, a wasted space ratio of 32.5%. Moving them onto a filesystem with 4k blocksize consumes 36 4k blocks, 147,456 bytes, a wasted space ratio of 80.3%.
Not if you use reiserfs ;-)

If we are going to use filesystems with larger pages, then we should use (design) filesystems like reiserfs. Or filesystems with a mix of block sizes.

--
Cheers
Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 09/25/2015 07:55 AM, Carlos E. R. wrote:
Not if you use reiserfs ;-)
+10
If we are going to use filesystems with larger pages, then we should use (design) filesystems like reiserfs. Or filesystems with a mix of block sizes.
Damn right!

It's just a shame that the follow-up to ReiserFS is BtrFS.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
On 2015-09-26 at 12:39 -0400, Anton Aylward wrote:
On 09/25/2015 07:55 AM, Carlos E. R. wrote:
Not if you use reiserfs ;-)
+10
If we are going to use filesystems with larger pages, then we should use (design) filesystems like reiserfs. Or filesystems with a mix of block sizes.
Damn right!
It's just a shame that the follow-up to ReiserFS is BtrFS.
I don't know/remember if Btrfs does a similar trick to reiserfs, not needing a full disk block to store a small file.

Btrfs has another interesting feature that compensates somewhat: it is, to my knowledge, the only Linux filesystem that supports data compression. ext2/3/4 has had the flag forever, but it has never been implemented :-(

Unfortunately, I still do not trust btrfs.
On 09/26/2015 02:45 PM, Carlos E. R. wrote:
Unfortunately, I still do not trust btrfs.
There are a few architectural assumptions in BtrFS. Whether they were made consciously and explicitly, I don't know.

The idea of being able to spread a FS across multiple spindles in the various modes of striping & mirroring etc. is not unique to BtrFS, but there is a feeling of both "One Ring" and "Borg" to it. By comparison, I can have different LVs with different strategies using LVM. I'd probably be happier ignoring the features of BtrFS, implementing it as if it were just another FS, and layering it on top of LVM.

The "One Ring" aspect of BtrFS means that its efficiency and effectiveness come into play if you don't partition, but rather use subvolumes. It's still the one file system; the subvolumes are just there to help with management. They are not really partitions. This runs counter to previously established *NIX practices where partitions can be used, for example, to enforce security measures. /tmp can, for example, be mounted sticky, noexec, nosuid, nodev. Preventing hard links across file systems from /sbin to /tmp or /home is another simple, well-established precaution. These may not matter in a home setting, but then again BtrFS is being pitched at large production systems; it is the default in SLES12, for example.

To my mind the issue isn't so much potential bugs in BtrFS; we can expect those in any 'in development' product, and BtrFS is walking the same path that KDE 4.0 did :-( And of course, by the very nature of the way it has been specified, it is an ambitious, large and complex piece of software.

I've been running BtrFS as my ROOTFS for almost a year with negligible problems, but then again I haven't stressed it or exercised its capabilities.
Is there a way to query an already existing file system to see what size blocks it is using?

For example, reiserFS can be fsck'd with a given blocksize:

-b | --block-size N
N is the block size in bytes. It may only be set to a power of 2 within the 512-8192 interval.

No mention of a default. What about other file systems?
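[Editor's aside, a sketch not from the original posts: for most filesystems, GNU `stat -f` answers the filesystem-blocksize half of this question without per-fs tools, and sysfs answers the device-sector half:]

```shell
#!/bin/sh
# Filesystem block size, via statfs(2) -- works for ext*, reiserfs, xfs, ...
stat -f -c 'filesystem block size: %S bytes' /

# Device logical/physical sector sizes from sysfs (no root required):
for q in /sys/block/*/queue; do
    [ -r "$q/logical_block_size" ] || continue
    printf '%s: logical=%s physical=%s\n' \
        "$(basename "$(dirname "$q")")" \
        "$(cat "$q/logical_block_size")" \
        "$(cat "$q/physical_block_size")"
done
```

`%S` is stat's "fundamental block size (for block counts)"; the sysfs loop reports the same values `lsblk -t` shows as LOG-SEC and PHY-SEC.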
Anton Aylward composed on 2015-09-26 15:41 (UTC-0400):
Is there a way to query a already existing file system to see what size blocks it is using? ... What about other file system?
tune2fs -l WFM.
On 09/26/2015 03:46 PM, Felix Miata wrote:
Anton Aylward composed on 2015-09-26 15:41 (UTC-0400):
Is there a way to query a already existing file system to see what size blocks it is using? ... What about other file system?
tune2fs -l WFM.
Thank you. But I have a ... confusing output:

# /usr/sbin/tune2fs -l /dev/disk/by-label/TMP | grep "Block size"
Block size:               4096

# lsblk -f -t /dev/disk/by-label/TMP
NAME        FSTYPE LABEL UUID MOUNTPOINT
vgmain-vTMP                   /tmp
NAME        ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
vgmain-vTMP         0    512      0     512     512    1         128  128    0B

That is, PHY-SEC = 512. But this is a late-model 1T drive. Is there some way I can verify 4K sectors, or conversely that it has 512b sectors?
Anton Aylward composed on 2015-09-26 16:23 (UTC-0400):
tune2fs -l WFM.
Thank you. But I have a ... confusing output:
# /usr/sbin/tune2fs -l /dev/disk/by-label/TMP | grep "Block size"
Block size:               4096
Filesystem blocksize, not sector size.
# lsblk -f -t /dev/disk/by-label/TMP
NAME        FSTYPE LABEL UUID MOUNTPOINT
vgmain-vTMP                   /tmp
NAME        ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
vgmain-vTMP         0    512      0     512     512    1         128  128    0B
That is PHY-SEC = 512
But this is a late model 1T drive. Is there some way I can verify 4K sectors?
or conversely that it has 512b sectors?
hdparm -I WFM.
On 09/26/2015 04:56 PM, Felix Miata wrote:
Anton Aylward composed on 2015-09-26 16:23 (UTC-0400):
tune2fs -l WFM.
Thank you. But I have a ... confusing output:
# /usr/sbin/tune2fs -l /dev/disk/by-label/TMP | grep "Block size"
Block size:               4096
Filesystem blocksize, not sector size.
# lsblk -f -t /dev/disk/by-label/TMP
NAME        FSTYPE LABEL UUID MOUNTPOINT
vgmain-vTMP                   /tmp
NAME        ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
vgmain-vTMP         0    512      0     512     512    1         128  128    0B
That is PHY-SEC = 512
But this is a late model 1T drive. Is there some way I can verify 4K sectors?
or conversely that it has 512b sectors?
hdparm -I WFM.
Disappointing :-(

========================================
sudo /usr/sbin/hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
        Model Number:       WDC WD10EALS-00Z8A0
        Serial Number:      WD-WCATR4740139
        Firmware Revision:  05.01D05
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
Standards:
        Supported: 8 7 6 5
        Likely used: 8
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors: 1953525168
        Logical/Physical Sector size:           512 bytes
        device size with M = 1024*1024:      953869 MBytes
        device size with M = 1000*1000:     1000204 MBytes (1000 GB)
        cache/buffer size  = unknown
=======================================
Anton Aylward composed on 2015-09-26 19:36 (UTC-0400):
Felix Miata wrote:
Anton Aylward composed on 2015-09-26 16:23 (UTC-0400):
But this is a late model 1T drive. Is there some way I can verify 4K sectors?
or conversely that it has 512b sectors?
hdparm -I WFM.
Disappointing :-(
How, with your purchase?
========================================
sudo /usr/sbin/hdparm -I /dev/sda
/dev/sda:
... --
CHS current addressable sectors:   16514064
LBA    user addressable sectors:  268435455
LBA48  user addressable sectors: 1953525168
Logical/Physical Sector size:           512 bytes
device size with M = 1024*1024:      953869 MBytes
# hdparm -I

/dev/sdb:

ATA device, with non-removable media
        Model Number:       ST2000DM001-1ER164
...
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors: 3907029168
        Logical  Sector size:                   512 bytes
        Physical Sector size:                  4096 bytes
        Logical Sector-0 offset:                  0 bytes
        device size with M = 1024*1024:     1907729 MBytes
On 09/26/2015 07:46 PM, Felix Miata wrote:
Anton Aylward composed on 2015-09-26 19:36 (UTC-0400):
Disappointing :-(
How, with your purchase?
I suppose so. I was under the impression that 1T and over were all 4K.
Anton Aylward composed on 2015-09-26 19:54 (UTC-0400):
Felix Miata wrote:
Anton Aylward composed on 2015-09-26 19:36 (UTC-0400):
Disappointing :-(
How, with your purchase?
I suppose so. I was under the impression that 1T and over were all 4K.
What is the date of manufacture on WD10EALS-00Z8A0 WD-WCATR4740139? 4k didn't get fully in gear until early 2011.
On 09/26/2015 08:14 PM, Felix Miata wrote:
Anton Aylward composed on 2015-09-26 19:54 (UTC-0400):
Felix Miata wrote:
Anton Aylward composed on 2015-09-26 19:36 (UTC-0400):
Disappointing :-(
How, with your purchase?
I suppose so. I was under the impression that 1T and over were all 4K.
What is date of manufacture on WD10EALS-00Z8A0 WD-WCATR4740139? 4k didn't get fully in gear until early 2011.
I'd have to open the box and take it out of the carrier. It's a Dell chassis. HMMMMMM.

The drive is about 18 months old; that is, I bought it about 18 months ago. How long it had been on the store shelf, in transit, ... anyone's guess.

This was actually a replacement. The first one I bought from there failed. They grumbled until I brought them 6 pages of 5 columns of bad-block listing and told them there were another 50+ pages to go ... And yes, I've done this with USB sticks and SD cards. The issue isn't to give the vendor grief; it's that I want reliable media for the FS. I /expect/ drives to have some bad sectors, but I expect the firmware to handle it.
Anton Aylward composed on 2015-09-26 20:40 (UTC-0400):
Felix Miata wrote:
What is date of manufacture on WD10EALS-00Z8A0 WD-WCATR4740139? 4k didn't get fully in gear until early 2011.
I'd have to open the box and take it out of the carrier. Its a Dell chassis. HMMMMMM.
The drive is about 18 months old - that is I bought it about about 18 months ago. How long it had been on the store shelf, in transit, ... anyone's guess.
Here's mine: designed, if not manufactured, somewhere in the vicinity of 5.5 years ago at least. Here's why:
http://community.wdc.com/t5/Desktop-Mobile-Drives/WD-Blue-1TB-wd10eals/td-p/...

My (recert) WD10EADS-00M2B0, with 283 power-on hours and Logical/Physical Sector size 512 bytes, has a date stamp of 2012-05-01. Maybe that speaks to yours being newer?
On Sat, Sep 26, 2015 at 7:54 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 09/26/2015 07:46 PM, Felix Miata wrote:
Anton Aylward composed on 2015-09-26 19:36 (UTC-0400):
Disappointing :-(
How, with your purchase?
I suppose so. I was under the impression that 1T and over were all 4K.
Absolutely not. For the first couple of years of 4KB, WD was the only manufacturer using it as far as I know.

I also think some of the drives lie about having 512-byte physical sectors even though they are really 4KB. I'm not sure how you find those.

Greg
On 2015-09-26 at 20:18 -0400, Greg Freemyer wrote:
I also think some of the drives lie about having 512-byte physical sectors even though they are really 4KB. I'm not sure how you find those.
Easy. In hdparm output (copied from what Felix pasted here):

Logical  Sector size:                   512 bytes
Physical Sector size:                  4096 bytes
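[Editor's aside: the two hdparm output styles shown in this thread (the older combined "Logical/Physical" line, and the newer split lines) can be told apart mechanically. A sketch, using the text quoted above as sample data:]

```python
import re

def sector_sizes(hdparm_text):
    """Return (logical, physical) sector sizes in bytes from
    `hdparm -I` output.  Handles both the combined
    'Logical/Physical Sector size' line and the split two-line form."""
    combined = re.search(r"Logical/Physical Sector size:\s*(\d+)", hdparm_text)
    if combined:
        n = int(combined.group(1))
        return n, n
    logical = int(re.search(r"Logical\s+Sector size:\s*(\d+)", hdparm_text).group(1))
    physical = int(re.search(r"Physical Sector size:\s*(\d+)", hdparm_text).group(1))
    return logical, physical

old_style = "Logical/Physical Sector size:           512 bytes"
new_style = ("Logical  Sector size:                   512 bytes\n"
             "Physical Sector size:                  4096 bytes")
print(sector_sizes(old_style))  # (512, 512)  -> native 512-byte sectors
print(sector_sizes(new_style))  # (512, 4096) -> 512e Advanced Format drive
```

As Greg notes below, a drive that misreports its physical sector size defeats any such parsing; the output is only as honest as the firmware.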
On Sat, Sep 26, 2015 at 9:43 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2015-09-26 at 20:18 -0400, Greg Freemyer wrote:
I also think some of the drives lie about having 512-byte physical sectors even though they are really 4KB. I'm not sure how you find those.
Easy. In hdparm output (copied from what felix pasted here):
Logical Sector size: 512 bytes Physical Sector size: 4096 bytes
When I said they lied about the physical sector size, I meant I believe there are drives where hdparm will report 512/512 even though it is not true.

Greg
Carlos E. R. wrote:
On 2015-09-26 at 20:18 -0400, Greg Freemyer wrote:
I also think some of the drives lie about having 512-byte physical sectors even though they are really 4KB. I'm not sure how you find those.
Easy. In hdparm output (copied from what felix pasted here):
Logical Sector size: 512 bytes Physical Sector size: 4096 bytes
Or using /sys:

--cut from below to 'sector_sizes'
#!/bin/bash -u
if ((!$#)); then
    set $(cd /sys/block; echo sd*)
fi
printf "%8s %8s %8s\n" "device" "logical" "physical"
for dev in "$@"; do
    read lbs </sys/block/$dev/queue/logical_block_size
    read pbs </sys/block/$dev/queue/physical_block_size
    printf "%8s %8s %8s\n" "$dev" "$lbs" "$pbs"
done
---end above this line

Make it executable:
/tmp/sector_sizes
  device  logical physical
     sda      512     4096
     sdb      512     4096
     sdc      512      512
     sdd      512      512
sda = 4TB SATAs in RAID
sdb = 2TB SATAs in RAID
sdc = 68GB 15K-SAS in RAID
sdd = 250GB SSDs in RAID
On Sat, 26 Sep 2015, Anton Aylward wrote:
On 09/26/2015 02:45 PM, Carlos E. R. wrote:
Unfortunately, I still do not trust btrfs.
There are a few architectural assumptions in BtrFS. Whether they were made consciously and explicitly, I don't know.
The idea of being able to support, spread a FS across multiple spindles in te various modes of striping & mirroring etc is not unique to btrFS, but there is a feeling of both "One Ring" and "Borg" to it.
By comparison I can have different LVs with different strategies using LVM.
I'd probably be happier ignoring the features of BtrFS, implement g it as if it were just another FS, and layering it on top of LVM.
I must say I feel the same way about it as you do, and I know much less about it than you do.

BtrFS includes all manner of features in the filesystem layer that should logically belong to the partition manager, such as LVM. That is why I prefer not to use BtrFS: I'm afraid that if tools are going to center around supporting BtrFS, you will get a form of lock-in, the way Apple is trying to get with their iPad ebooks. Apple is selling or providing eBook-authoring software in which the premium content can only be produced for Apple iPads. So if the market leverage starts to extend into +50% numbers of both authors and consumers using the premium content, that will shift the remaining authors into feeling that they too have to use this premium software or be left out, which will cause all content to only have a premium version for the Apple platform in due time. Which is Apple's goal, more or less. I don't think they are succeeding, because I haven't heard about it for a long time, but the same could happen with BtrFS: if many tools start to feature or support it exclusively, you get "premium" functionality that will only work if you use BtrFS.

I think LVM can actually do most or all of what BtrFS can do at the filesystem level. Not sure. But it seems that way. And I think it is more elegant and less complicated.
On 09/26/2015 08:40 PM, Xen wrote:
I must say I feel the same way about it as you do, and I know much less about it than you do.
BtrFS includes all manner of features into the filesystem layer that should logically belong to the partition manager such as LVM. That is why I prefer to not use BtrFS because I'm afraid that if tools are going to center around supporting BtrFS you will get a form of lock-in the way Apple is trying to get with their iPad ebooks.
What we need is for ReiserFS to be "supported" and Reiser4 development to take off.

Maybe it was Reiser himself, maybe it was a team; Russia made good chess players and has seen some mathematical greats. But ReiserFS proved a quick, well-thought-out development. OK, so it wasn't/isn't totally bug free. But it completed quickly compared to BtrFS, and it's proven remarkably reliable.

If you've read Brooks, you will be familiar with what he termed "The Second System Effect". This abstract puts it very well:
http://www.the-wabe.com/notebook/second-system.html
The adjectives (and hyperlinks) here are well chosen:
http://www.catb.org/jargon/html/S/second-system-effect.html
I particularly like "feature-laden monstrosity". That seems a good description of BtrFS.

If it's an SSD file system you're looking for, then there are later, better-thought-out attempts that are /specific/ to SSD rather than a generalized FS with features for SSD. I've tried NILFS with some success.

Maybe Reiser4 is a "second system effect" thing. I don't know.
On Sat, 26 Sep 2015, Anton Aylward wrote:
OK, so it wasn't/isn't totally bug free. But it completed quickly compared to BtrFS. It's proven remarkably reliable.
If you've read brooks, you'll will be familiar with what he termed "The Second System Effect".
I only read an overview of that book recently. Pretty old book.
This abstract puts it very well:
http://www.the-wabe.com/notebook/second-system.html
The adjectives and hyperlinks) here are well chosen. http://www.catb.org/jargon/html/S/second-system-effect.html I particularly like "feature-laden monstrosity". That seems a good description of BtrFS.
I'm not sure; I don't know enough about systems or system design (from experience, I guess) to be able to tell if this is really true.

If you hold back at first, you will do too much afterward; this is pretty much a truth. People always end up at the other extreme first, instead of ending up in the middle. First they are too frugal. Then, inevitably, they spend too much. Only after a while do they find the middle. Call it the pendulum effect. For example, if people first refuse to help you, afterward they start being way too helpful. And then they complain about you complaining about that. It takes a while for them to act normal (or natural).

We can easily see that BtrFS is at the "way too much" extreme, though.
On 09/27/2015 06:16 AM, Xen wrote:
On Sat, 26 Sep 2015, Anton Aylward wrote:
OK, so it wasn't/isn't totally bug free. But it completed quickly compared to BtrFS. It's proven remarkably reliable.
If you've read brooks, you'll will be familiar with what he termed "The Second System Effect".
I only read an overview of that book recently. Pretty old book.
You (and I don't mean you alone; I mean that in the generic, the collective, the French sense of "vous", the "you plural", dear readers) would profit greatly from reading the book. Slowly. And again. And again next year. Make it an annual event.

I have both the original and the Anniversary edition with its four additional chapters. Both are heavily annotated.
This abstract puts it very well:
http://www.the-wabe.com/notebook/second-system.html
The adjectives and hyperlinks) here are well chosen. http://www.catb.org/jargon/html/S/second-system-effect.html I particularly like "feature-laden monstrosity". That seems a good description of BtrFS.
On 09/26/2015 08:40 PM, Xen wrote:
I think LVM can actually do most or all of what BtrFS can do at the filesystem level. Not sure. But it seems that way. And I think it is more elegant and less complicated.
LVM is a storage manager. You can think of it as a substitute for partitioning. If you partition with the conventional tools like fdisk and family, then the partitions are 'hard': they exist at the BIOS level. While there are kludges like gparted, the concept is that you manage the hard partitions outside of the OS.

Granted, most of us do use fdisk to partition, perhaps as BOOT, SWAP and, now that grub/grub2 can handle it, put everything else in an LVM logical partition, which is 'software defined'. Oh, right, it's a trend; it's a sort of virtualization of disk space, like we have virtual networks. Many people feel it's humbug and I can't blame them; it's certainly 'deferred design'. But LVM is NOT, repeat NOT, a file system.

Contrariwise, BtrFS *is* a storage manager, a complete storage manager, all the way down from file to device layout management. You *can* put it on a disk with no partitioning, and it manages the space and delivers the file system view. It reminds me of the old (but still in use) IBM CICS, where everything was within CICS: teleprocessing (what we think of as telnet, ssh), data transfer (what we think of as UUCP or FTP), database management and more.

With LVM you can choose what file system goes into the logical partitions. There is no such analogue for BtrFS. One <strike>Ring</strike> filesystem to rule them all. And it's assimilating like the Borg.
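For readers who haven't used it, a minimal sketch of the LVM workflow described above (device names are hypothetical; all of this needs root):

```shell
# Mark a disk or partition as an LVM physical volume.
pvcreate /dev/sdb1
# Group one or more physical volumes into a volume group.
vgcreate vg0 /dev/sdb1
# Carve a 'software defined' logical volume out of the group.
lvcreate -L 20G -n home vg0
# Unlike BtrFS, LVM leaves the choice of filesystem entirely to you.
mkfs.ext4 /dev/vg0/home
```

The last step is the point of the "LVM is NOT a file system" remark: LVM only hands you a block device.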
On Sat, 26 Sep 2015, Anton Aylward wrote:
LVM is a storage manager. You can think of it as a substitute for partitioning. If you partition with the conventional tools like fdisk and family then the partitions are 'hard', they are at the BIOS level. While there are kludges like gparted, the concept is that you manage the hard partitions outside of the OS,
Granted that most of us do use fdisk to partition, perhaps as BOOT, SWAP and now since grub/grub2 can handle it, put everything else in a LVM logical partition, which is 'software defined'.
Okay, but it is a bit the same as software raid (md...). And likewise it has great advantages, if also detriments. Your raid system is inherently less secure because you need to know all the tools by heart. At the same time you can mix raid levels at will; you can't do that with just hardware raid. (And then many hardware raid solutions depend on a Windows driver, making them software raid regardless X-(.)
Oh, right, its a trend; its sort of a virtualization of disk space, like we have virtual networks . Many people feel its humbug and I can't blame them; its certainly 'deferred design'.
You have a point there. But it is the same as with RAID: the system is more vulnerable. Like making snapshots just for backups... it is not my cup of tea, really. Why am I using it? It feels like a sacrifice. I am sacrificing safety because now I am using a storage manager (partition manager) to create partitions that hardly exist for more than 30 minutes.

It can be mitigated a bit with a sufficient level of "protection" against erroneous commands. Just wrap your common tasks into scripts or functions that do just that and nothing else, so you can't make any mistakes. The more something can be done with, e.g., BIOS "MENU" tools, the better off you are: failsafe, easy user interface, not needing any workable system. But it's not like I really see an alternative at this point.
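One way that "wrap your common tasks into scripts" idea could look (a sketch only; the volume group, LV and snapshot names are hypothetical):

```shell
#!/bin/sh
# snap-root: snapshot one specific, hard-coded logical volume
# and nothing else, so a typo can't hit the wrong LV.
set -eu
VG=vg0
LV=root
SNAP=root-presnap
# Create a 2G copy-on-write snapshot of /dev/vg0/root.
lvcreate --snapshot --size 2G --name "$SNAP" "/dev/$VG/$LV"
echo "snapshot /dev/$VG/$SNAP created"
```

Since the script takes no arguments, there is nothing to mistype when you run it.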
But LVM is NOT, repeat NOT, a file system.
Aye aye.
Contrariwise BtrFS *is* a storage manager, a complete storage manager. All the way down from file to device layout management. You *can* put it on a disk with no partitioning and it manages the space and delivers the file system view. It reminds me of the old (but still in use) IBM CICS, where everything was within CICS, teleprocessing (what we think of as telnet, ssh), data transfer (what we think of as UUCP or FTP) database management and more.
I read that yes. Seems rather very very scary ;-). To me at least.
With LVM you can choose what file system goes into the logical partitions. There is no such analogue for BtrFS. One <strike>Ring</strike> filesystem to rule them all. And its assimilating like the Borg.
I was on the Kubuntu forums the other day fighting the venerable Steve Riley about this. He works for Amazon. Cloud. I couldn't get any of this through to him. He kept insisting that BtrFS would never put me in a position where I would be disenfranchised, because I would end up in a world where BtrFS would be the all and everything. Basically he just didn't see that as a problem.
On 2015-09-27 at 12:36 +0200, Xen wrote:
hardware raid. (And then many hardware raid solutions depend on a Windows driver, making them software raid regardless X-(.
Those are not hardware raid at all. They are called "fake raid" for a reason. A hardware raid is transparent and needs no driver whatsoever. -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 09/27/2015 08:05 AM, Carlos E. R. wrote:
On 2015-09-27 at 12:36 +0200, Xen wrote:
hardware raid. (And then many hardware raid solutions depend on a Windows driver, making them software raid regardless X-(.
Those are not hardware raid at all.
They are called "fake raid" for a reason.
A hardware raid is transparent and needs no driver whatsoever.
Right. As in it just looks like a SCSI or SATA disk to the box. I recall working on an IBM AIX system, an SP3 rack with a set of multiprocessors. There was a (single) (+backup) fibre feed to the other side of the wall. That was a room with 30 linear feet of 5-foot-high cabinets containing racks, each with 5 drives (I forget their size). Each rack represented a hardware RAID array. Or perhaps each cabinet. Or perhaps the whole room? But as far as the machine at whose console I sat was concerned, it was one logical drive.
"Carlos E. R." <robin.listas@telefonica.net> wrote:
Those are not hardware raid at all.
They are called "fake raid" for a reason.
A hardware raid is transparent and needs no driver whatsoever.
Hey, I didn't know that before I bought it. Well, I must say I primarily bought it to have 2 extra eSATA ports at the back, so it doesn't matter in that way.

Here is a ...cute thing. I defined a raid array in the thing (in its BIOS) but obviously that didn't do anything in Linux. There was no driver for it (what's the point, really). After that, I just ignored it and installed Linux/software raid. Then I tried to remove the 'array' again from the RAID card's BIOS. And I couldn't, because it told me it would wipe my partition table if I tried. Great. Now the system still has that "fake array" that doesn't do anything, but luckily you can disable the BIOS of the thing completely, I believe. I also cannot remove the array unless both disks are attached. So the only way to remove the array from the card is to connect two empty disks, or make sure I have a backup of the partitions and hope that doesn't fail completely ;-). Really sweet. Fake raid.

But of course, that was my point, buster: the card is hardware and it has a 'hardware BIOS'; it's just that the RAID itself is not hardware. That's why I called it that, and you knew perfectly well what I meant. cheers, B.
On 2015-09-27 at 15:46 +0200, Xen wrote:
"Carlos E. R." <> wrote:
Those are not hardware raid at all.
They are called "fake raid" for a reason.
A hardware raid is transparent and needs no driver whatsoever.
Here is a ...cute thing. I defined a raid array in the thing (in its BIOS) but obviously that didn't do anything in Linux. There was no driver for it (what's the point, really).
Well, there is. For some, at least. Linux is capable of using such fake raids, but as I have always shied away from them, I can't tell you how.
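[For the curious: on Linux these BIOS/fake-raid formats are usually handled via device-mapper or md, roughly like this; exact support depends on the chipset, so take it as a sketch:]

```shell
# Scan attached disks for known fake-raid (BIOS RAID) metadata.
dmraid -r
# Activate any detected arrays as /dev/mapper devices.
dmraid -ay
# Some formats (e.g. Intel IMSM) can instead be assembled by mdadm:
mdadm --examine --scan
```

If the metadata format isn't recognized by either tool, the OS just sees the individual disks.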
Really sweet. Fake raid. But of course, that was my point, buster, but the card is hardware and it has a 'hardware BIOS' it's just that the RAID itself is not hardware. That's why I called it that, and you knew perfectly what I meant.
Well, you see, we call them fake raid because the processing runs in the CPU. The heavy work is done by the main CPU, /wasting/ cycles that could be used for something else. In software. On a (real) hardware raid, the entire processing is done by its chipset, in hardware. It is not simply "a driver" :-)
On Sun, 27 Sep 2015, Carlos E. R. wrote:
Really sweet. Fake raid. But of course, that was my point, buster, but the card is hardware and it has a 'hardware BIOS' it's just that the RAID itself is not hardware. That's why I called it that, and you knew perfectly what I meant.
Well, you see, we call them fake raid because the processing runs in the CPU. The heavy work is done by the main CPU, /wasting/ cycles, that could be used for something else. In software. On a (real) hardware raid, the entire processing is done by its chipset, in hardware.
It is not simply "a driver" :-)
That is pretty much irrelevant. Software Linux RAID does the same and I don't think there is really a great performance penalty. The issue with these cards is (a) that they require a driver for the OS to even see the RAID as a RAID, and (b) that they require a driver to do pretty much anything. The advantage is a BIOS screen, but unfortunately that doesn't mean much (unless you can still use it to rebuild arrays, but even then)... This dependency on a driver (that may be faulty, or whatever) might even mean you cannot set up the raid before the OS is installed, which is rather troublesome. So really, it is the dependency issue, and the fact that the drives do not appear to the OS as a single logical thing, that is the problem. kk
On 09/27/2015 11:44 AM, Xen wrote:
It is not simply "a driver" :-)
That is pretty much irrelevant. Software Linux RAID does the same and I don't think there is really a great performance penalty. The issue is with these cards that (a) it requires a driver for the OS to even see the RAID as a RAID and (b) that it requires a driver to pretty much do anything.
There gets to be a point where you go off on a misunderstanding. The whole point of hardware RAID is that the OS doesn't see it as RAID. It looks like just one very large, very reliable disk. As I mentioned earlier, I've used the IBM configuration and I've also used RAID "boxes" from other vendors. In all cases it looks like a single BIG disk, attached via SCSI or Fibre, depending on the equipment in use. There may be, external to the computer *using* the RAID array, a management system, a management console or port. But unless you take extraordinary measures, the main machine does not see that. Your statement that it requires a driver to see the RAID as RAID is irrelevant when it comes to hardware RAID. That's the whole point: the system is *NOT* supposed to be managing it! As for "requires a driver to pretty much do anything", well, of course you'll need the SCSI or the SATA or the fibre driver. Big deal.
On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 11:44 AM, Xen wrote:
It is not simply "a driver" :-)
That is pretty much irrelevant. Software Linux RAID does the same and I don't think there is really a great performance penalty. The issue is with these cards that (a) it requires a driver for the OS to even see the RAID as a RAID and (b) that it requires a driver to pretty much do anything.
There get to be a point where you go off on a misunderstanding.
No, once again you misunderstand ME. I mean exactly what you mean. "Seeing the RAID as a RAID" means seeing it as a transparent logical thing. Or not seeing it at all, but seeing one logical disk. That is the benefit of a hardware RAID card (or any raid card, as it should be) -- not really the CPU processing benefit. What I meant was that the "fake cards" require a driver to see the array of disks as one logical thing; without the driver the OS will just see independent logical disks, or physical disks. So "not seeing the RAID as a RAID" means seeing individual disks without array organisation.
You statement that it require a driver to see the RAID as RAID is irrelevant when it comes to hardware RAID. That's the whole point - the system is *NOT* supposed to be managing it!
Duh. That was my point exactly. Forgot your morning coffee? :P.
On 2015-09-28 at 12:48 +0200, Xen wrote:
On Sun, 27 Sep 2015, Anton Aylward wrote:
What I meant was that the "fake cards" require a driver to see the array of disks as one logical thing, without the driver the OS will just see it as independent logical disks, or physical disks.
So "not seeing the RAID as a RAID" means seeing individual disks without array organisation.
Not exactly. A driver is some software to interface with certain hardware that does something. Like telling the video driver: paint me a square of this size, color, and position; the hardware then goes and does it, on its own, once the driver writes the directions. The fake raid hardware does nothing. It is just the same as the separate disks with their interfaces. The CPU does it all, in code. Not in a driver. Don't confuse this with what Windows calls a "driver" :-)
On Mon, 28 Sep 2015, Carlos E. R. wrote:
On 2015-09-28 at 12:48 +0200, Xen wrote:
Not exactly.
A driver is some software to interface with a certain hardware that does something. Like telling the video driver: paint me a square of this size, color, and position, and the hardware then goes and does it, on its own, once the driver writes the directions.
The fake raid hardware does nothing. It is just the same as the separate disks with their interfaces. The CPU does it all, in code. Not driver.
Don't confuse what Windows calls "driver" :-)
I'm not sure why you keep making this distinction, as if to drive home a point? I have already accepted that the hardware does no processing, nor does it offer a different model of its connected hard drives to the OS. However, what we call the driver reads the "RAID configuration" you have made in the card's BIOS and then supplies a unified 'RAID' model of these drives (which the card doesn't do by itself) to the OS. Also, "The CPU does it all, in code. Not driver" is a pretty meaningless statement, since the CPU is executing driver code. What else do you think it is executing? A CPU is not a software entity. I think you can stop this now. It is the driver that is providing raid capability to the OS. The same as in Linux, pretty much, with dmraid. Regards, B.
On 09/28/2015 06:48 AM, Xen wrote:
What I meant was that the "fake cards" require a driver to see the array of disks as one logical thing, without the driver the OS will just see it as independent logical disks, or physical disks.
You should have said that at the beginning. Many of us have experience with "real" hardware RAID! Not these kludges.
On Mon, 28 Sep 2015, Anton Aylward wrote:
You should have said that at the beginning. Many of us have experience with "real" hardware RAID! Not these kludges.
Still doesn't mean you have to assume that I might have it wrong, or that I will probably or even most certainly have it wrong. :-/. *shrugs*
On 09/28/2015 12:32 PM, Xen wrote:
On Mon, 28 Sep 2015, Anton Aylward wrote:
You should have said that at the beginning. Many of us have experience with "real" hardware RAID! Not these kludges.
Still doesn't mean you have to assume that I might have it wrong or that I will probably or even most certainly have it wrong.
It has been demonstrated here on this list many times, and in real life many times, even to the extent of causing strife and wars, that if you do not say what you mean adequately and unambiguously, you can be misinterpreted.
On Mon, 28 Sep 2015, Anton Aylward wrote:
On 09/28/2015 12:32 PM, Xen wrote:
Still doesn't mean you have to assume that I might have it wrong or that I will probably or even most certainly have it wrong.
It has been demonstrated here on this list many times, in real life many times even to the extent of causing strife and wars, that if you do not adequately say what you mean unambiguously you can be be misinterpreted.
Sure, and I'm not saying that I have never done the same, but it is still a case of assuming the worst (usually to make a point about another person's... let's say inadequacy ;-)). I mean, I'm probably the worst person in the world for doing that myself ;-). I mean, blame me. Shoot me. Expose me. But I'm just saying that you don't have to do the same. You're better than me :P. Regards, and "love you" :P (I can't spend attention on it now; some guy is talking to me full time and I can't shut him up as I am writing emails :P). Regards.
On 2015-09-27 at 17:44 +0200, Xen wrote:
On Sun, 27 Sep 2015, Carlos E. R. wrote:
Really sweet. Fake raid. But of course, that was my point, buster, but the card is hardware and it has a 'hardware BIOS' it's just that the RAID itself is not hardware. That's why I called it that, and you knew perfectly what I meant.
Well, you see, we call them fake raid because the processing runs in the CPU. The heavy work is done by the main CPU, /wasting/ cycles, that could be used for something else. In software. On a (real) hardware raid, the entire processing is done by its chipset, in hardware.
It is not simply "a driver" :-)
That is pretty much irrelevant. Software Linux RAID does the same and I don't think there is really a great performance penalty.
There is a large penalty compared with (real) hardware raid.
The issue is with these cards that (a) it requires a driver for the OS to even see the RAID as a RAID and (b) that it requires a driver to pretty much do anything.
What you need is not a driver; there is almost no such thing in Linux. You just need the kernel guys to have implemented support for your particular brand of fake raid in the kernel. If it is there, chances are your /raid/ will just work out of the box. But then, I have always refused to use fake raid, so I would not know (nor care) about the details. The advantage, its use case, is that on dual boot machines the Windows side may already be using it, and thus you need the Linux side to support it too. However, there is no possibility of Windows being able to use a Linux software raid.
On Mon, 28 Sep 2015, Carlos E. R. wrote:
It is not simply "a driver" :-)
That is pretty much irrelevant. Software Linux RAID does the same and I don't think there is really a great performance penalty.
There is a large penalty compared with (real) hardware raid.
Quantify it then. And explain why it is relevant. Even mainstream NAS devices with dozens of disks (I believe) use software RAID.
The issue is with these cards that (a) it requires a driver for the OS to even see the RAID as a RAID and (b) that it requires a driver to pretty much do anything.
What you need is not a driver, there is almost no such thing in Linux. You just need that the kernel guys implemented support for your particular brand of fake raid in the kernel. If it is there, chances are your /raid/ will just work out of the box.
Then call it a kernel module for all I care. I hope you have made your point. And I don't see what the deal is in differentiating "driver" and "kernel module". We call these things drivers. Get used to it.
The advantage, its use case, is that in double boot machines, the Windows side may already be using it, and thus, you need the Linux side to support it too. However, there is no posibility of Windows being able to use a Linux software raid.
What advantage is that? You mean that theoretically it would be possible to get a form of software raid that works in both Linux and Windows? That's not much of an advantage, although it is much more than nothing. I bought the card for its eSATA ports though (also, or mostly); I needed more than 4 SATA ports in my machine. Although I can't say I was not unpleasantly surprised to learn that it was indeed a form of software raid. But since the thing is not dual boot, and since it runs Linux, and since the Debian installer has very good raid support, getting raid set up was really very easy. However, managing it is less so. :(. Regards, B.
On 28/09/2015 16:36, Xen wrote:
And explain why it is relevant. Even mainstream NAS devices with even dozens of disks (I believe) use software RAID.
As far as I understand, any raid is software driven (any computer hardware is software driven :-). The question is where the software runs.

* Hardware raid is a box (NAS) or a card (SCSI) with on-board software and processor(s) I don't have to care about. I had one, very handy! Usually pretty expensive, and you get value for the money.

* Fake raid is built into your own computer. Depending on the make, part of the software is in ROM (possibly in the BIOS), part in the OS, which may have to be Windows (and what version?) to use the maker's driver. Linux may or may not see this, probably not.

* Soft raid in Linux is very well implemented software: a kernel module + user space commands. Of course it uses processor power, but who really needs all of our processor power all the time? I guess the penalty is little. But I have no idea how Windows manages this, probably not that well.

That said, I have no continuous-availability necessity in my use, so I find raid a waste of data space and no longer use it... Good discussion anyway, but it doesn't seem to show any new development about raid. thanks jdd
You're welcome. On Mon, 28 Sep 2015, jdd wrote:
As far as I understand, any raid is software driven (any computer hardware is software driven :-). All is to know where the software is run.
Hmm, could be, good point :). So the only difference is an extra processing chip.
* fake raid are build in your own computer. Depending of the make, part of the software is in rom (possibly in the bios), part in the OS that may have to be windows (and what version?) to use the maker's driver. Linux may or may not see this, probably not.
This accurately describes my situation, yes.
* soft raid in linux is a very well implemented software, kernel module + user space commands. Of course it uses processor power, but who needs really all the time our processor power? I guess the penalty is little. But I have no idea of how windows manage this, probably not that well.
That is why I was questioning the penalty. Perhaps for very high performance systems with constant load the advantages matter. But for my use, I can assure you that to date at least I would have gained absolutely zero benefit from a real hardware raid solution in terms of performance. So that means a "real raid" where the mapping of disks/sectors to the logical entity was done in ROM/the raid chip.
That said, I have no continuous availability necessity in my use, so I find raid a waste or data space and no more use it...
I hate SSD. I also don't like 3.5" HDD much in my system. Call me crazy, but I use 2x 2.5" HDD in RAID 0 (stripe). Stripe has little performance penalty (for seeks) and you don't lose any capacity, because you keep the double size.

But with software raid it gets better, because you can "interleave" your partitions. You can put stripe raid where it matters and mirror raid where you want safety, and a mirror raid still has higher read or access speeds than non-raid, I believe. Probably, perhaps depending on the implementation, you get a nice increase in random read times. It is probably cheaper than SSD (you can get 2x 1TB disks, for instance), you have a load of storage, and it is kinda cute to do it. You have higher read/write throughput when you want that, and you don't really have to care much about "continuous availability"; in practice you probably would run a higher risk if you use stripe partitions.

I have a Debian system (I just don't know how to maintain it) that will boot fine from either disk if one disk is missing; it will just have a degraded array, and the stripe sets will be unavailable. You can put LVM on the mdX devices if you want (md0, md1, etc.) and then you have the exact same interface as if you were using regular non-raid LVM.

I consider the ability to 'interleave' raid partitions very nice and interesting, a great help and a powerful thing. Not sure how to combine it in a real system though, a normal system. You'd have to put / (root) on mirror if you wanted that. But say you do video editing or compiling, or whatever: just put it on stripe! :p. Anyway.
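That interleaving could be sketched like this with md (disk and partition names hypothetical; both disks partitioned identically, run as root):

```shell
# Mirror (RAID1) across the first partition pair -- for / and anything
# that must survive a disk failure.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Stripe (RAID0) across the second pair -- for scratch, video, compiles.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
# Optionally put LVM on top, so the md devices behave like ordinary PVs.
pvcreate /dev/md0 /dev/md1
vgcreate safe /dev/md0
vgcreate fast /dev/md1
```

Each md array picks its own level per partition pair, which is exactly the mix-levels-at-will point made earlier in the thread.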
good discussion, anyway, but don't seems to show any new development about raid
Maybe I just showed you something :p. It's just my own... amateur testing, so to speak. ;-). Bye.
On Mon, Sep 28, 2015 at 10:52 AM, jdd <jdd@dodin.org> wrote:
Le 28/09/2015 16:36, Xen a écrit :
And explain why it is relevant. Even mainstream NAS devices with even dozens of disks (I believe) use software RAID.
As far as I understand, any raid is software driven (any computer hardware is software driven :-). All is to know where the software is run.
* Hardware raid is a box (nas) or a card (scsi) with on board software and processor(s) I don't have to care with. I had one, very handy! Usually pretty expensive and you have value for the money.
* fake raid are build in your own computer. Depending of the make, part of the software is in rom (possibly in the bios), part in the OS that may have to be windows (and what version?) to use the maker's driver. Linux may or may not see this, probably not.
* soft raid in linux is a very well implemented software, kernel module + user space commands. Of course it uses processor power, but who needs really all the time our processor power? I guess the penalty is little. But I have no idea of how windows manage this, probably not that well.
That said, I have no continuous availability necessity in my use, so I find raid a waste or data space and no more use it...
good discussion, anyway, but don't seems to show any new development about raid
thanks jdd
I went to the local Fry's over the weekend. They had a 2TB SSD from Samsung for about $900, or 1TB SSDs for $500. Both advertised 500 MB/sec read/write speed and over 100,000 IOPS. The "value" of a hardware raid controller may soon disappear in the 2TB-or-smaller market. (The last hardware raid controller I bought was $1,500, IIRC.) FYI: the software overhead of a simple mirror is reasonably small. For $2K you can get 2 of those 2TB SSDs and set up an outrageously fast 2TB mirror. A couple of years ago you would have had to spend significantly more to get the same performance/reliability. I'm fully aware hardware RAID5/RAID6 will survive in the multi-TB market for some time to come. I'm talking exclusively about the 2TB-or-smaller market space. Greg
On 2015-09-28 at 16:36 +0200, Xen wrote:
On Mon, 28 Sep 2015, Carlos E. R. wrote:
There is a large penalty compared with (real) hardware raid.
Quantify it then.
Large, depending on the usage pattern. I don't own such hardware, it is expensive, so I can't run a benchmark for you.
And explain why it is relevant. Even mainstream NAS devices with even dozens of disks (I believe) use software RAID.
But they don't do anything but serve files. All the CPU is available for the task. It is not the same when the machine is busy doing other tasks: then writing is slower.
The advantage, its use case, is that in double boot machines, the Windows side may already be using it, and thus, you need the Linux side to support it too. However, there is no posibility of Windows being able to use a Linux software raid.
What advantage is that? You mean that theoretically it would be possible to get a form of software raid that works in both linux and windows?
Of course. I'm saying that "fake raid" does. Not theory.
On Mon, 28 Sep 2015, Carlos E. R. wrote:
Quantify it then.
Large, depending on the ussage pattern.
That is not quantifying it ;-).
But they don't do anything else but file server. All the CPU is available for the task. It is not the same when the machine is busy doing other tasks: then writing is slower.
True, in general they may not be high-performance machines. They can be used for video transcoding, though. Or VPN. Some do encryption, but they do it in hardware. Not sure.
What advantage is that? You mean that theoretically it would be possible to get a form of software raid that works in both linux and windows?
Of course. I'm saying that "fake raid" does. Not theory.
Yeah, except that my very common chipset is not even supported in Linux ;-).
On 09/28/2015 11:22 AM, Xen wrote:
Yeah, except that my very common chipset is not even supported in Linux ;-).
There's nothing to stop you using this as a SAN. I've met people who have implemented their network SAN using Windows simply because they are in your situation: the crappy "fake raid" board won't work under UNIX, so they get a "cheap" box with lots of disk slots, a cheap board, run Windows (many seem to still use XP) on it and run nothing but the SAN. They've converted a problem into a solution. Regular readers will know that I make great use of "junk". I speak of The Closet of Anxieties, the discard box of outdated or broken equipment. This CPU was one a neighbour threw out when he upgraded; this screen was thrown out when it stopped working; I replaced a couple of bad capacitors in the PSU and lo! I have a bunch of SFF laptops that ran XP but wouldn't upgrade so were thrown out, but run Linux, even 13.1, just fine. One is acting as a MariaDB server under my desk. A few years ago I got a call from a friend; a legal firm was doing a merger and hence much of the support staff was being made redundant, so their computers were being 'junked' for about 3 cents on the dollar. I walked into an office and they were lined up on the floor, looking like headstones in a graveyard. I took a couple of the servers. Good machines for the time. There are lots of opportunities like this. If you're part of the education system you'll find some school boards doing purge-and-update. None of this 'redundant' equipment is serious stuff for a corporate setting; there's no support. The best that could be said is that if you buy en masse you can treat it like toilet paper or the TV in the SCTV opening sequence: one goes bust, throw it out and pull a replacement out of the closet. But if I were in Xen's position with a "fake raid" board that ran only under Windows, THAT I ABSOLUTELY HAD TO USE, then I'd put it in one of the spare boxes from the Closet and set up a SAN.
But what the heck, I've got enough spare boxes that I could do what Carlos suggests and devote one to software raid under Linux if I wanted. As Carlos says, the SAN/RAID isn't doing anything else but RAID processing, and being a SAN it isn't on my workstation, so it isn't dragging me down. It strikes me as an interesting learning project, but I can't imagine what I'd do with all that space. If I were in a corporate setting I could budget for/buy a commercial SAN box and perhaps it would have RAID internally for reliability. Corporate economics and accounting is very different from home/hobby. It's pretty clear from what Xen writes that he's not taking that into account. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 2015-09-28 at 17:22 +0200, Xen wrote:
On Mon, 28 Sep 2015, Carlos E. R. wrote:
Quantify it then.
Large, depending on the usage pattern.
That is not quantifying it ;-).
Well, Greg mentioned a price tag of $1500 for such a thing... I'm sure you can find benchmarks around. Just consider that software raid is typically slower than the same disks as singles, visibly so; thus there is margin to gain.
What advantage is that? You mean that theoretically it would be possible to get a form of software raid that works in both linux and windows?
Of course. I'm saying that "fake raid" does. Not theory.
Yeah, except that my very common chipset is not even supported in Linux ;-).
Well, that's one of the disadvantages. Another is that if the controller or motherboard dies, it is very possible that you will not be able to recover the array on another controller. Instead, you need to restore from backup onto a newly created array. This also happens with real hardware raid, of course. Linux software raid is very flexible. If you do want to access that fake raid as such (I understand you don't) you would have to create a new thread with an appropriate title so that people who know about that particular chipset can advise you. It is called "dmraid", by the way. If you google "fake raid on linux" you find many hits. For instance, a good one for ubuntu: https://help.ubuntu.com/community/FakeRaidHowto Searching for "fake raid" or "dmraid" on the opensuse wiki search facility finds nothing, although it does exist, according to google: https://en.opensuse.org/SDB:DMRAID «This article describes how to reuse disks after removing them from a DMRAID software RAID array.» Not much use then, I'm afraid. :-( Apparently, "dmraid -l" lists what it supports. - -- Cheers Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
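The dmraid commands involved are roughly these (a sketch; whether a given chipset's metadata is recognized depends on what the first command reports):

```shell
# List the fake-raid (BIOS RAID) metadata formats this dmraid build knows.
dmraid -l

# Show any RAID sets discovered in the metadata on attached disks.
dmraid -s

# Activate all discovered sets; block devices appear under /dev/mapper/.
dmraid -ay
```

If `dmraid -l` does not list your chipset's format, the array cannot be assembled this way and the dual-boot use case is off the table.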
On Mon, 28 Sep 2015, Carlos E. R. wrote:
That is not quantifying it ;-).
Well, Greg mentioned a price tag of $1500 for such a thing... I'm sure you can find benchmarks around. Just consider that software raid typically is slower than the same disks in singles, visibly so; thus there is margin to gain.
I have no interest in hardware performance gains; apparently you do. You said there was a large difference. So, if it is so important to you, I ask you to quantify it. Otherwise it is meaningless. I am not even in a position to make use of such "gains" that I do not know about, nor am I really interested in them in the slightest at this point.
On 2015-09-28 at 20:21 +0200, Xen wrote:
On Mon, 28 Sep 2015, Carlos E. R. wrote:
That is not quantifying it ;-).
Well, Greg mentioned a price tag of $1500 for such a thing... I'm sure you can find benchmarks around. Just consider that software raid typically is slower than the same disks in singles, visibly so; thus there is margin to gain.
I have no interest in hardware performance gains, apparently you do. You said there was a large difference. So, if it is so important to you, I ask you to quantify it. Otherwise it is meaningless. I am not even in the position to make use of such "gains" that I do not know about, nor am I interested in it really in the slightest at this point.
We were comparing capabilities and features, and in that context I say that there is a substantial performance difference. I don't have figures or links at hand, but I have seen them. We are not talking about what you or I need :-) I can't pay it, anyway... - -- Cheers Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On Mon, 28 Sep 2015, Carlos E. R. wrote:
We were comparing capabilities and features, and in that context I say that there is a substantial performance difference. I don't have figures or links at hand, but I have seen them.
We are not talking about what you or I need :-) I can't pay it, anyway...
Greg just said that the CPU time required for a simple 2-disk mirror is only minimal. Such unquantified statements just do not mean anything. If you're talking about a 6-disk raid 5, perhaps it is going to matter, you know? I just don't know. But I'm not going to look up data just because you are not doing it :p. You are the one making the statements, so back them up. I can't pay it either. I did have a look at real hardware cards in the past, but they were rather outrageously expensive, from what I remember. Linux software raid is just perfect except for the software that isn't there :p. mdadm, or whatever the tool is called, is not *exactly* the most user-friendly thing in existence. The only thing I have remembered thus far is to do cat /proc/mdstat or something to see some rather unintelligible output of the activated raid things. The Debian setup tool was nice, but it ended there. I have since managed to add another array manually, but I have long since forgotten how I did it. I mean another array based on identical partitions that I created. I just wanted my boot partition to be in a mirror also :). Because that made it easier to mount the thing at startup. Doesn't work if you have multiple partitions. With mirror raid, it is just one partition (logically). I just wish.... there was a tool....... that would actually make life a lot easier. It probably exists, but it is like all those things: you don't come across it by itself. You always have to really start searching explicitly and often lengthily. It is just not easy to find something you can use, and everything you try also requires a lengthy investment of time. Trying software is not easy. Also when they are command-line tools or whatever, the usability is often "far to seek", as we say in Dutch. I have seriously not come across more than 1 (ONE) (ONE!!!) ncurses application that I like. Guess what it is. Perhaps you already know it.
What or which is the one ncurses menu-style blue-interface-with-white-lines application on Linux that is just awesome? Apart from make menuconfig, I guess. So I can just say that at present, I am at a loss as to how to manage my raid. I barely know what to do in case of failure. Yes, SEARCH THE WEB. Great, awesome. We are all awesome. Bye.
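For the record, most day-to-day management of a Linux software raid comes down to a handful of mdadm invocations; a sketch, with /dev/md0 and the member partitions as placeholder names:

```shell
# State of all software raid arrays on the system.
cat /proc/mdstat

# Detailed view of one array: member devices, state, sync progress.
mdadm --detail /dev/md0

# Handling a failed member: mark it failed, remove it, add a replacement.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdc1
```

After the --add, /proc/mdstat shows the rebuild onto the new member.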
On 2015-09-28 at 21:42 +0200, Xen wrote:
On Mon, 28 Sep 2015, Carlos E. R. wrote:
We were comparing capabilities and features, and in that context I say that there is a substantial performance difference. I don't have figures or links at hand, but I have seen them.
We are not talking about what you or I need :-) I can't pay it, anyway...
Greg just said that the CPU time required for a simple 2-disk mirror is only minimal. Such unquantified statements just do not mean anything. If you're talking about a 6-disk raid 5, perhaps it is going to matter, you know? I just don't know. But I'm not going to look up data just because you are not doing it :p. You are the one making the statements, so back them up.
No, it doesn't work that way; I'm not going to prove anything. I investigated the issue a decade or two ago, I was satisfied, and that's it. If you want hard facts, you will have to search for them yourself. I'm not going to dig out magazines in the basement and scan them ;-) But if you google "hardware raid versus software raid performance" you find hits. The first one is a paper from adaptec. Another one is a benchmark.
Guess what it is. Perhaps you already know it. What or which is the one ncurses menu-style blue-interface-with-white-lines application on Linux that is just awesome?
mc, pine, lynx, yast...
So I can just say that at present, I am at a loss of how to manage my raid. I barely know what to do in case of failure. Yes, SEARCH THE WEB.
If it is Linux software raid, there are some howtos at TLDP. - -- Cheers Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On Tue, 29 Sep 2015, Carlos E. R. wrote:
Greg just said that the CPU time required for a simple 2-disk mirror is only minimal. Such unquantified statements just do not mean anything. If you're talking about a 6-disk raid 5, perhaps it is going to matter, you know? I just don't know. But I'm not going to look up data just because you are not doing it :p. You are the one making the statements, so back them up.
No, it doesn't work that way; I'm not going to prove anything. I investigated the issue a decade or two ago, I was satisfied, and that's it. If you want hard facts, you will have to search for them yourself. I'm not going to dig out magazines in the basement and scan them ;-)
But if you google "hardware raid versus software raid performance" you find hits. The first one is a paper from adaptec. Another one is a benchmark.
It does work that way. If you had done that, you would actually have disseminated information. This way, you disseminate nothing except misinformation. Without numbers, such claims are worthless, and from Greg's statement it is clear that "large" is not even true. I am still not interested in it because I know for a fact that "large" is not true. And it would also depend on the RAID type, etc.
Guess what it is. Perhaps you already know it. What or which is the one ncurses menu-style blue-interface-with-white-lines application on Linux that is just awesome?
mc, pine, lynx, yast...
Never liked MC, though. Yast is slow. I use Alpine daily. But it's just grey on black; perhaps that is important. Lynx is important at times (or links, w3m, elinks) but not what I consider awesome. No, I was actually mentioning.... "iptraf" ;D.
So I can just say that at present, I am at a loss of how to manage my raid. I barely know what to do in case of failure. Yes, SEARCH THE WEB.
If it is Linux software raid, there are some howtos at TLDP.
Which has been deprecated and replaced by a wiki that is pointless to read as a howto ;-). But perhaps still relevant. It seems raidtools was superseded by mdadm, but it also very much seems that raidtools were much easier to use. Well, I don't have time now, but at least I learned something. Now I don't know if raidtools will still work on my debian host, but I guess it depends on the /etc/raidtab file? It is weird: there is only /etc/mdadm/mdadm.conf or something, and it was created by the installer, and still every tool complains that it is faulty or corrupt. And there is nothing wrong with it that I can see, but it still claims I have no arrays defined, so it just starts all arrays from the superblock at boot, supposedly. Thanks brotha.
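If the complaint is really about stale ARRAY lines, the usual fix on a Debian host is to regenerate them from the on-disk superblocks; a sketch (back up the old file first, and the update-initramfs step is Debian/Ubuntu-specific):

```shell
# Keep a copy of the old config, then append ARRAY lines scanned
# from the superblocks of the currently assembled arrays.
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so early boot assembles arrays from the new config.
update-initramfs -u
```

Remove any old, now-duplicated ARRAY lines from the file by hand afterwards.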
Le 29/09/2015 10:43, Xen a écrit :
It does work that way. If you had done that, you would actually have disseminated information. This way, you disseminate nothing except
Hello, Xen :-) You seem to be able to write pretty well and, seeing how much you write here, to have some time to do so. I would like to see you summarize all this stuff somewhere, like on the wiki or on any blog; I can even give you some room in the unexpected case that you don't have any. Mailing list archives are not good as documentation. thanks jdd
On 09/27/2015 06:36 AM, Xen wrote:
It can be mitigated a bit with sufficient level of "protection" against erroneous commands. Just wrap your common tasks into scripts or functions that do just that and nothing else so you can't be making any mistakes.
Ha ha ha. There's a saying in software "engineering" (I beg Steve McConnell's pardon for using that term at all) that any problem can be solved by another level of indirection. It's variously attributed to David Wheeler (the inventor of the subroutine call instruction) and to Butler Lampson. LVM is a layer of indirection. Software RAID is a layer of indirection. LVM on RAID is more manageable since it presents to the computer/OS what looks like a regular partition scheme. Each of these is doing just one thing, in contrast with BtrFS, which is trying to subsume everything. "One Ring"/Borg.
On 09/27/2015 06:36 AM, Xen wrote:
I was on Kubuntu forums the other day fighting the venerable Steve Riley about this. He works for Amazon. Cloud. I couldn't get any of this through to him. He kept insisting that BtrFS would never put me in a position where I would be disenfranchised, because I would end up in a world where BtrFS would be the all and everything.
Basically he just didn't see that as a problem.
Perhaps some of us see an "If This Goes On ..." scenario and see what ends up as political necessity, where reality runs away: "Oh no, we never intended the law to be used for THAT!". Can you say "Emergent Properties"? Of course you can!
On September 25, 2015 3:27:20 AM EDT, Felix Miata <mrmazda@earthlink.net> wrote:
Greg Freemyer composed on 2015-09-23 15:38 (UTC-0400):
David C. Rankin wrote:
Felix Miata wrote:
This isn't on a filesystem with a small blocksize, is it?
Nope, this was a fresh install on one of the space-wasting 4k new-fangled 7200 rpm platters :-)
Uhh....Space Wasting?
On a TW/KDE installation, a search for *.png in the /usr/ tree produced 15,766 hits.
A tiny sample, /usr/share/emoticons/Breeze, has 29,027 bytes in 35 files. Moving those 35 files off a filesystem with 1k blocksize frees 43,008 bytes, 42 1k blocks, a wasted space ratio of 32.5%. Moving them onto a filesystem with 4k blocksize consumes 36 4k blocks, 147,456 bytes, a wasted space ratio of 80.3%.
On the same installation, /usr/share/kf5/locale/countries has 241 directories and 655 bytes in 20 files. Each of those 241 directories contains two small files, about half of which are less than 512 bytes, and few of which are more than 1024 bytes. du -s says this group uses 743 blocks on a 1k block filesystem, so moving them off the 1k block filesystem freed 760,832 bytes; on a 4k block filesystem they consume 734 4k blocks, 3,006,464 bytes, a 3.95:1 ratio between consumption on the two filesystems.
IIRC, last I compared freespace rsyncing a root partition on a 4k block 4.8G filesystem onto a 1k block 4.8G filesystem saved somewhere in the neighborhood of 15% of freespace difference between the two.
Based on these observations, it's easy to see that the potential loss of space using 4k instead of a smaller blocksize is a lot bigger than inconsequential.
It's a seriously good thing storage densities continue to rise, and unit cost continues downward, but they don't necessarily interplay nicely with smaller sizing paradigms WRT backup strategies, or size limitations of backup media.
Linux went to a 4KB page for almost all filesystems over a decade ago.
That's where defaults went. Not everyone uses defaults. I have far more filesystems using 1k blocks than those using larger. I use larger only where A/V media and iso files go if on 512b/s disks, which I have far more of than 4k/s disks.
So, yes, space wasting!
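Felix's arithmetic generalizes: every file occupies a whole number of blocks, so each file wastes up to one block minus one byte, and the waste grows with block size. A small sketch of the calculation (the 700-byte file size is a made-up example):

```shell
# Blocks a file of $1 bytes consumes at a block size of $2 bytes
# (ceiling division), and the bytes wasted in its final partial block.
blocks_used()  { echo $(( ($1 + $2 - 1) / $2 )); }
bytes_wasted() { echo $(( $(blocks_used "$1" "$2") * $2 - $1 )); }

bytes_wasted 700 1024   # -> 324 bytes wasted at 1k blocks
bytes_wasted 700 4096   # -> 3396 bytes wasted at 4k blocks
```

Summed over tens of thousands of small files, as in the /usr examples above, the difference between the two block sizes is exactly the kind of ratio Felix measured.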
Just one comment "However, by modifying the length of the data field through the implementation of Advanced Format using 4096-byte sectors, hard disk drive manufacturers could increase the efficiency of the data surface area by five to thirteen percent while increasing the strength of the ECC." So, sticking with 512 byte sectors effectively wastes 5 to 13% of the potential storage on the platter due to inefficient headers/footers. You should buy what works best for you, but I suspect over 95% of users go with the default 4kb filesystem page. Greg -- Sent from my Android device with K-9 Mail. Please excuse my brevity.
greg.freemyer@gmail.com composed on 2015-09-26 12:29 (UTC-0400):
Felix Miata wrote:
So, yes, space wasting!
Just one comment
"However, by modifying the length of the data field through the implementation of Advanced Format using 4096-byte sectors, hard disk drive manufacturers could increase the efficiency of the data surface area by five to thirteen percent while increasing the strength of the ECC."
https://en.wikipedia.org/wiki/Disk_sector says that.
So, sticking with 512 byte sectors effectively wastes 5 to 13% of the potential storage on the platter due to inefficient headers/footers.
I knew about that from my reading of http://www.seagate.com/tech-insights/advanced-format-4k-sector-hard-drives-m... long ago.
You should buy what works best for you, but I suspect over 95% of users go with the default 4kb filesystem page.
Wasting potential storage area on a platter is a non-issue from a user perspective. Probably few users who are not avid collectors of images, music and videos will ever use as much as half of a .5TB storage device, the smallest capacity 3.5" form factor rotating rust any of the manufacturers have made for some time. RR capacities marketed are arbitrary. Wasting 5%-13% might matter if it meant getting 6.2T or 6.3T instead of 6.1T for the same money. For someone replacing a dead 250G it won't matter one iota. OTOH, the higher the data density, the smaller the debris size that can cause loss of use of a sector, while the larger the sector size, the more data that can be lost with loss of a sector. Continued density increases benefit the manufacturers far more than users. The stronger ECC is needed to compensate for the increased fragility and risk. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
On 2015-09-26 at 12:29 -0400, greg.freemyer@gmail.com wrote:
Just one comment
"However, by modifying the length of the data field through the implementation of Advanced Format using 4096-byte sectors, hard disk drive manufacturers could increase the efficiency of the data surface area by five to thirteen percent while increasing the strength of the ECC."
So, sticking with 512 byte sectors effectively wastes 5 to 13% of the potential storage on the platter due to inefficient headers/footers.
You should buy what works best for you, but I suspect over 95% of users go with the default 4kb filesystem page.
No, that doesn't matter at all to a user. If I buy a 1 TB disk, it will have 1 TB of useful space no matter whether it is organized internally in 512B blocks or 4K blocks. It only matters to the manufacturer, in that he needs less surface to provide the same storage space. Hopefully that is translated to users as lower prices. Maybe not. Of course, the space needed by the filesystem (whichever it is) for organizing the available space in indexes will be larger the smaller the allocation block is. - -- Cheers Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On September 26, 2015 2:34:49 PM EDT, "Carlos E. R." <carlos.e.r@opensuse.org> wrote:
El 2015-09-26 a las 12:29 -0400, greg.freemyer@gmail.com escribió:
Just one comment
"However, by modifying the length of the data field through the implementation of Advanced Format using 4096-byte sectors, hard disk drive manufacturers could increase the efficiency of the data surface
area by five to thirteen percent while increasing the strength of the
ECC."
So, sticking with 512 byte sectors effectively wastes 5 to 13% of the
potential storage on the platter due to inefficient headers/footers.
You should buy what works best for you, but I suspect over 95% of users go with the default 4kb filesystem page.
No, that doesn't matter at all to a user. If I buy a 1 TB disk, it will
have 1 TB useful space no matter if organized internally in 512B blocks
or 4K blocks.
It only matters to the manufacturer, that he needs less surface for providing the same storage space. Hopefully that is translated to the users in lower prices. Maybe not.
As a consumer, it's that I get more capacity for my money. Recently I've been buying 1TB usb drives for $60 or so (at Costco). Yesterday I bought a 1.5 TB usb-3 drive for $59 (at Costco). That's the cheapest I've ever seen them. It's also a size I don't recall seeing often. So, things like 4KB sectors get us as consumers more for our money. I haven't checked the specs, but I'm open to a bet that these 1.5TB drives are 4KB sector drives. Greg
On 2015-09-26 at 15:25 -0400, greg.freemyer@gmail.com wrote:
On September 26, 2015 2:34:49 PM EDT, "Carlos E. R." <> wrote:
So, things like 4KB sectors get us as consumers more for our money.
I doubt it. The monsoon season has more influence :-p - -- Cheers Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
Felix Miata wrote:
It's a seriously good thing storage densities continue to rise, and unit cost continues downward, but they don't necessarily interplay nicely with smaller sizing paradigms WRT backup strategies, or size limitations of backup media.
??? You lost me on this one. With a backup, you usually concatenate all of the files -- so it doesn't matter whether the block size of the source is 512b or 4k; the only thing backed up is the actual size of the data (or less if you use compression). So really, it wouldn't matter if you went to a 1MB block size WRT backups, as backups only store the original data -- not the 'slack'.
Linux went to a 4KB page for almost all filesystems over a decade ago.
That's where defaults went. Not everyone uses defaults. I have far more filesystems using 1k blocks than those using larger. I use larger only where A/V media and iso files go if on 512b/s disks, which I have far more of than 4k/s disks.
Today, yes, but disks with a minimum physical block size of 4k have been around for more than 5 years, and most vendors are shifting to 4k block sizes exclusively. How many of those 512b-native sector size disks were purchased in the last 5 years? Even SSDs aren't immune from this, as they also find more efficiencies in shifting to 4k minimum r/w sizes. Now you can argue that it slows down I/O for those small files, since to read or write 1 byte, the disk has to read or write 4k. But tons of small files are a reason why MS switched to a format where program resources are stored in 1 file in a library format. So instead of 1 icon per file, you load oxygen.so (or dll on win), and load the entire library at one time. Now it doesn't matter whether you use 512b or 4kb: since the entire set is in 1 file, it's still 1 read for an entire set of icons. Same thing with 'config' files. Unix is still using .rc/.config files, but MS switched to putting all config files for a system or a user in 1 place -- a registry hive. They didn't do a perfect job, but if you have the registry mounted as a file system as on 'cygwin', or as is done in /proc or /sys, you still have file-like access to small config+setting files that can be an interface to a system library or 1-or-more files. MS went to that format about 2 decades ago, and while it could be improved upon, it's still more efficient in terms of speed and storage than 100's or 1000's of tiny files scattered over a disk.
So, yes, space wasting!
Not if the storage was optimized to begin with by grouping -- with access as easy as access to a mountable file system (pointing to cygwin's /proc/registry tree). Just saying -- that much of that space wasting was due to space-wasting design.
On Sun, 27 Sep 2015, Linda Walsh wrote:
??? You lost me on this one. With a backup, you usually concatenate all of the files -- so it doesn't matter whether the block size of the source is 512b or 4k; the only thing backed up is the actual size of the data (or less if you use compression). So really, it wouldn't matter if you went to a 1MB block size WRT backups, as backups only store the original data -- not the 'slack'.
Exactly, but in the Unix world even backups are often just filesystem copies. Many programs use the filesystem as a database engine, and so they store e.g. every email as a small file (think MailDir) instead of storing it in some mbox or whatever format. There are many backup scenarios that use e.g. rsync, even using hard links to create incremental archives, but the archives are just complete filesystems being stored as duplicates on some other disk or volume. I always tend to migrate to real archives, but it is hard when so many people do the reverse.
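The rsync-with-hard-links scheme mentioned above usually looks something like this (a sketch; all paths are placeholders). Each snapshot is a browsable full tree, but files unchanged since the previous snapshot are hard links into it and cost no extra space:

```shell
# Sketch: hard-linked incremental snapshots with rsync.
# /backup/2015-09-27 is assumed to hold the previous complete snapshot.
rsync -a --delete \
      --link-dest=/backup/2015-09-27 \
      /home/ /backup/2015-09-28/
```

Deleting an old snapshot directory only frees the blocks no newer snapshot still links to, which is what makes the scheme incremental.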
Not if the storage was optimized to begin with by grouping -- with access as easy as access to a mountable file system (pointing to /cygwin's /proc/registry tree).
Word. I have felt like this for a long time, because even the big game makers all do this. E.g. Blizzard has had its own file system inside its MPQ files since ages past. This was around 2000, when games like Starcraft and Diablo had this structure, and they still do. Many, many files is a liability, and it actually, believe it or not, cost me 3 months of email that I lost due to a backup solution (IMAPSize) using a MailDir format. A backup using small files is like spreading your belongings across a soccer field and then thinking they are safe. They proved not to be :(. I hate it, but there was no other backup solution at the time (and still is not). There aren't many IMAP backup tools. I guess I just have myself to blame for being stupid. The registry could be improved by making it more of a cascading system such that applications can be removed from the addition-set at will, but still, I agree that it functions quite well as opposed to many small files. However, for configuration it is not really the same as for resources, because the number of configuration files is rather limited on any Unix/Linux system. The registry is also almost impossible to maintain by hand (particularly, for instance, if you have suffered some malware infestation that you are trying to clear out). Let's hunt and kill something that an automated program created, but now you have to get rid of it by hand :(. Okay, stupid again.
Just saying -- that much of that space wasting was due to space-wasting design.
This should be in the Book of all Truths.
On 2015-09-27 at 00:53 -0700, Linda Walsh wrote:
a registry hive. They didn't do a perfect job, but if you have the registry mounted as a file system, as on cygwin or as is done in /proc or /sys, you still have file-like access to small config+setting files that can be an interface to a system library or 1-or-more files. MS went to that format about 2 decades ago, and while it could be improved upon, it's still more efficient in terms of speed and storage than hundreds or thousands of tiny files scattered over a disk.
Not if you use a filesystem like reiserfs, which was designed for precisely that usage: a database in the filesystem structure :-) I remember what they said at the time: that you could have a million files in a single directory and access any of them instantly. Well, "instant" as defined by spinning rust; it would be a flash on flash media ;-)

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 27/09/2015 14:17, Carlos E. R. wrote:
Not if you use a filesystem like reiserfs,
It's a bit of a waste of time recalling reiserfs so often. I liked it a lot, but it's a dead end, as nobody cares about it now. We have to find other solutions.

jdd
On 09/27/2015 09:13 AM, jdd wrote:
On 27/09/2015 14:17, Carlos E. R. wrote:
Not if you use a filesystem like reiserfs,
It's a bit of a waste of time recalling reiserfs so often. I liked it a lot, but it's a dead end, as nobody cares about it now.
we have to find other solutions
I agree in principle; in practice it's turned out to be a remarkably well designed and implemented FS, and the lack of 'support' doesn't seem to hurt its use or disturb many of its proponents. But BtrFS is not a solution to ReiserFS's EOL.

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 09/27/2015 07:40 AM, Anton Aylward wrote:
On 09/27/2015 09:13 AM, jdd wrote:
On 27/09/2015 14:17, Carlos E. R. wrote:
Not if you use a filesystem like reiserfs,
It's a bit of a waste of time recalling reiserfs so often. I liked it a lot, but it's a dead end, as nobody cares about it now.
we have to find other solutions

I agree in principle; in practice it's turned out to be a remarkably well designed and implemented FS, and the lack of 'support' doesn't seem to hurt its use or disturb many of its proponents.
But BtrFS is not a solution to ReiserFS's EOL.
I too have had good experiences with Reiserfs over the years. But I'm curious, why is Reiserfs a "dead end"? I get it, nobody cares, but why? Is it because Hans is a knuckle-head? Regards, Lew
On Sun, 27 Sep 2015, Lew Wolfgang wrote:
I too have had good experiences with Reiserfs over the years.
But I'm curious, why is Reiserfs a "dead end"? I get it, nobody cares, but why? Is it because Hans is a knuckle-head?
When you get convicted and end up in jail, your life is gone. It is just over. But you are not dead yet. I think it is this limbo that causes the project/product to also end up in a stalemate. You could call it hibernation. That is just my explanation. People usually don't consider what happens to you or your life if you end up in jail. Or something similar.
On 27/09/2015 19:09, Xen wrote:
But I'm curious, why is Reiserfs a "dead end"? I get it, nobody cares, but why? Is it because Hans is a knuckle-head?
After all, maybe I'm wrong: http://marc.info/?l=reiserfs-devel&r=1&b=201509&w=2 Maybe Reiser was not very friendly? https://lists.debian.org/debian-devel/2003/04/msg01638.html But which reiserfs is present on openSUSE, v3 or v4? https://en.wikipedia.org/wiki/Reiser4 https://en.wikipedia.org/wiki/ReiserFS Is it worth the work? http://www.phoronix.com/scan.php?page=article&item=linux-40-hdd&num=1 jdd
On 2015-09-27 at 19:27 +0200, jdd wrote:
But which reiserfs is present on openSUSE, v3 or v4?
Version 3 is in the kernel: stable, with little or no maintenance. Version 4 maybe as an extra, untested or something.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 09/27/2015 10:09 AM, Xen wrote:
On Sun, 27 Sep 2015, Lew Wolfgang wrote:
I too have had good experiences with Reiserfs over the years.
But I'm curious, why is Reiserfs a "dead end"? I get it, nobody cares, but why? Is it because Hans is a knuckle-head?
When you get convicted and end up in jail, your life is gone. It is just over. But you are not dead yet.
I think it is this limbo that causes the project/product to end up in a stalemate. You could call it hibernation.
That is just my explanation. People usually don't consider what happens to you or your life if you end up in jail.
Or something similar.
Right about Hans, but if Reiserfs has so many good design features, why wasn't it forked? After all, that happened with ssh. Or is there some licensing issue? Or did Hans' personality contaminate the code?

BTW, I once got a snarky email from Hans in response to a question about a Reiserfs-intrinsic dump program. I was fond of a variant of the grandfather-father-son dump strategy at the time, and he told me to get my head out of my ... and use tar instead. Regards, Lew
On 2015-09-27 22:26, Lew Wolfgang wrote:
Right about Hans, but if Reiserfs has so many good design features, why wasn't it forked? After all, it happened to ssh. Or, is there some licensing issue? Or did Hans' personality contaminate the code?
No, no. R3 is fully in the kernel and, as far as the original developers are concerned (i.e., Mr. Reiser and his group), completely out of their hands. They focused instead on R4. However, when Mr. Reiser was imprisoned, his group tried to continue, and did, but without his leadership they seem to progress very slowly. Maybe he was a brilliant designer after all. So reiserfs version 4 does exist and is being developed; if you search for "reiser" at openSUSE you'll see it. However, I have not investigated how to install it, or how "safe" it is.
BTW, I once got a snarky email from Hans in response to a question about a Reiserfs-intrinsic dump program. I was fond of a variant of the Grandfather-father-son dump strategy at that time and he told me to get my head out of my ... and use tar instead.
Doesn't surprise me much. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On Sun, 27 Sep 2015, Lew Wolfgang wrote:
Right about Hans, but if Reiserfs has so many good design features, why wasn't it forked? After all, that happened with ssh. Or is there some licensing issue? Or did Hans' personality contaminate the code?
Well, personally I feel that ReiserFS is just not right for me. Just my impression. I could probably back it up if I knew more about it.
On 2015-09-27 at 09:54 -0700, Lew Wolfgang wrote:
But I'm curious, why is Reiserfs a "dead end"? I get it, nobody cares, but why? Is it because Hans is a knuckle-head?
R3 does not scale well. For instance, it uses a single thread for all the filesystems it mounts. R4, well, it doesn't come with the system by default; you have to add it later, I think. It is evolving very slowly.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 2015-09-27 at 15:13 +0200, jdd wrote:
On 27/09/2015 14:17, Carlos E. R. wrote:
Not if you use a filesystem like reiserfs,
It's a bit of a waste of time recalling reiserfs so often. I liked it a lot, but it's a dead end, as nobody cares about it now.
we have to find other solutions
Yes, I agree, we need other solutions. That's what worries me: that we still do not have them, and that the lessons learned with reiserfs have not been reused.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 09/27/2015 03:53 AM, Linda Walsh wrote:
Felix Miata wrote:
It's a seriously good thing storage densities continue to rise, and unit cost continues downward, but they don't necessarily interplay nicely with smaller sizing paradigms WRT backup strategies, or size limitations of backup media.
--- ??? You lost me on this one. With a backup, you usually concatenate all of the files -- so it doesn't matter whether the block size of the source is 512 B or 4 KiB; the only thing backed up is the actual size of the data (or less, if you use compression). So really, it wouldn't matter if you went to a 1 MB block size WRT backups, as the backups only store the original data -- not the 'slack'.
Linda, you've made a generalization that isn't valid. Not everyone uses the same backup strategy. Yes, if your backup is to convert all the files into a tarball and write that out to long-term media, you are correct, and there are other analogues to tarballing as well. But some people simply do disk-to-disk and archive the disk, and some tape methods preserve the gaps in the file.

It's one thing to dump your database to a text file -- a series of SQL statements -- and back that up, but some people quite literally back up the database files. For example, when I back up /var/lib to preserve a lot of dynamic configuration and settings (such as DNS, DHCP, the YaST/zypper databases), I also back up the MySQL files, which are also "sparse". Some are literally sparse, with unassigned blocks; some have fixed-size fields that are not full. You can argue that there are modes of backing up that convert this to actual space, which is why you should dump files and back up the dump. But there are backup tools like rsync which preserve the sparseness.

It depends on the user and the backup strategy. We've long since established that not everyone runs their system the way you do, Linda. Please don't assume your way is the only way.
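For what it's worth, the sparseness point is easy to see from the shell. A minimal sketch (file and directory names are made up for the example; `rsync --sparse` and GNU `tar --sparse` are the relevant flags):

```shell
# Create a 100 MB file that allocates (almost) no disk blocks.
truncate -s 100M sparse.img
du -h --apparent-size sparse.img   # ~100M of "data"
du -h sparse.img                   # ~0 actually allocated on disk

# Both of these record the holes so they can be re-created
# at the destination instead of written out as zeros:
rsync --sparse sparse.img /mnt/backup/
tar --sparse -czf backup.tar.gz sparse.img
```

A plain byte-for-byte copy of the same file would allocate the full 100 MB at the destination, which is exactly the "backup bigger than the data" effect being discussed.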
On 27/09/2015 15:49, Anton Aylward wrote:
It depends on the user and the backup strategy.
You have the most valid word here. But:

* not everyone has a backup strategy (most make no backup at all)
* building a backup strategy is extremely difficult (really! as this thread shows :-))

So what I say is: use redundancy. Use several hardware media (hard disks, USB sticks, Blu-ray discs... whatever fits your needs), but also several places: not all in the same room, not in the same town, not on the same planet (hum... :-). And be prepared: you will lose data sooner or later, and survive :-)

thanks jdd
On 09/27/2015 09:58 AM, jdd wrote:
On 27/09/2015 15:49, Anton Aylward wrote:
It depends on the user and the backup strategy.
you have the most valid word here. But
* not everyone has a backup strategy (most make no backup at all)
Well, there is that ...
* building a backup strategy is extremely difficult (really!)
(as this thread shows :-))
Most people don't need a 'strategy', they just need a workable tactic. Sadly, most backup packages are strategic rather than tactical. I use a simple tactical method: I have partitions that are never more than 5 GB in size, which means I can burn each one to a DVD. DVDs are cheap -- about $0.50 each; you don't need high-grade ones. The real trick is data organization.
On Sun, 27 Sep 2015, jdd wrote:
you have the most valid word here. But
* not everyone has a backup strategy (most make no backup at all)
* building a backup strategy is extremely difficult (really!)
In Windows you have to buy tools, really. In Linux you often have to develop them yourself. The latter takes a lot of time; the former also requires some investment. It is not easy to set up, I would concur.
On 09/27/2015 12:11 PM, Xen wrote:
On Sun, 27 Sep 2015, jdd wrote:
you have the most valid word here. But
* not everyone has a backup strategy (most make no backup at all)
* building a backup strategy is extremely difficult (really!)
In Windows you have to buy tools really. In Linux you often have to develop them yourself. The latter takes a lot of time,
For a novice, a neophyte, maybe, but after a few years, throwing together a shell script or a couple of lines of perl is something you do almost unconsciously. You don't notice the time; you probably spend more time walking down the hallway to get a coffee. Polishing it for publication -- documentation, a man page, making it robust enough for others who don't share your assumptions -- yes, that's difficult.
On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 12:11 PM, Xen wrote:
In Windows you have to buy tools really. In Linux you often have to develop them yourself. The latter takes a lot of time,
For a novice, a neophyte, maybe, but after a few years, throwing together a shell script or a couple of lines of perl is something you do almost unconsciously. You don't notice the time. You probably spend more time walking down the hallway to get a coffee.
That's mostly because you've already spent years becoming familiar with the available tools, so you've already chosen a subset you like to work with. Becoming familiar with tools and finding your way is the part that requires the most effort, energy, or time. And it is work everyone basically has to do for himself. There are no shortcuts, unless you are working with familiar people (friends, like-minded folk) you can do the work together with.

I am still exploring how to do automated backups, how to set up data replication among hosts, and how to make it as painless for me as possible (basically a single call, and the system does the rest without me having to pay attention). Linux usually requires a lot of time to set up and get right for yourself. It is a long investment. Spending years configuring your Linux system is no exception, and I think if you were to lose the progress you've made at some point (in terms of files you've written) you would suffer a really terrible setback.

It is a bit of a dark area, a spooky room. You are in an alien environment and not much is friendly. Now to find your way. It reminds me of a M.U.D. I used to play, written by some fellow students at uni. That was actually a nice environment; we had friends, and one of us wrote a graphical tool, an auto-mapper, to navigate the world. But then I tried another MUD called ROP, Rites of Passage. I had no one there, everything was dangerous, and in the end I walked into some room and got killed instantly. It was just spooky. Command-line Linux reminds me of that. Spooky and alien. And unfriendly. Dark. Lonely.
Polishing it for publication with documentation, man page, making it so robust other idiots who don't make the same assumptions as you do, yes that's difficult.
Actually, that's a joy to do :). xx.
On 09/27/2015 01:07 PM, Xen wrote:
On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 12:11 PM, Xen wrote:
In Windows you have to buy tools really. In Linux you often have to develop them yourself. The latter takes a lot of time,
For a novice, a neophyte, maybe, but after a few years, throwing together a shell script or a couple of lines of perl is something you do almost unconsciously. You don't notice the time. You probably spend more time walking down the hallway to get a coffee.
That's mostly because you've already spent years becoming familiar with the available tools so you've already chosen a subset of what you like to work with.
Yes. Experience counts. It's clear from your counterpoints that you don't have that experience. Why not be tactful and learn?
This becoming familiar with tools and finding your way is the part that requires the most effort, energy, or time.
Yes, that's why people go to school, study, take apprenticeships and training schemes. Look at it this way: you're going to grow old anyway, so why not spend the time learning? Oh, right, you want instant gratification.
There are no shortcuts,
It used to be called "The Royal Road"
unless
No "unless". Study, practice. They say it takes 10,000 hours to master a skill -- better shut up and start. Pike and others wrote a number of "white books" on the way UNIX works. Buy them and work through every example. Don't download the packages; type them in by hand.
Linux is usually something that requires a lot of time to set up and get right for yourself.
And how is that different from learning anything else worthwhile? You want to play the piano, the violin? You want to play the guitar like Keith Richards? PRACTICE.
On 2015-09-27 at 13:15 -0400, Anton Aylward wrote:
On 09/27/2015 01:07 PM, Xen wrote:
shell script or a couple of lines of perl is something you do almost unconsciously. You don't notice the time. You probably spend more time walking down the hallway to get a coffee.
That's mostly because you've already spent years becoming familiar with the available tools so you've already chosen a subset of what you like to work with.
Yes. Experience counts. It's clear from your counterpoints that you don't have that experience. Why not be tactful and learn?
Yep. However, I started writing Linux scripts within weeks of starting to use Linux. It was not that difficult -- similar to, but far more powerful than, writing MS-DOS batch files. Windows also has a modern scripting capability. A script doesn't have to be complex, with functions and control structures. It can be as simple as a sequence of commands to be executed one after the other, in a list, because you use that sequence often and don't want to make errors.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 09/28/2015 08:26 AM, Carlos E. R. wrote:
Windows also has a modern scripting capability.
It does now, after many years. I get the impression from what I hear at various meetings and conferences that it is intended for sysadmins and the like, not for the end user.
A script doesn't have to be complex, have functions and control structures. It can be as simple as a sequence of commands to be executed one after the other, in a list, because you use that sequence often and don't want to make errors.
It's when you get to pipes and filters that it gets powerful. Even something as simple as piping the output of 'find' through xargs and grep.

-- APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums. -- Edsger W. Dijkstra, SIGPLAN Notices, Volume 17, Number 5
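For instance (the directory and search pattern are just for illustration):

```shell
# List every shell script under ~/bin that mentions rsync.
# -print0 / -0 keep filenames containing spaces intact across the pipe.
find ~/bin -name '*.sh' -print0 | xargs -0 grep -l 'rsync'
```

Each stage does one job: find selects files, xargs batches them onto grep's command line, and grep -l reports only the matching filenames.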
On Mon, 28 Sep 2015, Anton Aylward wrote:
On 09/28/2015 08:26 AM, Carlos E. R. wrote:
Windows also has a modern scripting capability.
It does now, after many years. I get the impression from what I hear at various meetings and conferences that it is intended for sysadmins and the like, not for the end user.
Personally I detest it. Even opening a PowerShell window is rife with problems (you can hardly read what it says). I seriously don't want to script on Windows; I couldn't. I'm a guy who has used Windows since 3.11 almost every day of my life, I have programmed in ASM, Pascal, Basic, Java, PHP, a bit of Python, and Bash, and I *cannot* program in PowerShell. I don't want to, either. Some hold that it is very powerful and bla bla. Just call me sick (I am), but programming in Bash at least makes me feel like there is hope in life after all :p.
It's when you get to pipes and filters that it gets powerful. Even something as simple as piping the output of 'find' through xargs and grep.
That is really the only power of Linux; there is hardly anything more powerful than that. I think pipes (named pipes as well) should in some way remain, or become, the way of interprocess communication as well. No matter how it is implemented, the pipe should be the future.
Xen wrote:
On Mon, 28 Sep 2015, Anton Aylward wrote:
It does now, after many years. I get the impression from what I hear at various meetings and conferences that it is intended for sysadmins and the like, not for the end user.
Personally I detest it. Even opening a PowerShell window is rife with problems (you can hardly read what it says). I seriously don't want to script on Windows. I could never. I'm a guy with experience in Windows since 3.11 almost every day of my life, I have done programming in ASM, Pascal, Basic, Java, PHP, a bit of python, and Bash, and I *cannot* program in PowerShell. I don't want to either. Some hold that it is very powerful and bla bla.
Just call me sick (I am) but programming in Bash at least makes me feel like there is hope after all in life :p.
Ditto on that -- I use bash+cygwin to do most of my sysadmin stuff on Windows when I can -- including full dumps of the registry -- FROM which I've been able to restore a previous installation's configuration (including installed programs).

Ex: Win7 fell over and died when it couldn't find an "xxx.fon" file that was on its critical-file list. If the OS was 'up', it would have been trivial to fix, but it wouldn't boot w/o that file, and since it was an LSI-BIOS-RAID0 using 4 disks, there wasn't any way I could easily take the "system disk" and install it in a Linux system to restore that 1 file. At the time, due to some other bug, I had no recent sys-image backup, so I couldn't restore it from an image. In the Win7 DVD repair console I was able to rename the old windir to windows2, then install a pristine, unmodified version of Win7 into windows. Then I could copy any updates from the old Win7 dir to the new, and restore registry settings from the registry dumps -- so all my previously installed programs (still on disk outside of the windir) could get all of their registry settings restored. All programs were restored to working order except a few Adobe progs which didn't like some of the changed GUIDs -- and the Adobe progs were fixed w/a call to Adobe support (who issued new licenses when I explained I couldn't deactivate the old licenses because they were on the Windows installation that had 'died' -- so no chance to deactivate them beforehand). Since it was a Dell machine that was BIOS-licensed, Windows came up and thought it wasn't licensed, until I went to the system info page -- when it refreshed the license through the BIOS and re-activated.

Another time, to upgrade the internal SSDs, I was able to add an external disk and make a disk image using 'dd' from /dev/sdc -> /dev/sdX. Made sure I could boot Win from /dev/sdX -- then installed the new SSD array (faster + 33% more space) and copied the image back from sdX -> sdc...
At the time the old cygwin tools were 32-bit and couldn't be run from a Windows rescue image -- which I had to work around. But the new cygwin-64 tools run native and work just fine from a Win7-64 repair console, so maintenance is that much easier now.
Its when you get to use pipes and filters that it gets powerful. Even someting as simple as piping the output of 'find' though xargs and grep
That is really the only power of Linux.
There is hardly anything more powerful than that.
It's not Linux, but the Unix tools, available on Linux OR on cygwin64 -- which you can run under Win7's repair console.
I think pipes (named pipes as well) should in some way remain, or become, the way of interprocess communication as well. No matter how it is implemented, the pipe should be the future.
Not always ideal if you are using multiple processes -- even in Linux, it's not possible to do 2-way communication over a single pipe (a pair of pipes, yes, but you can't easily set up 2-way communication between processes in standard shell or bash).

Example (a bit long): I have a script that checks for "duplicate product" RPMs in a dir (same product, differing versions). It is designed to compare the versions between duplicate-named rpms and keep only the latest version. To do that reliably, I need to split off the 'ver+rel' from the rpm to compare -- not something you can reliably do from the name of the rpm file alone. Thus I had to call 'rpm -qp' w/'--queryformat' on each rpm to get its actual N+V+R... Some of the dirs, like x86_64+noarch added together, have over 23,000 entries *without* duplicates; pulling updates into the same pool, one can end up with >30K entries.

To speed things up I split the alpha-sorted list among some 'N' number of processors (N determined by experimentation, as it is also disk-bound, but the disks are RAID10 so they can handle some parallelism) -- but I needed the 'momma' process to spawn 'N' children. Children were combined by N**.5 collectors (having 9 children vying w/each other to talk to momma created too much contention), so results were collected and merged by intermediate procs -- still, merging 3 children required 2-way communication -- not easily done w/pipes... so children ended up using shared memory to create files in memory, with the collectors reading from shared memory and using sockets to talk to momma. It would have been really painful to try to do that with pipes alone, since they only buffer ~8k/pipe -- there would have been a lot of overhead in process switching. Shell wasn't really up to manipulating shared memory, reliable signal delivery, or sockets, so I ended up w/perl.

I had the util use a "Recycling Bin" concept to "delete" old versions -- using 1 bin/device -- so a delete really becomes a 'rename' (no file copy involved).
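The per-package query described above looks roughly like `rpm -qp --queryformat '%{NAME} %{VERSION}-%{RELEASE}\n' pkg.rpm`. The serial "keep only the newest" step can be sketched in plain shell with `sort -V` -- note this is only an approximation of rpm's real version comparison (it ignores epochs and tilde ordering), and the package data below is invented to match the thread's examples:

```shell
# Input: "name version-release" pairs, one per line, as an
# rpm --queryformat pass over a package directory would produce.
# sort -V orders the version strings; awk keeps the last
# (i.e. newest) version seen for each name.
printf '%s\n' \
  'util-linux 2.23.2-16.1' \
  'util-linux 2.23.2-31.1' \
  'kdemultimedia3-jukebox 3.5.10.1-49.7' |
sort -k1,1 -k2,2V |
awk '{latest[$1] = $2} END {for (n in latest) print n, latest[n]}' |
sort
```

For exact rpm semantics you would call out to rpm's own comparison (e.g. via perl's RPM modules or `rpmdev-vercmp`, where available) instead of `sort -V`.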
In another program involving parallelism (a wav->flac converter that worked on 1 album directory at a time), I didn't know how long each step would take (vs. above, where I assumed roughly equal time/step and could assign #steps/child), so in the wav->flac case I just spawned off as many worker threads as I had CPUs, then used semaphores to manage the number of workers. As each conversion finished, it would release a semaphore and another conversion would start -- so I was able to keep all processors constantly busy. It usually took (takes) < 1 minute to convert an album to flac using the highest compression settings. In that case a momma process just doles out 1 file at a time to a worker, and the workers use direct file I/O -- this also ended up in perl, as it could convert to flac or mp3 based on args. An earlier version used shell, but blindly spawned off conversion processes for all of the tracks in a folder at the same time, then waited for completion. That usually worked pretty well, though often with more thrashing when there were more tracks than CPUs.

Shell is good for many things... I wrote a first snapshot generator in shell... but it wasn't fun to maintain or extend, so it went to perl... still not that fun to extend, but at least it was more reliable. Shell doesn't do as good a job handling signals -- especially in bash 4.3, where user signal handlers stopped being asynchronous and are now only handled upon pressing a key in an input line (piss-poor design for automation, to require user key presses in order to handle async events like signals).
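For the simple "keep N workers busy" case, plain shell can get the semaphore effect for free from `xargs -P`, which starts a new job whenever one finishes. A sketch (the album path and the flac flags are illustrative, and assume the `flac` encoder is installed):

```shell
# Encode every .wav under album/ with at most $(nproc) concurrent
# flac processes; xargs refills the worker pool as each job exits,
# which is exactly the semaphore-per-slot behaviour described above.
find album/ -name '*.wav' -print0 |
  xargs -0 -n 1 -P "$(nproc)" flac --best --silent
```

This doesn't replace the full perl design (no momma process, no per-file dispatch logic), but for embarrassingly parallel batch conversion it covers the common case in two lines.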
Not sure why all good engineers seem to be women. I was once talking to a woman who works for Facebook and was involved with PHP -- what's her name? She made sense. I don't know. On Mon, 28 Sep 2015, Linda Walsh wrote:
Since it was a Dell machine that was BIOS licensed, Windows came up and thought it wasn't licensed, until I went to the system info page -- when it refreshed the license through the BIOS and re-activated.
BIOS licenses, right! That's how my rootkit works :P. You seem to know a lot. Like, a lot a lot, but I guess I'm just a novice here. Maybe I should go back to school, start over with life, and go to primary school again ;-). First attempt failed. Try again.
But the new cygwin-64 tools run native and work just fine from a Win7-64 repair console, so maintenance is that much easier now.
Do you mean you run them from your hard drive after setting up the path?
It's not Linux, but the Unix tools, available on Linux OR on cygwin64 -- which you can run under Win7's repair console.
Buh. Cygwin or not, I never really liked it. I used the GnuWin32 tools, but I can't say I have ever really used cygwin, even when I had it installed. Stuff didn't work; I don't know why. I think I had trouble with e.g. slashes (backslashes) in filenames. A thing I tried to do in cygwin just wouldn't work, whereas in Linux it was, or would have been, easy and flawless. I gave up.
I think pipes (named pipes as well) should in some way remain, or become, the way of interprocess communication as well. No matter how it is implemented, the pipe should be the future.
Not always ideal if you are using multiple processes -- even in linux, it's not possible to do 2-way communication over a single pipe (a pair of pipes, yes, but you can't easily set up 2-way communication between processes in standard shell or bash).
I'm not sure. In principle the concept of a pipe extends beyond one-way and can be multiplexed and all that. I've never done any multi-process or IPC communication myself. Let's say my experience is limited to doing these things in Java with threads. But that's not, that's hardly something you can just create in a short time. Java is not exactly suited for scripting. I also have no perl experience which makes it a bit hard. Cool how you design stuff. I like your designs, but then I already knew that kinda in advance that I would, because I / we agree on so many things. I will need to find out at some point how these things are being done. For now the only machine I use for flac -> mp3 has one core :P :P :P :P (is VPS).
It would have been really painful to try to do that with pipes alone, since they only buffer ~8k/pipe -- there would have been a lot of overhead in process switching.
Muh, you know 30x as much as I do. I feel... rather jealous really :) hahas. Damn, I thought I would have been more experienced by now. Things didn't really go as intended. That's why I will start over :p :). I can't live with this jealousy all the time.
Shell is good for many things... wrote a first snap-shot generator in shell... but it wasn't fun to maintain or extend, so it went to perl... still not that fun to extend, but at least it was more reliable. Shell doesn't do as good a job handling signals -- especially in bash4.3 where user-signal handlers stopped being asynchronous and are now only handled upon pressing a key in an input line (piss-poor design for use in automation to require user key-presses in order to handle async events (like signals)).
Heh, good to know. I would have written it in Java or Python, but then I don't even know how these things (well, Java) would be able to communicate with "a system". Anyway, thanks for the message.
Xen wrote:
Not sure why all good engineers seem to be women.
Maybe it's that there are so few in the field -- the ones that have stayed in the field for any length of time tend to have done so because they really like the field? Vs. w/males: because there are so many in the field, the good ones don't stand out as "memorable enough" from the crowd?

I don't know about other women, but unlike a bunch of people who entered the field in the late 1990's and early 2000's because of the "dot com" bubble and high salaries, I really am an engineer & scientist -- w/an engineering degree in Computer Science. My alma mater had 2 Computer Science degree tracks: one in the Liberal Arts school with more emphasis on math, and the other in the Engineering school with more emphasis on systems and OS design.

Though, more often than not, I don't think I'm that good as much as I'm able to make mistakes and correct them more quickly than most. More often than not, I try not to get hung up on things that don't work, and instead try to focus on fixing them...
But the new cygwin-64 tools run native, and run just fine from a win7-64 repair console, so maintenance is that much easier now.
Do you mean you run them from your harddrive after setting up path?
If you are on a 64-bit machine, 32-bit programs don't run natively, but use a special 32-bit adaptation layer (or subsystem) that resides in C:\windows\syswow64 (Sys-Windows-on-Windows64). If you boot into the "recovery console" used for system rescue, programs that depend on other add-on layers/subsystems like the GUI, POSIX, and SysWoW64 can't run, as those extra subsystems aren't loaded.
It's not linux, but the unix tools available on linux OR on cygwin64 -- which you can run under win7's repair console.
Buh. Cygwin or not cygwin, I never really liked that. I used GnuWin32 tools but I can't say I have ever really used cygwin, even when I had it installed. Stuff didn't work. I don't know why.
--- The gnuwin32 tools don't use or try to provide a 'POSIX-compatible' layer, whereas Cygwin does try to provide such a layer. I've taken some of the linux source rpms and built the binary rpms using 'rpm[/build]' on Cygwin -- because many of the same linux-like utils are there (bash and most of the command-line utils, vim/gvim -- even some of the linux desktops have been ported).
I think I had trouble with e.g. slashes in filenames (backslashes). A think I tried to do in cygwin just wouldn't work whereas in Linux it was or would have been easy and flawless. I gave up.
--- Cygwin uses 'slashes', same as linux; the Gnuwin32 and Gnuwin64 tools try to work with Win backslash-type paths. I've yet to see a gnuwin version of bash, though, but you do have bash w/cygwin. Cygwin isn't as fast as running linux native, but is comparable in user code and sufficient for shell scripting.
I think pipes (named pipes as well) should in some way remain or become the way of interprocess communication as well. No matter how it should be implemented, the pipe should be the future.
Not always ideal if you are using multiple processes -- even in linux, it's not possible to do 2-way communication over a single pipe (a pair of pipes, yes, but you can't easily set up 2-way communication between processes in standard shell or bash).
I'm not sure. In principle the concept of a pipe extends beyond one-way and can be multiplexed and all that. I've never done any multi-process or IPC communication myself. Let's say my experience is limited to doing these things in Java with threads. But that's not, that's hardly something you can just create in a short time. Java is not exactly suited for scripting.
True... but even just '2-way communication' is hard in bash -- i.e. you can't easily set up a pipe to 'sort' and read the output back from the same process. I think 'co-processes' are supposed to allow that, but I never got them to work reliably.
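For what it's worth, here is the smallest coproc round-trip to 'sort' I know of that behaves reliably in bash. The data is made up; the two usual gotchas are closing the write end so sort sees EOF, and copying the fds out of the COPROC array before bash unsets it when the coprocess exits:

```shell
#!/bin/bash
# Two-way talk with 'sort' via bash's coproc -- the exact case above.
coproc SORT { sort; }
wfd=${SORT[1]} rfd=${SORT[0]}   # copy fds: bash unsets SORT on coproc exit

printf '%s\n' banana apple cherry >&"$wfd"
exec {wfd}>&-                    # close write end: sort sees EOF, then emits

result=''
while IFS= read -r line <&"$rfd"; do
    result=${result:+$result }$line
done
echo "sorted: $result"           # sorted: apple banana cherry
```

The unreliability people hit is almost always the missing close: sort buffers everything until EOF, so reading before closing the write end deadlocks.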
I also have no perl experience which makes it a bit hard.
If you start out simple in perl -- and use it for 1-liners -- it's not that hard. Perl started out as a combination of shell+awk+sort+sed+tr -- all the common unix utils, but in 1 program. Perl 5 added some basic object-oriented support, which allowed it to be more useful as a general-purpose programming language, but the language has mostly stagnated since v5.6 (it's now at 5.22).
Cool how you design stuff. I like your designs, but then I already knew that kinda in advance that I would, because I / we agree on so many things. I will need to find out at some point how these things are being done. For now the only machine I use for flac -> mp3 has one core :P :P :P :P (is VPS).
Even w/1 core, you can improve efficiency (though it's usually more worth your money to get more cores), as file I/O isn't usually handled by the main processor but by a disk controller that can do DMA in the background while the foreground process does the compute work (wav->flac or wav->mp3).
It would have been really painful to try to do that with pipes alone, since they only buffer ~8k/pipe -- there would have been a lot of overhead in process switching.
Muh, you know 30x as much as I do. I feel... rather jealous really :)
These days, the amount of stuff to learn is increasing exponentially with 'old knowledge' often getting outdated.
Damn, I thought I would have been more experienced by now. Things didn't really go as intended. That's why I will start over :p :). I can't live with this jealousy all the time.
Most of the stuff I've learned, I've learned on my own trying to solve my own problems -- but a lot of it I've learned by building and adding features onto other people's programs -- if you have an 'itch' to scratch with some open-source program, you can often compile it and make little changes to see how things work. Later on, you make bigger changes... Hardest is writing programs from scratch -- since you have to build a "virtual framework" first (that's the design part -- when you have the most freedom and the most opportunity to go in a wrong direction). Even though I've had tons of experiences going the wrong way, I still increased my knowledge and learning in those areas.... The problem today is that most people want it done right the first time, and take any 'side' ventures as a waste of time -- even though they usually aren't, because of the additional stuff you learn.
Heh, good to know. I would have written it in Java or Python, but then I don't even know how these things (well, Java) would be able to communicate with "a system".
Well, have to say writing my snapshot prog in shell, 1st, was a learning experience! ;-)
Anyway, thanks for the message.
At least 1 person liked it.... most can't wade through my verbosity. ;-)
On Mon, 28 Sep 2015, Linda Walsh wrote:
Maybe it's there are so few in the field -- that of the ones that have stayed in the field for any length of time tend to do so because they really liked the field? Vs. w/ males, because there are so many in the field, the good ones don't stand out as "memorable-enough" from the crowd?
Could be. But I was more alluding to a sense of "I agree" or "I like". My experience in software (and I haven't been dealing with like-minded people for a long time) is just this... bitter feeling of everyone being of a different opinion and always having to fight to get your point (any point) across. I just don't agree with many of the things of FOSS. Maybe this is getting off-topic or off-list btw.

The fact is that I see a lot of "rational" opposition to my ideas, or at least a rational mindset. You see it also if you play computer games: you come across men (always men, mostly) who treat what they do (or what you do) in the game as a rational enterprise of yield or profit maximisation. They don't care whether what they do is FUN, as long as it yields the best results in the shortest time.

You see it for example with the game of Diablo 3, where people habitually sacrifice their "feeling good" while doing something just to get to some "feeling good" as a result of rewards gained. In the game of World of Warcraft this translates to most males not talking in "group dungeons" and being extremely testy and quick to leave, because any hindrance while doing the "challenge" means time wasted, so they just leave and enter a new one. They get upset, angry, when their time is not being spent on getting loot or experience, but rather on doing something that might or could or would have been easier in a "better group". The fact that you can never meet friends that way (except perhaps the people you might keep running into because they follow the same strategy) doesn't matter to them.

I met most people I met there in "time off" areas, where I just go up to someone and talk. In that particular game also -- but that is perhaps an entirely different subject -- the game mechanics have changed to favour such behaviour.
Prior to the dungeon "finder" change in the 2nd expansion (Wrath of the Lich King), it was much easier to have pleasant, long-running dungeon groups and runs where there was a lot of joking around and ease of mind.

So to get back to linux or software engineering: I tend to get away or diverge from a mindset that says that results should be the indicator of choice. I mean that creation comes from inspiration and insight, and the choices you make, you make because you WANT to make them. And not because of the "results" they promise. In a sense, I am "at cause" in my creations and not "at the effect" of them. I do not let material conditions dictate how I want my system to be, in an important way. I see you doing the same. You make choices because you love them, not because it has to be done in a certain way (according to some other people or some established truth or dogma), and you look beyond meagre improvements or gains towards the bigger picture, where you have to "not jump to conclusions so easily" so you can discern the more important aspects at play.

In this way you can design systems that have much better yields in the long run, because you can look past the short-term gains that other people feel "compel" them to make certain lesser choices. The whole thing about life is that short-term conditions NEVER compel you to do anything, because there is always a bigger picture. And if you can rest easy and confident, you will be able to see that. In that sense, material conditions never dictate your choice; on the other hand, your choice may in the end start dictating material conditions very well.

And I often feel like some "lone voice in the desert". And I look around and pick up a rock, and I like it.
I don't know about other woman, but unlike a bunch of people who entered the field in the late 1990's and early 2000's because of the "dot com" bubble and high salaries, I really am an engineer & scientist -- w/an engineering degree in Computer Science. My alma mater had 2 Computer Science degree tracks: one in the Liberal Arts school that had more emphasis in math, and the other from the Engineering school with more emphasis on systems and OS design.
I'm not really comparing myself or you to those kind of people. They don't really matter to me. I was self-taught at the age of perhaps 8 or 9. I wrote you an email, remember? I believe I did... expressing some of these same sentiments.
Though, more often than not, I don't think I'm that good as much as I'm able to make mistakes and correct them more quickly than most. More often than not, I try not to get hung up on things that don't work, and instead try to focus on fixing them...
Yeah yeah. Trust me, the way you go about or relate what you do, quietly and quickly indicates that you are pretty much in a strong flow, not letting things get in your way, and it seems you have your systems very much the way you want, and you seem to be moving faster than most.
Do you mean you run them from your harddrive after setting up path?
If you are on a 64-bit machine, 32-bit programs don't run natively, but use a special 32-bit adaptation layer (or subsystem) that resides in C:\windows\syswow64 (Sys-Windows-on-Windows64). If you boot into the "recovery console" used for system rescue, programs that depend on other add-on layers/subsystems like the GUI, POSIX, and SysWoW64 can't run, as those extra subsystems aren't loaded.
I meant, like: is that recovery console just the shift-F10 thing, and do you use that to run your cygwin, which obviously needs some PATH to be set up? Or perhaps I don't understand cygwin and you merely need to run some (shell) binary.
The gnuwin32 tools don't use or try to provide a 'Posix-compatible' layer, whereas Cygwin does try to provide such a layer. I've taken some of the linux source rpms and build the binary-rpms using 'rpm[/build]' on Cygwin -- because many of the same linux-like utils are there (bash and most the command-line utils, vim/gvim -- even some of the linux desktops have been ported.
That's quite astonishing. I also don't see much use for it. But I take it you use those programs in Cygwin as well.
True... but even just '2-way communication' is hard in bash -- i.e. you can't easily set up a pipe to 'sort' and read the output back from the same process. I think 'co-processes' are supposed to allow that, but I never got them to work reliably.
Not saying it should be bash. Redirection in bash beyond the simple cases is extremely difficult and counter-intuitive anyway. I mean, you can do stuff there that you need a computer to analyse. Even something simple (redirect err to out and save out in 3, and then, once you have used the result, point 3 to 1 again) is something I cannot remember how to do; I have to copy it from my existing sources. I also couldn't manage, even after 2 or more hours of study, how to simply save some output in a "virtual file" (not a named pipe) within my bash script and use it in another part of the same script. It is very hard in bash to return variables, because most of what you do happens in a sub-shell. So you either do not use any $( ... ) calls, or you store results in global variables. I haven't found anything else yet.
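For the archive's sake, a small bash sketch of those pains -- the fd-3 swap, returning a value without a $(...) subshell, and reading output without losing the variable to a subshell. Function and variable names are made up:

```shell
#!/bin/bash
# (a) The fd-3 juggle: route stderr onto stdout so it can be captured,
#     then close fd 3.  'OUT' escapes to the real stderr; 'ERR' is caught.
swapped=$( { echo OUT; echo ERR >&2; } 3>&1 1>&2 2>&3 3>&- )
echo "captured: $swapped"              # ERR

# (b) "Return" a value from a function without a $(...) subshell:
#     printf -v writes straight into a caller-named variable.
get_answer() { printf -v "$1" '%s' 42; }
get_answer result
echo "result=$result"                  # 42

# (c) Read a command's output into a variable that survives,
#     via process substitution instead of a pipeline subshell.
IFS= read -r line < <(echo hello)
echo "line=$line"                      # hello
```

None of this is pretty, which rather supports the point being made: past a certain complexity, bash redirection is write-only code.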
If you start out simple in perl -- and use it for 1-liners -- it's not that hard. Perl started out as a combination of shell+awk+sort+sed+tr -- all the common unix utils, but in 1 program. Perl 5 added some basic object-oriented support, which allowed it to be more useful as a general-purpose programming language, but the language has mostly stagnated since v5.6 (it's now at 5.22).
It's not like I'm really complaining about not having perl skill; I also do not want it at this point, because the things you do are not something I really need to do at this point, or perhaps for a long time to come. I was just a bit awed by it. My apologies ;-). Like, when I was still young and more stupid, I printed out both the Picking Up Perl manual and Dive Into Python, and in the end I never read them, because really I was not interested in that after all. And I think I threw both away. I had no practical need for them. I usually, or mostly, or principally only learn when I have a practical need to do something right now, this instant -- or perhaps there is one most urgent thing to do and it requires a bit of study -- but I never study without a practical need.
Even w/1 core, you can improve efficiency (though it's usually more worth your money to get more cores), as file I/O isn't usually handled by the main processor but by a disk controller that can do DMA in the background while the foreground process does the compute work (wav->flac or wav->mp3).
Yeah yeah. For the few times I do this stuff, you can bet learning how to write this sort of thing would be a very bad investment at this point ;-).
These days, the amount of stuff to learn is increasing exponentially with 'old knowledge' often getting outdated.
That's a shame, and it is not even necessary. It is pointless to throw away old stuff that works, but a lot of people (also in Linux) seem to be doing that as a form of "have to improve" craze. Even if the improvement is not an improvement, they are still going to do it.

*Most of what happens these days in society is regression*. I will say it again, more loudly. (whispers) "Most of what happens these days in society in order to progress is actually a regression." I will say that about 70-80% of improvements these days are actually a detriment. And there are very few that see clearly, because they are led by false assumptions of the results of their actions actually being good or helpful or pleasant when they are not.

And I just have this impression that it is mostly men doing this. Not wanting to betray my own sex, and I am not clear on that. And women probably follow suit just as easily. Think of the iPads and stuff: you can't do anything truly worthwhile on that, and they don't notice so much; they are trapped in a spell of "embazzlement" (don't know) at how pretty everything is and whatnot. But it also signifies for them a bleakness, and an escape into something at least moderately fun (or addicting) out of a life that doesn't promise them much.

I don't want to abandon my fellow men, but there is this rational goal-seeking without regard for the actual experience of real people (or even themselves!), and a close-mindedness on short-term goals (are you really benefitting from it? are material rewards more important than e.g. meeting friends? is it that important that that thing will do X or Y when people may not even have asked for it? why are you improving something without listening to your users? who is really benefitting from this? who are you working for, really?). It is a pervasive sense I have these days, and in every instance the rational mindset is to blame. My mother once said she never knew a more rational person than me.
I am a computer scientist/programmer, but I am also a way-finder. I sacrifice much at present just to enable some other people in my life to find the way themselves. Quite stupid I must say. I would rather have done it myself. What was I thinking?
From this stems my jealousy by the way: my own sacrifice in life.
Had I not done that, I would not have been jealous, because I would have gained what you have. So you can see already that the sacrifice is not really paying off. And the rational mindset is all about sacrifice. Usually it is about sacrificing your joy today for the benefits of tomorrow. And the irony is that not only do they create a destitute present, tomorrow is also turning into a desert landscape.
Most of the stuff I've learned, I've learned on my own trying to solve my own problems -- but a lot of it I've learned by building and adding features onto other people's programs.
I never got as far yet as to start diving into Linux or C-type programs. C, perl -- they are rather nasty constructs. Just take the language of the RFCs: they are not very readable, and they use C-like 'grammar' and 'terms'. It is quite nasty. On the contrary, I am more than happy to dive into any PHP program, for instance. They might or might not have changed in the future, but I am not there, or no longer there.
-- if you have an 'itch' to scratch with some open-source program, you can often compile it and make little changes to see how things work. Later on, you make bigger changes...
Quite naturally.
Hardest is writing programs from scratch -- since you have to build a "virtual framework" first (that's the design part -- when you have the most freedom and most opportunity to go in a wrong direction).
I usually go right straight away, but it can also mean I want to experiment and learn things myself, so my knowledge or understanding is not good enough to do right away what I think is most elegant, and I get there in steps.
Even though I've had tons of experiences going the wrong way, I still increased my knowledge and learning in those areas....
Seeing how another has done something is like reading a design. And you can always learn from, or be informed by, the designs of other people, because they contain symbols that may be or get unlocked in your mind by seeing them.
The problem today is that most people want it done right the first time, and take any 'side' ventures as a waste of time -- even though they usually aren't, because of the additional stuff you learn.
Aye and it may mean you use an existing library but you don't know why exactly it is done that way, so now your program depends on external knowledge you cannot replicate. And your program works, but it doesn't feel right. A side venture may just be what you need to get to a higher level, to get some building blocks in place in your life.
Well, have to say writing my snapshot prog in shell, 1st, was a learning experience! ;-)
Ooh, I hope you didn't get paid to do it again and again. Learning experience might start feeling like a punishment then, that way. It is always nice to punish girls :p :p ^^ but all the same it would feel like transporting two heaps of dirt from one place to the other, or perhaps one heap and your mission is to move it, and then move it back again. Now wouldn't it ;-). :).
At least 1 person liked it.... most can't wade through my verbosity. ;-)
I told you in my mail ;-). And perhaps you've noticed by now that there is another person in the room :p. Love ya, bye.
On 2015-09-28 at 11:09 -0400, Anton Aylward wrote:
On 09/28/2015 08:26 AM, Carlos E. R. wrote:
Windows also has a modern scripting capability.
It does now, after many years. I get the impression from what I hear at various meetings and conferences that it is intended for sysadmins and the like, and not for the end user.
That's because the majority of Windows users are plain users :-) Also because most Windows programs are "mouse oriented". What in Windows parlance is called a "power user" will very probably use scripts. -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote:
--- ??? You lost me on this one. W/a backup, you usually concatenate all of the files -- so it doesn't matter if the block size of the source is 512b or 4k; the only thing backed up is the actual size of the data (or less, if you use compression). So really, it wouldn't matter if you went to a 1MB block size WRT backups, as the backups only store the original data -- not the 'slack'.
Linda, you've made a generalization that isn't valid.
Not everyone uses the same backup strategy.
That doesn't mean every backup strategy is just as sound. Some have serious deficiencies that others call perks. And most of it revolves around not needing high-level or sophisticated tools. See, in Linux it is often easier to build a weak system on existing tools that seem to do the job than it is to really build something good that would require more development. This tendency to reuse what exists, and to end up with a solution with the minimal amount of effort, is what produces such bad designs all the time. It is not designed based on beauty, but just on economy.
Some tape methods preserve the gaps in the file. Its one thing to dump your database to text file, a series of SQL statements, and back that up, but some people quite literally back up the database.
I guess it would be easier. Backing up the dump might require additional scripting that you do not have in place yet.
For example, if I back up /var/lib so as to preserve a lot of dynamic configuration and settings (such as DNS, DHCP, the Yast/zypper databases) I also back up the MySQL files, which are also "sparse".
Some are literally sparse: unassigned blocks. Some have fixed sized fields that are not full. It doesn't matter.
Does that mean those sparse files are 'registered' on disk as being, like, huge, while their actual space requirement is very slim -- but if you were to put them in a tar file they would suddenly take up all that "real space"?
You can argue that there are modes of backing up that convert this to actual space, which is why you should dump files and backup the dump. But there are backup tools like rsync which honour the preserve the sparseness.
The funny thing is of course that if you compress it it will also become sparse, but in that case you wouldn't be able to restore the sparse file on restoration.
It depends on the user and the backup strategy.
We've long since established that not everyone runs their system the way you do, Linda. Please don't assume your way is the only way.
She's not saying it is the only way. She is just saying that, from a design perspective, it makes much more sense. The choice for these other schemes basically comes down to cheapness. Reusing the filesystem as a database means you don't have to write your own, or do your own coding around that. I call it abuse, really. There is a widely available script that does incremental rsync, like I said, to another volume or even remotely, and keeps a roll of backups in different directories based on hard links to the first backup (or whatever; hardlinks are really relative). It probably works perfectly, except that you might have millions of files, and each backup adds a whole bunch of hardlinks. So the filesystem is used as the database for the backup. It is very easy and fast to write such a thing, which is why it is being done. But that doesn't mean it is sound or well thought out. It is just fast to code.
On 09/27/2015 12:10 PM, Xen wrote:
On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote:
--- ??? You lost me on this one. W/a backup, you usually concatenate all of the files -- so it doesn't matter if the block size of the source is 512b or 4k; the only thing backed up is the actual size of the data (or less, if you use compression). So really, it wouldn't matter if you went to a 1MB block size WRT backups, as the backups only store the original data -- not the 'slack'.
Linda, you've made a generalization that isn't valid.
Not everyone uses the same backup strategy.
That doesn't mean every backup strategy is just as sound.
It doesn't need to be. I have a one-liner that backs up any one of my less-than-5G partitions to 'the cloud' using rsync. That got a few bells and whistles added, so it has some defaults and looks at environment variables. It's 'sound' as long as *I* am using it, but is a piece of scrap to anyone else. Personal 'one-liners' are like that.
And most of it revolves around not needing high-level or sophisticated tools.
Indeed.
See, in Linux it is often easier to build a weak system on existing tools that seem to do the job than it is to really build something good that would require more development.
No "seems" about it. A lot of the time you don't need something sophisticated. And when you do, you can evolve it. That's how many of the more 'complex' applications came about. "A minimis incipe" -- begin with the smallest things.
This tendency to reuse what exists, and to end up with a solution with the minimal amount of effort, is what produces such bad designs all the time. It is not designed based on beauty, but just on economy.
Strike "bad" from that. "Adequate" and "sufficient unto the job at hand" is very far from "bad". Each day has its own problems. If a minimalist solution, a quick-and-dirty does the job, you can get on with the real issues, the real pressing problems. Which are more likely to be people problems than technical problems.
Does that mean those sparse files are 'registered' on disk as being, like, huge, while their actual space requirement is very slim, but if you were to put it in a tar file it would suddenly take up all that "real space"?.
In short, yes; in reality it depends on the version of tar/cpio you use. RTFM. Also note that cpio can, like rsync, punt the output over the net to another machine :-)
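To make the sparse-file point concrete -- a sketch with made-up sizes, using GNU truncate/du/tar (the allocated-size numbers depend on the filesystem):

```shell
#!/bin/sh
# A sparse file has a large *apparent* size but few allocated blocks.
# GNU tar only preserves the holes when told to (-S / --sparse).
set -e
d=$(mktemp -d)
truncate -s 100M "$d/sparse.img"             # 100M apparent, ~0 allocated

echo "apparent:  $(wc -c < "$d/sparse.img") bytes"     # 104857600
echo "allocated: $(du -k "$d/sparse.img" | cut -f1) KiB"  # ~0 on most filesystems

# With -S, tar records the holes instead of storing 100M of zeros.
tar -C "$d" -cSf "$d/sparse.tar" sparse.img
echo "archive:   $(du -k "$d/sparse.tar" | cut -f1) KiB"
```

Without -S, the same tar command would happily write out every zero byte, which is exactly the "suddenly takes up all that real space" effect asked about above.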
It is very easy and fast to write such a thing, which is why it is being done. But that doesn't mean it is sound or well thought out. It is just fast to code.
You're not paying attention. It may be sound and well thought out *FOR* *MY* *PERSONAL* *USE*. It's just not packaged and generalized for the world at large. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 12:10 PM, Xen wrote:
That doesn't mean every backup strategy is just as sound.
It doesn't need to be.
I have a one-liner that backs up any one of my less-than-5G partitions to 'the cloud' using rsync. That got a few bells and whistles added, so it has some defaults and looks at environment variables.
It's 'sound' as long as *I* am using it, but is a piece of scrap to anyone else. Personal 'one-liners' are like that.
I was talking about something more usable by more people. See, the problem I mentioned earlier is that everyone needs to write his own one-liners, and they are never quite as good as if a few people would sit down and write a different thing that served more people. Anything that involves more development can also become a better product. I say "better" as a way of saying that many designs just require more work. Incremental improvement just doesn't always cut it -- often not at all. Naturally, as a single person faced with this need for a solution, your only real attractive choice is to do what you say: to build those "one-liners". And maybe you will end up in the position where you say: okay, enough is enough, I want something GOOD now.
See, in Linux it is often easier to build a weak system on existing tools that seem to do the job, than it is to really build something good but that would require more development.
No "seems" about it. A lot of the time you don't need something sophisticated. And when you do, you can evolve it. That's how many of the more 'complex' applications came about. "A minimis incipe"
I would agree with the evolving thing, but the only reason you would seem to accept (yes, seem) a minimal solution is because a non-minimal solution (that still agrees with you) is out of reach. And because more sophisticated tools often don't exist, or better practices, or whatever, you are forced to develop that thing on your own. But if you start out from a primitive state, you cannot achieve much. If you start out from a more advanced state, what you can create is much greater. I just have an issue with having to start out almost from scratch.
Strike "bad" from that. "Adequate" and "sufficient unto the job at hand" is very far from "bad". Each day has its own problems. If a minimalist solution, a quick-and-dirty does the job, you can get on with the real issues, the real pressing problems. Which are more likely to be people problems than technical problems.
I don't find much of the stuff we would talk about here (including my own contraptions) really much beyond "adequate for now, given that I cannot achieve anything better anyway". It still might end up being a time waster of some sort, because of its primitive quality or nature or level of progress.

If you treat each day as having its own problems... that is like knowing ahead of time that you need to build or have a house ready, but instead of making a blueprint, you just accept living in a hut first, and then gradually improve the hut. That is fine if you are really starting out with nothing (you need a place to live), but in the world we live in we often don't need to. We are often not starting out from near scratch.

If you did have the "luxury" of living in another house until your project finished, you would use the time you have to build a great house right from the get-go instead of evolving it from something supremely primitive in small steps. And as a result you would build something much greater in less time, because your evolved house/hut might still be a patchwork with no elegant design, since you needed to make do with what you had at every step.
In short, yes; in reality it depends on the version of tar/cpio you use. RTFM.
That's not the real question I asked. Also, I asked you, so there is no need to "RTFM". I thought that acronym had died out by now, but I guess you are a long-lived specimen ;-). The real question I asked was whether you'd have a problem if the sparseness needed to be restored from TAR. Yes, I've probably seen TAR options for sparseness. No need to read the manual for something I am not interested in right now (thank you very much).
You're not paying attention. It may be sound and well thought out *FOR* *MY* *PERSONAL* *USE*. It's just not packaged and generalized for the world at large.
That's just because you have to build your own tools. I don't think you are paying attention to what I mean. I mean that more sophisticated tools can be created with more planning, but that requires that you don't have to fight off every day's disaster or solve some sort of survival game every day of your life.

The whole of Linux seems to be such a thing these days, one that has seen no forethought in its design. It just evolved, but nobody really thought about it. It was just the planning of one day at a time; in a sense, planning no more than one day ahead. And the end result, thus far, is what I call UGLY. The end result thus far is more of a battle-scarred plain with smoke billowing and tears in the ground, bodies everywhere, rather than some beautiful forest. I would never compare Linux to a beautiful forest. It is dark, the sun doesn't shine, there is smoke in the air. If any area of WoW had to compare to it, it would be Burning Steppes/Searing Gorge.

If you play a text-based multi-user dungeon without a map, you end up in one room at a time and can never really look ahead. You never really know where you are, for lack of oversight. That is the same feeling. Having more oversight requires a bigger or more integrated design of the system. It requires something that in a sense separates the stage of design from implementation/development. It requires projecting a vision ahead of time. VISION. Much of Linux/Unix is completely void of any VISION that looks ahead more than 5 meters.
Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote: Linda, you've made a generalization that isn't valid.
Not everyone uses the same backup strategy.
Wrong, and that's not what I do. I use the utility designed for unix FS's.
Yes, if your backup is to convert all the files into a tarball and write that out to long term media, you are correct.
Not so primitive -- incremental backups aren't supported by many tars. LT media? I back up to RAID, so a restore of a FS happens at about 200-400MB/s. I.e. I scale my backup media to the size of what I am backing up. As disk sizes have grown, the need for faster backup/restore has increased at the same time.
But some people simply do disk to disk and archive the disk.
Some tape methods preserve the gaps in the file. Its one thing to dump your database to text file, a series of SQL statements, and back that up, but some people quite literally back up the database.
A DB doesn't take 4kb/datum, so it's not the same thing.
You can argue that there are modes of backing up that convert this to actual space, which is why you should dump files and back up the dump. But there are backup tools like rsync which honour and preserve the sparseness.
Not going to argue that. xfsdump and tar can both preserve sparse files.
We've long since established that not everyone runs their system the way you do, Linda. Please don't assume your way is the only way.
Never said it was. But the original point I made was about poor designs often being at fault for inefficiencies. If someone chooses a poor design for backup -- the same applies.

I've tried other backup solutions and ended up returning them for money back when they couldn't restore in a reasonable time. One solution that was great in regards to the granularity of backup -- each change to a file was backed up -- did a file system restore at 100K/s... on a 1G filesystem... it would have taken 3-4 days to restore 1 partition. Others have been similar. If I can't restore the lost files within a reasonable time -- minutes to hours for a full restore -- it's pointless.

Also, I use different methods for different media types -- which is why I don't want to combine /var, /(root), /usr, /home, etc. They have different needs in regards to frequency of update and backup.

The only generalization I used was that if an intelligent backup strategy was used, then the problems of a 4k block size could be minimized. If you can prove a counter case to that, feel free to call my generalization invalid.

It's the same with many solutions these days -- some throw faster processors and more space at solutions to avoid the cost of better design. They get what they pay for.
On 09/27/2015 06:34 PM, Linda Walsh wrote:
Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote: Linda, you've made a generalization that isn't valid.
Not everyone uses the same backup strategy.
Wrong and that's not what I do.
I am NOT wrong when I say "Not everyone uses the same backup strategy."
I use the utility designed for unix FS's
Yes, if your backup is to convert all the files into a tarball and write that out to long term media, you are correct.
Not so primitive -- incremental backups aren't supported by many tars.
They only need to be supported by ONE version of tar if it's the version that comes with Linux -- with Suse, RH, Mageia. I don't know about the non-RPM systems like ubuntu. See http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html and http://www.unixmen.com/performing-incremental-backups-using-tar/ and http://paulwhippconsulting.com/blog/using-tar-for-full-and-incremental-backu... and http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/cha... I never said TAR was 'primitive'. The tar command may be a primitive, aka fundamental, in the way that many binaries in /usr/bin are 'primitives' around which we construct more complex scripts.
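A minimal sketch of the incremental dumps described in the GNU tar manual linked above; the file names here are invented for the demo.

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d)
echo one > "$src/a.txt"

# Level 0 (full) backup; tar creates/updates the snapshot file as a side effect
tar -C "$src" -cf "$dst/full.tar" --listed-incremental="$dst/snap" .

# Change the tree, then take a level 1 backup against a COPY of the
# level 0 snapshot (copying keeps the level 0 state reusable)
echo two > "$src/b.txt"
cp "$dst/snap" "$dst/snap.1"
tar -C "$src" -cf "$dst/incr.tar" --listed-incremental="$dst/snap.1" .

# The increment carries b.txt but no second copy of the unchanged a.txt
tar -tf "$dst/incr.tar"
```

As the surrounding discussion notes, this is a primitive: in practice you wrap it in a script that manages snapshot files, levels, and rotation.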
But some people simply do disk to disk and archive the disk.
You can argue that there are modes of backing up that convert this to actual space, which is why you should dump files and back up the dump. But there are backup tools like rsync which honour and preserve the sparseness.
Not going to argue that. xfsdump and tar can both preserve sparse files.
Indeed. So does cpio, but it does it badly. It writes out sparse files as blocks of zeros, but the "--sparse" option on reading in restores them to be sparse.
We've long since established that not everyone runs their system the way you do, Linda. Please don't assume your way is the only way.
Never said it was. But the original point I made was about poor designs often being at fault for inefficiencies.
There's a joke about a fighter pilot and a transport pilot that tells a lot about attitudes towards 'efficiency'. Often 'efficiency' is confused with speed. If I want speed I'd get more memory and a faster CPU with more cores, a faster SSD, or perhaps not even use Linux! IBM has some damn fast database systems that go back long before SQL and are used by banks and airlines. Why do you think your credit card swipes come back 'verified' so fast? That's not MySQL you're seeing! That's not Ruby on Rails or PHP. "Efficient" can also mean maintainable, sometimes under adverse conditions by poorly trained people. The DC9 has a reputation of being serviceable in jungle conditions with the most primitive of machine shops making spare parts.
If someone chooses a poor design for backup -- the same applies.
It's not a poor design; it's either adequate or not. I keep saying Context is Everything, and it is. My context is not yours. I've designed my system to be able to be quickly backed up or restored using DVDs. Perhaps your multi-terabyte databases would prove inefficient if you tried backing them up in 5G chunks! But in my *context* it's quick, easy, and because it's not inconvenient, it gets done, so it's an efficient /system/. Context is Everything.
I've tried other backup solutions and ended up returning them for money back when they couldn't restore in a reasonable time.
And there you have it. It's the ability to *restore* that is the crucial issue. For me, the DVDs have a file system image. I can simply 'cp' or 'rsync' any file, set of files, or directory or tree that I want, and even do it by patterns, date, and any variant of pick-and-choose using 'find'. Context is Everything.
Also, I use different methods for different media types -- which is why I don't want to combine /var, /(root), /usr, /home, etc. They have different types of needs in regards to frequency of update and backup.
Yes, that is important. A full system image backup can be very inflexible. It can also be a disaster, as one of my clients once found, if you make a mistake in the command line doing a restore!
The only generalization I used was that if an intelligent backup strategy was used, then the problems of a 4k-block size could be minimized. If you can prove a counter case to that, feel free to call my generalization invalid.
There are many tools where the 4K vs 512 is not the issue. Using 'tar' to a tape, the blocking issue is a completely different matter! It's about buffering, not space allocation! What? Tape? Well yes, 512-byte blocks eat a lot of tape: a header and trailer/checksum for each.
It's the same with many solutions these days -- some throw faster processors and more space at solutions to avoid the cost of better design. They get what they pay for.
There you go again, assuming that 'efficient' means 'speed'.
I still have a mail in the pipeline but I just wanted to respond here with short comments. On Sun, 27 Sep 2015, Anton Aylward wrote:
On 09/27/2015 06:34 PM, Linda Walsh wrote:
Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote: Linda, you've made a generalization that isn't valid.
Not everyone uses the same backup strategy.
Wrong and that's not what I do.
I am NOT wrong when I say "Not everyone uses the same backup strategy."
She meant the first statement, not the second. Poor quoting.
Not so primitive -- incremental backups aren't supported by many tars.
They only need to be supported by ONE version of tar if it's the version that comes with Linux -- with Suse, RH, Mageia. I don't know about the non-RPM systems like ubuntu.
Incremental tar is pretty primitive. You need a solution wrapped around it, a suite of scripts (or even a single script) to make good use of it. Tar is also rather primitive by default in ways of knowing the progress of your backup. I tried using some SIG feature (signal) but have failed thus far. Rsync is light-years ahead in that regard, and that's really only a small step. Actually I was doing just that: writing a tar script (combined with LVM snapshots) with a nice user interface (just readline menus), but I quit doing it for now. I also wrote a script that can extract the dumps from the tar, compare them to an incremental tar, and then show you the differences between the two (additions, removals, updates) -- something that is not supported or offered by default. Forgot to upload it; it would have been on github by now.
Often 'efficiency' is confused with speed. If I want speed I'd get more memory and a faster CPU with more cores, a faster SSD, or perhaps not even use Linux!
What she means is that poor designs cause more work, the way I have been saying. If you spend 30 hours designing something and then 60 building it, you will in the end have a vastly more efficient system than if you spend 2 hours designing it (or not even that) and then 30 building it. The reason is that the better system is going to save you time. Often software requires an investment; it pays off in the long run.

Perhaps "better" is subjective, as you have been indicating. In that case a more objective statement would be that, in general, the more time you spend in advance, the more time you will save later on. Is that system better? It is a more efficient way of spending time. It is not better; it just saves time. If you have no need for these time savings, then the system might not be better (if you are the one having to build it), because in that case the investment might not pay off. Typically for personal use the investment/payoff balance is different, as you have indicated.

But the more people work together, or the more products are created that follow designs that are more elegant but require more time to build, the more time is in the end used efficiently and effectively across the board. This is then what we call "good design" or "better design". But if these solutions do not exist and you need something working in 2 days (or less), then obviously it won't be "better" to start designing a long-term system.

I mean, I agree with that, Anton. The reason I do so much scripting myself is that I sense and see that the ready-made solutions are often things I disagree with and that take time to learn. Better then to use the 'primitive' tools that I already know; I can whip up something myself faster than it takes to learn the tools I disagree with. And the benefit of that is that: a) I get something that is perfect for me, and b) I become more adept at using what is already there, including libraries I write myself for these purposes.
"Efficient" can also mean maintainable, sometimes under adverse conditions by poorly trained people. The DC9 has a reputation of being serviceable in jungle considerations with the most primitive of machine shops making spare parts.
And you see (perhaps) in Himalayan and Turkestan countries that they all use vehicles that are older, more robust, and more serviceable in adverse weather conditions. No fancy modern cars with electronic systems that break down and are not repairable by the common mechanic. They are probably not even common mechanics; they are experts. People in such regions need to be able to fix everything they use. But that doesn't really negate, but rather informs or reinforces, the fact that proper thinking in advance of construction may yield rewards later on. Just think of any computer game. You can't design or build a computer game in incremental steps. It's gotta have ONE release. It needs to be thought out in advance, completely designed, and then built. There is no time or even a reason to do it in any other way.
It's not a poor design; it's either adequate or not.
I keep saying Context is Everything and it is. My context is not yours.
If "poor" is subjective and relates to good/bad/better/worse, then perhaps you might be right. But the word is used in the sense of an implicit context. It is assumed (and rightly so, I believe) that the needs of many users converge to a single point. Given then, a common interest, or a shared need across many people, we can call one design poor and the other good.

But "poor" is less subjective than "good". Poor also means the opposite of rich. Perhaps a car that can only run for 10 miles is adequate to your needs (or at least you think it is), but that doesn't define it as "rich". It is perfectly clear that a car that can instead run 100 miles would most likely be a much richer design. Richness is then a measure of complexity and feature-abundance. It is also a measure of power and operational possibility, i.e. the condition of not needing a fundamental redesign to support a higher-level function.

If you always design incrementally, you may run into a necessity of constantly redesigning your system because you didn't realise beforehand, or take into account, higher-level needs. A richer design is also a simpler design, and because of its simplicity it can do more. It has fewer obstacles or hindrances embedded in its design. You can say that the car that can do 10 miles is adequate to your needs (at least today), but you might have been mistaken about that and realize the next day that you need it to be able to do 15 miles, and now you have a problem, because you need to redesign and rebuild the thing, whereas if you had been less indigent you would be enjoying the fruits of a richer labour.

This is what Linda calls poor and rich design, if I may be so bold. It is really a quite general assumption or idea. I don't think many people would disagree with it.
I've designed my system to be able to be quickly backed up or restored using DVDs. Perhaps your multi-terabyte databases would prove inefficient if you tried backing them up in 5G chunks! But in my *context* it's quick, easy, and because it's not inconvenient, it gets done, so it's an efficient /system/.
Context is Everything
Take care that you are not limiting yourself in what you want to do just because you feel your current system *should* be good enough for you. This DVD system... one day you may start feeling annoyed by it, but you feel it "should" be adequate. 5GB file systems are also a pretty big concession (or even sacrifice, if I may say so) to make. You have basically designed your filesystem/computer setup/structure around the /SIZE of DVDs/. That is a VERY odd thing to do. That strikes me as "poor", and that is not something subjective. It is relative to other things that are "less poor", but it is still objective even if it is not absolute. In the absolute, everything is all things. In the absolute, a thing is both poor AND rich. But in relative terms, to me this comes across as 'poor', because it requires you to make so many concessions.

So just to be quick I will say that context determines good/bad, but it does not determine poor/rich. Relation to other designs determines poor/rich. Adequate is related to needs. Needs may change over time. You may also lie to yourself about your needs and accept a lesser solution than you could have had (based on your experience and what is possible to experience). A poor solution may also keep you stuck in a situation where, progressively or increasingly, your needs are no longer met. But it can also mean that you have a reason to deny that your solution is poor.
It's the same with many solutions these days -- some throw faster processors and more space at solutions to avoid the cost of better design. They get what they pay for.
There you go again, assuming that 'efficient' means 'speed'.
Actually she is saying, or appears to be saying, that "speed" causes "inefficiency" to be less noticeable. That does not mean speed and efficiency are the same thing, although in this case efficiency IS related to the time it costs you.
Congratulations, Xen; this is more reasoned, less frantic, than your normal responses :-) On 09/28/2015 06:40 AM, Xen wrote:
What she means is that poor designs cause more work, the way I have been saying.
I would agree, but only if there is a "can" in that statement. You later go on to cite the use of older, primitive, less-designed cars in fringe areas, where, because of their simplicity of design, they are more maintainable than a more sophisticated design. I gave the example of the DC9 similarly. The term "poor" is incorrect. I would be much more likely to grant you an agreement if you guys used the term 'inadequate'. There are plenty of 'quick and dirty' designs that are perfectly adequate. Myself, Linda, and many here have enough experience to throw together a quick shell script on one line, no need for a file, or a perl/sed/awk one-liner that gets the job done. No need for hours of thought or hours of development. "Use once, throw away". We come from a UNIX generation where the CLI is natural; people coming from a GUI world like Windows (or possibly the Mac) don't naturally think that way, don't naturally see a quick solution. Constructing an application with a GUI takes more thought, effort and time. Perhaps that's the basis for your estimations. As I keep saying, Context is Everything.
If you spend 30 hours designing something and then 60 building it, you will in the end have a vastly more efficient system than if you spend 2 hours designing it (or not even that) and then 30 building it. The reason is that the better system is going to save you time.
I can't see the reasoning behind that. Perhaps if you don't have the experience and have to sweat it all out; perhaps if you are doing things as a GUI. I really can't see it. Is this the way they are teaching at schools and colleges now?
Often software requires an investment, it pays off in the long run. Perhaps "better" is subjective as you have been indicating.
There's no "perhaps" about it. Many people here are quite clear that the software set and environment we call Linux is better FOR THEM than the software set that we call Windows. You can call it a religion, but most people on each side simply don't want to get into a religious war or a pizzing match. They have something that works FOR THEM and that's OK. Often the investment is not in the specific software, as the many people selling courses on each upgrade of MS-Office would have you believe, but in attitude and technique.
In that case a more objective statement would be that in general the more time you spend in advance, the more time you will save later on.
As a more general statement that is a good one, but it should not be applied in the way you are trying to apply it here. I've often quoted http://www.zdnet.com/article/why-many-mcses-wont-learn-linux/ This isn't about right or wrong. It's about different styles and attitudes, just as there are different attitudes towards sports, mathematics and science in North America, Europe, Russia and China that are often difficult to express. The time I spent in my youth learning how UNIX works, poring over the manuals, experimenting, working my way through the examples in the White books, has paid off, yes. It means I can quickly make design decisions. I don't need the 30 hours/60 hours. I can quickly eliminate many of the less functional design decisions, the time that would be wasted on trying out or coding 'dead ends', and focus on getting results that can later be refined. This 'first to market' approach is a valuable skill. While you might see it as a fail, that is because you don't have the experience that I'm talking about to avoid the things that would be a !FAIL! in the first product. Benz, the Wright brothers, the Dodge brothers and many more were not neophytes; they were experienced engineers when they built their first models.
But the more people work together, or products are created that follow designs that are more elegant but require more time to build, the more in the end time is used more efficiently and also more effectively across the board.
Again, you come close to the mark. The issue when you are doing work with a team is that how they are managed determines how effective they are. Having a team of geniuses is more problematic than having a team of mid-levels who can be kept focused together on the one objective and keep the scope constrained. The geniuses are like a herd of cats, each with their own brilliant ideas, each trying to go their own way. It's why companies like Microsoft and IBM and others can Get Things Done on large scales. There is nothing new about this: the Roman legions were effective against hordes of powerful barbarians because of organization and discipline; the British armies in Africa and India ditto. The British "Raj" was actually a very small group but managed a huge country. It's not about right or wrong; it's about being effective. You may say that the attrition in these cases was wasteful and hence not efficient, but that depends on your POV and objectives.
This is then what we call "good design" or "better design".
Good and better are subjective and context dependent.
But if these solutions do not exist and you need something working in 2 days (or less), then obviously it won't be "better" to start designing a long-term system.
And once again we come to the 'use once and throw away' capability that the UNIX/Linux CLI permits, something that you simply don't have with a GUI-based system. Another property that emerges from the White books is the component-ware attitude of pipes and filters, once again a CLI issue.

Many of these one-liners can be put in a file and that file made executable. As far as the system is concerned they execute just like anything else. The hash-bang convention allows for any scripting language. So the executable can be called and used with pipes and filters and cron and other automation tools. UNIX/Linux is easily extensible.

Once upon a time, in the V6/V7 and early BSD days, UNIX fit on a 5 megabyte RL01 or perhaps a 10M RL02. I must admit that I had a liking for the smaller-capacity, more 'flying saucer' http://www.pdp8.net/rk05/pics/small/pack.jpg that was a precursor, perhaps, to the Millennium Falcon :-) This limited disk space meant that many of the utilities we now have as binaries were implemented as scripts. They took up less space, even with the old V7 file system, which wasn't nearly as space efficient as more modern file systems. They made the system viable.

You can argue that the binaries are faster. It's a bit of a weak argument, since there is the load-and-go overhead, whereas with a script the shell was already loaded. You can certainly argue that replacing the script with a binary requires more hours of thought in design and hours of coding. But when do you get to the turnover point where that time is actually paid back by the 'more efficient' execution of a binary rather than a script?

*IF* you take a business view, or you are in the closed source business, then having a binary rather than a script makes sense. I met that with some UNIX applications in the SCO days. The Progress Database system was a scriptable set-up, but the scripts could be compiled, which was faster.
Some vendors supplied pre-compiled packages with no source but a useful, or at least usable, set of documentation of the "hooks". From a business POV this was more 'efficient' than giving away the source, they reasoned. However it also meant that they had to offer a level of technical support that giving away the source might have obviated. The context here is difficult to determine; perhaps a conflict between the older attitudes towards software and the "open" attitudes of UNIX.
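The "one-liner in a file" idea above, sketched concretely; `biggest` is an invented name, not a real utility, and the temp directory stands in for somewhere on $PATH.

```shell
set -e
bindir=$(mktemp -d)
cat > "$bindir/biggest" <<'EOF'
#!/bin/sh
# biggest [N] [DIR] - print the N largest entries under DIR (defaults: 5, .)
du -ak "${2:-.}" | sort -rn | head -n "${1:-5}"
EOF
chmod +x "$bindir/biggest"
# Thanks to the hash-bang line it now behaves like any other command,
# usable in pipes, cron jobs, and other automation
"$bindir/biggest" 3 "$bindir"
```

Dropping the file into a directory on $PATH is the only step left to make it indistinguishable from a binary utility, which is the extensibility point being made.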
I mean, I agree with that, Anton. The reason I do so much scripting myself is that I sense and see that the ready-made solutions are often things I disagree with and that take time to learn.
You have that freedom with Linux. It's a freedom that the greybeards grew up with, and it has driven a slice of software (and hardware) development for more than half a century. It's why UNIX and later Linux has been a prototyping and development platform for so many things in that time.
Better then to use the 'primitive' tools that I already know, and I can whip up something myself faster than it takes to learn the tools I disagree with.
You have that freedom. It's a stage many of us go through. David Rankin, for example, has a huge library of shell aliases. Talk to him about that approach.
And the benefit of that is that: a) I get something that is perfect for me,
As in it meets your specific context. That is important because Context is Everything
But that doesn't really negate, but rather informs or reinforces, the fact that proper thinking in advance of construction may yield rewards later on.
I doubt very much that the designers of the DC9, or of those classic Mercedes and Range Rovers in use three quarters of a century later, maintained by skilled and imaginative "natives" with primitive tool shops and parts, ever thought about that. They designed to solve an immediate problem, often for cheap and easy mass production. Perhaps the success of those mechanisms was due to the sheer *LACK* of thought that went into the design!
Just think of any computer game. You can't design or build a computer game in incremental steps. It's gotta have ONE release. It needs to be thought out in advance, completely designed, and then built. There is no time or even a reason to do it in any other way.
Contrariwise, I've seen the original text-based 'Adventure' that I played on a PDP-11 in the 1970s, written in FORTRAN and later converted to C to make it more maintainable, evolve step by step into a more sophisticated model with 'plug-in' mazes; later to have a "maze creation language"; then evolve into an on-line version and eventually to have a GUI -- well, first a block/icon graphics interface on the old PC. If you've missed how some games evolve then you've missed out on a wonder!
It's not a poor design; it's either adequate or not.
I keep saying Context is Everything and it is. My context is not yours.
If "poor" is subjective and relates to good/bad/better/worse, then perhaps you might be right. But the word is used in the sense of an implicit context. It is assumed (and rightly so, I believe) that the needs of many users converge to a single point.
Yes, that's an assumption that large monolithic companies like Microsoft, GM and more make. They focus on a single product line. Once, Chrysler tried out having options. Someone calculated that the option package would allow for more variations than there were people living in the USA at that time. If you look at the road, there is not much variation in the cars you see; they fall into a number of modalities where the overall shape is similar; the variations in body colour are few and are often in the outliers. And this is a capitalist country! Turn the clock back and look at the variation in products in the old communist regimes of Russia and China before they were broken out of their State Socialist Economies and forced to trade with the world. Even fewer variations in products like cars and other consumer goods. If you are right about the 'convergence' then it's frightening.
Given then, a common interest, or a shared need across many people, we can call one design poor and the other good.
All you are doing here is agreeing with me: Context is Everything
But "poor" is less subjective than "good". Poor also means the opposite of rich.
It's not "poor", it's "inappropriate".
Perhaps a car that can only run for 10 miles is adequate to your needs (or at least you think it is) but that doesn't define it as "rich".
Disagree. There's an old Harry Harrison short story from the 1970s, the first oil crisis. The basic theme is this family that appears "rich" because they have a big car that goes "vroom vroom" when everyone else cannot afford the price of gas. Then, behind the scenes, you learn that the car is clockwork driven and it takes all week to wind it up for that one trip to the grocery store where they impress the neighbours. Yes, the car would only run for 10 miles on clockwork. The clockwork powered the recording of the "vroom, vroom". The family appeared "rich" to their neighbours. The car did the job for which it was intended.
Richness is then a measure of complexity and feature-abundance.
No. You've defined richness that way. But we're getting into philosophy and epistemology here which is so incredibly OT that we should cease.
If you always design incrementally, you may run into a necessity of constantly redesigning your system because you didn't realise beforehand or take into account higher level needs.
Not so. The basic Apache server was intended to be designed, implemented incrementally in a very pick-and-choose manner. It was about "Deferred Design". Of all I can think of, web services are probably the most spectacular example of DD around. The original design didn't take into account PHP, Rails, Struts or a host of other things.
A richer design is also a simpler design, and because of its simplicity it can do more.
Be careful here, you are shifting your definition. Simpler means less features. There are many simple designs that are not extensible the way Apache is.
It has less obstacles or hindrances embedded in its design.
To do that it must have fewer features. q.v.above.
You can say that a car that can do 10 miles is adequate to your needs (at least today), but you might have been mistaken about that and realize the next day that you need it to be able to do 15 miles. Now you have a problem, because you need to redesign and rebuild the thing, whereas if you had been less indigent you would be enjoying the fruits of a richer labour.
Yes, if the assumption is that the car is for travel rather than impressing the neighbours.
This is what Linda calls poor and rich design, if I may be so bold. It is really a quite general assumption or idea. I don't think many people would disagree with it.
I'm disagreeing mostly because it is inadequately stated.
Take care that you are not limiting yourself in what you want to do just because you feel your current system *should* be good enough for you. This DVD system... one day you may start feeling annoyed by it, but you feel it "should" be adequate. 5GB file systems are also a pretty big concession (or even sacrifice, if I may be so bold) to make. You have basically designed your filesystem/computer setup/structure around the /SIZE of DVDs/. That is a VERY odd thing to do. That counts to me as "poor", and that is not something subjective.
It is subjective simply because it fits the work-flow I have on my home system. It is effective because it makes backups easy; they get done, which is more than I can say about most people I know and their home systems!

Work is another matter. Work, variously, has carousels of tapes, big SANs, effectively unlimited 'cloud' bandwidth, a regular budget, regulatory constraints and demands, paying customers, contractual obligations. That's the work context. I have none of those. My home machine is effectively a 'hobby' machine. That makes it easy to design around those constraints. I never take more than 5G on any one photo project. If I factor out projects, I see that I never have more than 5G of random photos in any one year. My music I organize primarily by genre and origin, and that allows a similar constraint. Email is IMAP and stored on my ISP, and they give me effectively unlimited mail archive space.

The overpowering "good" of this design is that it allows easy and regular backups. I think if I polled most of the people I know and asked them how often, how easily they did backups of their home system, I'd get quite a few embarrassed responses. It's why Apple, Amazon, Dropbox and so forth are making backup of portable devices so easy *AND* *TRANSPARENT*. Most people don't think about backups, don't see it as part of their work-flow. So they simply don't do backups. It has to be done for them. Or they don't do it at all. If you say designing a home system so that backups are an easy and convenient part of the work-flow is not "good", then I think you are naive and foolish. I get them done because they are easy. I think this is a Good Thing(tm). Hence my design is Good For Me. Context is Everything

It is relative to other things that are "less poor", but it is still objective even if it is not absolute. In the absolute, everything is all things. In the absolute, a thing is both poor AND rich. But in relative terms, to me this comes across as 'poor', because it requires you to make so many concessions.
So just to be quick I will say that context determines good/bad but it does not determine poor/rich. Relation to other designs determines poor/rich. Adequate is related to needs. Needs may change over time. You may also lie to yourself about your needs and accept a lesser solution than you could have had (based on your experience and what is possible to experience). A poor solution may also keep you stuck in a situation where progressively, or increasingly, your needs are not or no longer met. But it can also mean that you have a reason to deny that your solution is poor.
It's the same with many solutions these days -- some throw faster processors and more space at solutions to avoid the cost of better design. They get what they pay for.
There you go again, assuming that 'efficient' means 'speed'.
Actually she is saying, or appears to be saying, that "speed" causes "inefficiency" to be less noticeable. That does not mean speed and efficiency are the same thing, although in this case efficiency IS related to the time it costs you.
-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon? -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On 09/27/2015 03:53 AM, Linda Walsh wrote:
Same thing with 'config' files. Unix is still using .rc/.config files, but MS switched to putting all config files for a system or a user in 1 place -- a registry hive. They didn't do a perfect job, but if you have the registry mounted as a file system as on 'cygwin' or as is done in /proc or /sys, you still have file-like access to small config+setting files that can be an interface to a system library or 1-or-more files. MS went to that format about 2 decades ago, and while it could be improved upon, it's still more efficient in terms of speed and storage than 100's or 1000's of tiny files scattered over a disk.
It depends on how you define 'efficient'. The Windows registry represents a SPOF; it has no documentation. Its all-in-one nature means you can't address 'just one thing' very easily. Its location means that the rootFS can't be made RO for embedded/ROM'd systems.

Along the way, UNIX has implemented performance improvements while maintaining clarity. Back in the early 1980s, there was the VAX/VMS vs Berkeley UNIX 4.2 battle: Dave Cutler on the one hand and Bill Joy on the other. Cutler had a very 'optimized' OS: VMS used different file types, each optimized, some indexed. Many of its internal mechanisms used databases. UNIX has plain linear files, arrays of bytes. But no matter what optimizations Cutler did at the machine code level, Joy matched him by using a high level language. Simple design principles, simple architecture won out. Yes, UNIX used many design strategies rather than code optimization. Eventually Cutler was forced to admit Joy was right to use C.

You simply can't generalise about UNIX/Linux when it has seen so many variations and enhancements. But it always starts with simple algorithms until a clear need for something better comes along. Joy's way of addressing /etc/passwd was one example. Having a simple text file, searched linearly, works for a small number of users. The overhead of a "database" management system doesn't make sense for a single user system (as most workstations are) with a few background processes. It doesn't make sense for a small department, perhaps under 50 users. For a school or college, with hundreds, perhaps thousands of entries, that's another matter. Joy adopted the simplest change. Rather than a full blown database he used the same old code but split the file into a hierarchy based on the first letter. Yes, the tree wasn't balanced, but it represented the smallest code change (and hence easier to test and verify and convert), and was completely transparent.
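The tradeoff described above (linear scan of a small flat file vs. the minimal "split by first letter" change) can be sketched as follows. This is a hypothetical illustration, not Joy's actual code; all names are invented:

```python
# Sketch of two lookup strategies for a passwd-style table.
# linear_lookup: the original approach -- scan every entry, O(n).
# bucketed_lookup: Joy's minimal change -- split into per-first-letter
# buckets (he used files; a dict stands in here), each still scanned
# linearly, so the code change is small and transparent.

def linear_lookup(entries, name):
    """O(n) scan of (name, uid) pairs -- fine for a handful of users."""
    for entry_name, uid in entries:
        if entry_name == name:
            return uid
    return None

def build_buckets(entries):
    """Group entries by first letter, mirroring the file split."""
    buckets = {}
    for entry_name, uid in entries:
        buckets.setdefault(entry_name[0], []).append((entry_name, uid))
    return buckets

def bucketed_lookup(buckets, name):
    """Same old linear code, just applied to a much smaller bucket."""
    return linear_lookup(buckets.get(name[0], []), name)

entries = [("root", 0), ("alice", 1000), ("bob", 1001)]
buckets = build_buckets(entries)
assert linear_lookup(entries, "bob") == 1001
assert bucketed_lookup(buckets, "alice") == 1000
```

The point the sketch makes is the one in the text: the bucketed version reuses the linear code unchanged, so it is the smallest, easiest-to-verify step up, not a full database.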
This is not to deny that eventually you need a more conventional database, and may need to share the IAM (identity access management) mechanism across many platforms. But that's a different problem. We've also seen some simple speed-ups: inode caching, pathname caching and more. Something like name<=>uid mapping is a perfect example of caching.

One thing that makes 'efficiency' considerations under UNIX/Linux different from Windows is where the code executes. UNIX grew up with dumb terminals. The code executed on the host, the 'mainframe', and the dumb terminals did the display. When X came along the terminals were display servers, and the client code, the application, ran, as it always had, on the host. With Windows in a similar environment, it's the other way round. The terminal is the client, the host the server. The server supplies the code and it's executed on the not-so-dumb terminal.

Now along comes the VDI model, which superficially appears to be more like the traditional UNIX way: the terminals are 'thin clients', display engines, and the code executes as a Windows virtual machine on the host. But this is NOT like UNIX. The UNIX way shares everything. Yes, each user has his own process, but all the libraries and binary images are shared. The Linux Terminal Server Project http://www.ltsp.org/ https://en.wikipedia.org/wiki/Linux_Terminal_Server_Project follows this old model. It's much simpler than the VDI/"virtual machine per user" model. More efficient and more effective.

I've been to presentations given by CISCO and accelerated storage vendors and been amazed at how they do their cost accounting, and how they need incredibly powerful and memory intense hosts for this, and amazingly fast networks. If I didn't know better I'd think it was there solely to sell-sell-sell more and more expensive equipment. What is deceiving about the LTSP approach is that the 'terminals' are old PCs.
They may only be running as X-terminals, but we have so much computing power going spare that we get a mistaken impression that 'dumb terminals' are really computers. Is it efficient? I have a friend that runs an LTSP variant at his office with just ten stations out of a Dell tower with just 8Meg of memory and a dual core processor. The network is a second hand D-Link switch. For the most part it's office processing; yes, OO/LO modules share a lot of common code, but there's also Thunderbird and Firefox and Google calendar. He talks about Google a lot, since that means mobile/remote and that means salesmen in the field. But I showed him FF, AquaMail and other stuff on my tablet. GoogleDocs is great, but can he run it on his server? Running in the googleplex may not be as 'efficient' as completely 'local', but it's seductively convenient and hence effective.
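The caching point made a few paragraphs up (name<=>uid mapping as "a perfect example of caching") can be sketched in a few lines. This is a hypothetical illustration under the assumption that the underlying lookup is expensive (a disk or database hit, simulated here by a counter); it is not any real system's resolver:

```python
from functools import lru_cache

# Count how many times the "expensive" lookup actually runs.
LOOKUPS = {"count": 0}

@lru_cache(maxsize=None)
def name_to_uid(name):
    """Resolve a user name to a uid; the table stands in for /etc/passwd."""
    LOOKUPS["count"] += 1          # simulates the disk/database hit
    table = {"root": 0, "daemon": 1}
    return table.get(name)

assert name_to_uid("root") == 0
assert name_to_uid("root") == 0    # second call served from the cache
assert LOOKUPS["count"] == 1       # the expensive lookup ran only once
```

The repeated query never touches the backing store again, which is exactly why name<=>uid mapping rewards caching: the mapping is read constantly but changes rarely.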
On 27.09.2015 17:38, Anton Aylward wrote:
The Windows registry represents a SPOF; it has no documentation.
Will you pay me a dollar for every dot-file in my home directory that is not documented?
On Sun, 27 Sep 2015, Anton Aylward wrote:
It depends on how you define 'efficient':
The Windows registry represents a SPOF; it has no documentation.
My experience dictates that using the registry was never hard. When you access it, it is usually based on some tutorial, and then the stuff you want is easy to find. These tutorials (mini snippets), of which there are really thousands upon thousands for every possible use case, could count as "documentation". I certainly don't understand everything about it. But it wasn't that difficult either. I don't like the nature of the centralized thing because, as a monolithic thing, it can break in its entirety, like you say.
Its all-in-one nature means you can't address 'just one thing' very easily.
That's not really true, but you can't have any data safety based on that concept the way it is done. How are you going to back it up? There are commands for that but it is not obvious. The general sense is that if it gets corrupt, it gets corrupt as a whole, as you indicate (SPOF).
Its location means that the rootFS can't be made RO for embedded/ROM'd systems.
Not sure whether that is a problem and it could probably be designed around. I don't think Windows/MS runs into problems like that.
But no matter what optimizations Cutler did at the machine code level, Joy matched him by using a high level language. Simple design principles, simple architecture won out.
It's really funny how you call C high level :).
You simply can't generalise about UNIX/Linux when it has seem so many variations and enhancements. But it always starts with simple algorithms until a clear need for something better comes along.
I think you have a bit of a romanticised or embellished view of the history of Unix.
Joy's way of address /etc/passwd was one example. Having a simple text file, searched linearly works for a small number of users. The overhead of a "database" management system doesn't make sense for a single user system (as most workstations are) with a few background processes. It doesn't make sense for a small department, perhaps under 50 users.
Depends on how much overhead that database is. I think a simple DB is really no overhead at all. I mean, how hard is it to create a simple fixed-size record scheme? When programming with a struct-language like C or Pascal, defining structs and writing them to or reading them from disk is really peanuts.

But it is generally just rather friendly in Linux/Unix that you can use command line tools to operate on them (the files), which is why I don't like journalctl (systemd) at all. But it is also mostly because the tools to operate on "database storage", when it would exist, would be (or are) also not adequate in Linux. Good user interfaces often go missing. The number of good ncurses applications that use menus and colour, I can count on one hand (that I know of). So if you can't rely on your text-manipulation tools (which are the only power of Linux, so to speak) you are pretty much left without.

But seriously, a database doesn't have to be MySQL, and not even SQLite. If you seriously had good tools (preferably graphical) that were user friendly, easy to use and remember, and preferably according to some standard in the Unix world, having a database really wouldn't matter. In fact, a plain text file is already a database. The more you add record structure to it, which eases automation, the more it turns into a DB. But editing a text file by hand is just much easier than remembering command line commands/tools and their syntax to operate on something more complex. I wouldn't mind a binary format for that though, and I even wouldn't mind a binary format for logs; it's just that the access methods must be nice and Unix friendly. What it comes down to is that if you had binary files for several important features of a system, you would need an ncurses app with a menu structure and a good interface to edit those files.
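The "simple fixed-size record scheme" mentioned above really is peanuts; here is a minimal hypothetical sketch (using Python's struct module in place of a C struct, with an invented record layout) showing why fixed-size records make random access trivial: record i simply lives at byte offset i times the record size.

```python
import struct
import io

# Invented record layout: a 16-byte name field plus a 32-bit uid,
# little-endian, no padding -- 20 bytes per record, fixed.
RECORD = struct.Struct("<16sI")

def write_record(f, index, name, uid):
    """Store a record at its slot; seek is just index * record size."""
    f.seek(index * RECORD.size)
    f.write(RECORD.pack(name.encode().ljust(16, b"\0"), uid))

def read_record(f, index):
    """Read one record back by direct offset, no scanning needed."""
    f.seek(index * RECORD.size)
    name, uid = RECORD.unpack(f.read(RECORD.size))
    return name.rstrip(b"\0").decode(), uid

f = io.BytesIO()               # stands in for a file on disk
write_record(f, 0, "root", 0)
write_record(f, 1, "alice", 1000)
assert read_record(f, 1) == ("alice", 1000)
```

That is the whole scheme: pack, seek, unpack. The downside, as the surrounding discussion notes, is that `grep` and friends no longer work on it, so a binary format stands or falls with the quality of its access tools.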
For a school, college, with hundreds, perhaps thousands of entries, that's another matter.
I don't think a system with few users would really suffer from any performance overhead if the number of users was so small ;-). But I mean, I'm just thinking of Java. Manipulating collections and classes in Java is pretty much a given. You don't need anything special to store something in an in-memory object database. Many classes perform this function. It's just that, how are you going to write it out to disk (if not serialisation)?
Joy adopted the simplest change. Rather than a full blown database he used the same old code but split the file into hierarchy based on the first letter. Yes, the tree wasn't balanced, but it represented the smallest code change (and hence easier to test and verify and convert), and was completely transparent.
It's what I said: not beauty of design but speed of implementation.
UNIX grew up with dumb terminals. The code executed on the host, the 'mainframe' and the dumb terminals did the display. When X came along the terminals were display servers and the client code, the application ran, as it always had, on the host.
This is actually a very weird system. Even if computers display stuff, they should still be called clients. An X-server running on the client, and clients running on the place that hosts the applications, is very counter-intuitive.
But this is NOT like UNIX. The UNIX way shares everything. Yes, each use has his own process, but all the libraries and binary images are shared.
Which is a liability and a danger. The result is that the entire system must be congruent with itself, and it introduces that SPOF that you talked about. If a variety of different applications all have to use the same library collection, it becomes nearly impossible to do anything fancy, because you constantly have to ensure that nothing breaks something else. Which results in the Unix/Linux package systems (I only have Linux experience). And it is a LOT OF WORK to maintain a package system, and I think it is a great waste of time, as well as all the countless small updates that just introduce the smallest of new versions, minor minor versions being updated and requiring a system update.

This whole release-early-release-often just stinks. It means you are forever stuck in a development process that never completes. There is never a finished product. There is never a real release. A real release should be able to deal with new requirements by relying on its chosen library subset. It should not need to be constantly updated for feature or whatever improvements; only bug fixes and the like. And even that should be taken care of in advance, not after the fact as it is now. This model supports unfinished products that will get their bugs fixed by users after the thing has already been released. And so you get this constant stream of unnecessary updates. In a commercial application you can just install it and it will work fine for a number of years without issue and without the need to constantly patch. It is a lot less work in the long run. This Linux development model is just a very bad use of time. It seems to save time in the short run (by involving users etc.), but in the end it is really a very bad, inefficient and also ineffective use of time in the real long run, which actually matters.
Also, allowing for multiple versions of libraries (except for some core set that follows the "release once and then keep it stable" model) means your applications can ensure their own consistency and will be able to run on pretty much any system, regardless of "package manager perfection". So the sharing-everything approach is really just a weakness and not a strength, and you don't even need it for the client-server architecture you espouse. Or promote.
Not to ignore, preclude or pre-empt anything esteemed colleagues Andrei or Xen have said, but ... Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote:
Same thing with 'config' files. Unix is still using .rc/.config files, but MS switched to putting all config files for a system or a user in 1 place -- a registry hive. They didn't do a perfect job, but if you have the registry mounted as a file system as on 'cygwin' or as is done in /proc or /sys, you still have file-like access to small config+setting files that can be an interface to a system library or 1-or-more files. MS went to that format about 2 decades ago, and while it could be improved upon, it's still more efficient in terms of speed and storage than 100's or 1000's of tiny files scattered over a disk.
It depends on how you define 'efficient':
The Windows registry represents a SPOF; it has no documentation.
Wrong and wrong. 1: it is spread out into about 10 different sub-files, with user-specific data stored per user. The physical structure is fully documented. The usage of the registry by 3rd parties is as documented as the usage of linux files is by every 3rd party app (enough said there!)
Its all-in-one nature means you can't address 'just one thing' very easily.
--- You ignore my statements about "it not being perfect" and that it could still be improved upon.
Its location means that the rootFS can't be made RO for embedded/ROM'd systems.
The location isn't hard coded as part of the format. User-specific registries stored in user-profiles that can be anywhere is proof of that. Deciding where to put something is not part of the underlying registry specification.
Eventually Cutler was forced to admit Joy was right to use C.
Just like MS's uses for their registry.
For a school, college, with hundreds, perhaps thousands of entries, that's another matter. Joy adopted the simplest change. Rather than a full blown database he used the same old code but split the file into hierarchy based on the first letter. Yes, the tree wasn't balanced, but it represented the smallest code change (and hence easier to test and verify and convert), and was completely transparent.
This is not to deny that eventually you need a more conventional database, and may need to share the IAM (identity access management) mechanism across many platforms. But that's a different problem.
---- The registry's backend is not fixed. Separating hives, using 'layers' in the registry, and adding of security (after the fact), and per-user redirection (ala pam's latest per-user instantiations for multi-level security) are all things that were in the registry from Vista on.
We've also seen some simple speed-ups. Inode caching, pathname caching and more. Something like name<=>uid mapping is a perfect example of caching.
Which the registry has -- where it is needed. In some cases, for OS components, it's worth it to keep the entire thing in memory with an on-disk journal for recovery -- just like linux's most modern file systems.
One thing that makes 'efficiency' considerations under UNIX/Linux different from Windows is where the code executes.
UNIX grew up with dumb terminals. The code executed on the host, the 'mainframe' and the dumb terminals did the display. When X came along the terminals were display servers and the client code, the application ran, as it always had, on the host.
Windows has similar with remote desktops...and has had such since before 2000 -- remember win started 12-15 years after linux.
With Windows in a similar environment, its the other way round. The terminal is the client, the host the server. The server supplies the code and its executed on the not-so-dumb terminal.
That's only 1 config. Remote desktop solutions are the extreme that provides all computing on the server, vs. using mobile user profiles to allow for local caching of frequently used data. MS has an array of levels of centrality vs. spread-out design based on customer needs.
Now along comes the VDI model, which superficially appears to be more like the traditional UNIX way; the terminal are 'thin clients', display engines, and the code executes as a Windows Virtual machine on the host.
That's only the latest way (and not necessarily the most efficient or best) to provide another option. All of these models are still provided in MS. But in linux? Show me a major distro that supports X-terminals and full virt on a server w/dumb client, OUT-of-the-box.
But this is NOT like UNIX. The UNIX way shares everything. Yes, each use has his own process, but all the libraries and binary images are shared. The Linux Terminal Sever project http://www.ltsp.org/ https://en.wikipedia.org/wiki/Linux_Terminal_Server_Project follows this old model. Its much simpler than the VDI/"virtual machine per user" model. More efficient and more effective.
Now you are defining a specific use of the words 'efficient' and 'effective' -- with no point, as MS supports both.
I've been to presentations given by CISCO and and accelerated storage vendors and been amazed at how they do their cost accounting and how they need incredibly powerful and memory intense hosts for this, and amazingly fast networks. If I didn't know better I'd think it was there solely to sell-sell-sell more and more expensive equipment.
Vs. the current linux fad of forcing everyone into cisternd. Linux doesn't supply options or previous compat; MS does -- you can still do data storage on FAT file systems.
What is deceiving about the LTSP approach is that the 'terminals' are old PCs. They may only be running as X-terminals, but we have so much computing power going spare that we get a mistaken impression that 'dumb terminals' are really computers.
Is it efficient? I have a friend that runs an LTSP variant at his office with just ten stations out of a Dell tower with just 8Meg of memory and a dual core processor. The network is a second hand D-Link switch. For the most part it's office processing; yes, OO/LO modules share a lot of common code, but there's also Thunderbird and Firefox and Google calendar. He talks about Google a lot, since that means mobile/remote and that means salesmen in the field. But I showed him FF, AquaMail and other stuff on my tablet. GoogleDocs is great, but can he run it on his server?
MS is deprecating some of the old methods to move (as is linux) to a pay-as-you-go plan. But then Gates is no longer at the helm. Ballmer was all about increasing business profits and ignoring the user. Gates tried for more balance. Ballmer and lackeys = Poettering.
Running in the googleplex may not be as 'efficient' as completely 'local' but its seductively convenient and hence effective.
And how was that different than DCOM/remote RPC? Restful/ajax/googleplex == redesign of MS's distributed computing 15 years later (and MS's design was based on Unix CORBA, which never got off the ground). Until the 'everything in the cloud' movement took off -- because everyone saw that monthly service fees were the way to long term profitability, with local PCs reduced to game consoles (locked in by TPM/trusted execution and secure boot, the same as MS's Palladium suggested 15 years ago) -- MS supported ALL of the previous paradigms, which linux has never done. P.S. I'm not an MS-supporter, I hate MS in so many ways, but linux is moving to the worst of MS's ideas with no choice.
participants (12)
-
Andrei Borzenkov
-
Anton Aylward
-
Carlos E. R.
-
Carlos E. R.
-
David C. Rankin
-
Felix Miata
-
Greg Freemyer
-
greg.freemyer@gmail.com
-
jdd
-
Lew Wolfgang
-
Linda Walsh
-
Xen