[opensuse] gparted, btrfs, and xfs
I made the fundamental mistake of not giving enough space to the / partition, and now I find I cannot install further updates because I am out of disk space. gparted was recommended to me, and I have taken the live ISO and “burned” it onto a USB memory stick. I’m having problems booting my laptop with it (endless cycle of “rebooting in 30s”), but I think I can get around that with the latest pre-release of the ISO. In any case, what I would like to do is shrink /home (on an XFS partition) and use the extra space to grow the / file system (btrfs). Is this even possible? I did not find any documentation on how to do this, but that’s probably because my search terms were not refined enough. Any help would be appreciated. Thanks, Tom
Did you check btrfs filesystem usage / and snapper list? Sometimes you are able to free up space by deleting snapshots and running a balance.

-------- Original Message --------
From: Tom Kacvinsky <Tom.Kacvinsky@suse.com>
Sent: March 23, 2016 9:05:09 AM EDT
To: "opensuse@opensuse.org" <opensuse@opensuse.org>
Subject: [opensuse] gparted, btrfs, and xfs
[snip]
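Concretely, that check-and-clean sequence might look like this (the snapshot number range is purely illustrative, not from the OP's system):

  btrfs filesystem usage /           # allocated vs. actually used space
  snapper list                       # numbered snapshots for the root config
  snapper delete 10-25               # delete a range of stale snapshots
  btrfs balance start -dusage=50 /   # compact half-empty data chunks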
On 03/23/2016 09:05 AM, Tom Kacvinsky wrote:
I made the fundamental mistake of not giving enough space to the / partition, and now I find I cannot install further updates because I am out of disk space.
gparted was recommended to me, and I have taken the live ISO and “burned” it onto a USB memory stick. I’m having problems booting my laptop with it (endless cycle of “rebooting in 30s”), but I think I can get around that with the latest pre-release of the ISO.
In any case, what I would like to do is shrink /home (on an XFS partition) and use the extra space to grow the / file system (btrfs). Is this even possible? I did not find any documentation on how to do this, but that’s probably because my search terms were not refined enough.
Any help would be appreciated.
Yes, anything is possible ... BUT TAKE BACKUPS FIRST!

I got sick and tired of this "provisioning" problem; it's not new, I dealt with it back in the old UNIX days, long before Minix, never mind Linux. Somewhere around the advent of the 500M disk for home computing I started to make use of the Veritas volume manager, which I'd been using a form of on Big Iron UNIX such as AIX. We know that as "LVM" today. It slides a layer of indirection under the file system so that the partition boundaries can be altered dynamically. Some file systems, such as ReiserFS, which I favour, can be grown and shrunk while running.

When I first had to convert to LVM I did it this way: the machine had the ability to support more than one hard drive, so I plugged the 'new' drive in. I partitioned it with space for /boot and for swap; having those on 'real' partitions makes some kinds of debugging easier. The rest of the disk I devoted to LVM. I used the YaST partitioner, which is pretty smart, to create LVs in that VG for the file systems I wanted, some slop space, and some space left unassigned. Using rescue mode I mounted both the old drive and the new, with LVM active, and rsync'd across what I wanted.

Then comes the tricky part. I had to chroot to the new drive, mount everything there, and do the mkinitrd with everything on the new drive. Oh, and make a new MBR. I didn't get it right the first time. To be perfectly honest there's a LOT I don't get right the first time. THAT'S WHY I MAKE BACKUPS!

That was a long time ago. Now I am of the opinion that I can't live without the flexibility that LVM gives me, not only in regard to resizing, but in the ability to deal with multiple spindles in an arbitrary way, do some but not all mirroring, some but not all striping, try out other file systems and throw them away. I'm not saying you couldn't do this with hard partitioning and lots of free disk space and time, but I can do it without ever needing to reboot, all quite casually.

The only time I'd run into your situation is if I ran out of absolute disk space. With terabyte drives that's not likely now. And, more to the point, if I did on the drives I had, then I'd just add another spindle, put an LVM volume on it, extend it as part of the group, and expand onto that.

Of course if you have the BtrFS "one file system to rule them all" approach, where BtrFS has taken over all of your spindle, /boot and /home and everything, then the same thing can apply, but then you wouldn't be in your present situation. Personally I think LVM offers more flexibility. But then I'm a bit of an experimenter. When Linda made a good case for XFS I tried that and eventually made use of it for my videos. I can set up ext4 with different characteristics 'side-by-side' and compare them, then tear it all down. And do this all without rebooting. If a file system claims to be able to expand and contract, I can experiment with that, again without rebooting.

This long weekend I plan to try converting my RootFS from BtrFS to ext4FS, again using an LV created just for that purpose, rsync'ing across. That will require that kind of chroot/mkinitrd dance and a reboot. Of course, being an LV, I can switch back :-) But I will make backups, just in case :-) ALWAYS MAKE BACKUPS!
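In outline, that migration might look something like this with today's tools (a sketch, not a tested recipe; device names, the VG/LV names, and sizes are purely illustrative, and it assumes the old root is mounted read-only at /oldroot from rescue media):

  pvcreate /dev/sdb3                 # the LVM portion of the new disk
  vgcreate vgmain /dev/sdb3
  lvcreate -L 20G -n root vgmain
  mkfs.ext4 /dev/vgmain/root
  mount /dev/vgmain/root /mnt
  rsync -aAXH /oldroot/ /mnt/
  # bind-mount /dev, /proc and /sys into /mnt, then:
  chroot /mnt
  mkinitrd                           # rebuild the initrd for the new layout
  grub2-install /dev/sdb             # write a new MBR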
On Wed, Mar 23, 2016 at 7:05 AM, Tom Kacvinsky <Tom.Kacvinsky@suse.com> wrote:
I made the fundamental mistake of not giving enough space to the / partition, and now I find I cannot install further updates because I am out of disk space.
I'm skeptical this is your mistake. The defaults shouldn't readily get you into trouble unless you have a workload that's a distinct edge case. I'm hopeful there's been a change from 13.2's layout+snapper policy to do more aggressive clean ups or less aggressive snapshotting. I can hardly think of a general purpose use case where more than two rollbacks are really necessary. So OK, throw a bunch more rollbacks at it and make it 5. That's not a lot of trees, but snapper takes piles of snapshots by default.

I think you're better off looking at snapper configuration and getting it to clean up all these snapshots you'll never rollback to.

Otherwise, you're stuck having to migrate /home elsewhere because XFS does not support shrink at all. You'll have to move the data off /home, wipe /home entirely, repartition, reformat, and copy data back to home, and then you can live resize Btrfs.

-- Chris Murphy
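For what it's worth, the knobs for that cleanup live in snapper's per-config file. The variables below are real snapper settings, but the values shown are only illustrative:

  # /etc/snapper/configs/root
  NUMBER_CLEANUP="yes"          # prune numbered (pre/post) snapshots
  NUMBER_LIMIT="5"              # keep at most five of them
  TIMELINE_CREATE="no"          # don't take hourly timeline snapshots
  EMPTY_PRE_POST_CLEANUP="yes"  # drop pre/post pairs with no changes

  # then run the cleanup algorithms by hand:
  snapper cleanup number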
On Mar 23, 2016, at 13:55, Chris Murphy <lists@colorremedies.com> wrote:
On Wed, Mar 23, 2016 at 7:05 AM, Tom Kacvinsky <Tom.Kacvinsky@suse.com> wrote:
I made the fundamental mistake of not giving enough space to the / partition, and now I find I cannot install further updates because I am out of disk space.
I'm skeptical this is your mistake. The defaults shouldn't readily get you into trouble unless you have a workload that's a distinct edge case. I'm hopeful there's been a change from 13.2's layout+snapper policy to do more aggressive clean ups or less aggressive snapshotting. I can hardly think of a general purpose use case where more than two rollbacks are really necessary. So OK, throw a bunch more rollbacks at it and make it 5. That's not a lot of trees, but snapper takes piles of snapshots by default.
I think you're better off looking at snapper configuration and getting it to clean up all these snapshots you'll never rollback to.
Otherwise, you're stuck having to migrate /home elsewhere because XFS does not support shrink at all. You'll have to move the data off /home, wipe /home entirely, repartition, reformat, and copy data back to home, and then you can live resize Btrfs.
Thanks. I’ll dig into snapper. Never worked with btrfs before, so it will take a while. Hopefully there is a good online resource.
On Mar 23, 2016, at 14:09, Tom Kacvinsky <Tom.Kacvinsky@suse.com> wrote:
On Mar 23, 2016, at 13:55, Chris Murphy <lists@colorremedies.com> wrote: [snip]
Thanks. I’ll dig into snapper. Never worked with btrfs before, so it will take a while. Hopefully there is a good online resource.
OK, snapper is pretty easy to figure out. I had only one snapshot, which I deleted, but it did not free up enough space. :-( I’m looking for other stuff to delete. I am thinking the default I took for /, 10GB, is not nearly enough.
On 03/23/2016 02:49 PM, Tom Kacvinsky wrote:
[snip]
OK, snapper is pretty easy to figure out. I had only one snapshot, which I deleted, but it did not free up enough space. :-(
I’m looking for other stuff to delete. I am thinking the default I took for /, 10GB, is not nearly enough.
Yes, 10GB might not be enough, but what's the output from btrfs fi usage / ? You can also try running a balance (btrfs balance start / -dlimit=3) and see if that helps.

-- Regards, Uzair Shamim
On Mar 23, 2016, at 14:54, Uzair Shamim <uzashamim@gmail.com> wrote:
[snip]
Yes, 10GB might not be enough, but what's the output from btrfs fi usage / ? You can also try running a balance (btrfs balance start / -dlimit=3) and see if that helps.
So out of space that not even btrfs balance will run. :-(
On Wed, Mar 23, 2016 at 1:52 PM, Tom Kacvinsky <Tom.Kacvinsky@suse.com> wrote:
So out of space that not even btrfs balance will run. :-(
Try:

  btrfs balance start -dusage=5 -musage=5 /
  btrfs fi df /

And report the results.

-- Chris Murphy
On Wednesday, 2016-03-23 at 18:49 -0000, Tom Kacvinsky wrote:
I’m looking for other stuff to delete. I am thinking the default I took for /, 10GB, is not nearly enough.
If you are looking at deleting things, there is another road: create a directory to hold things under /home. For instance, copy /usr to /home/theusr, bind mount or link it, then delete the original. And disable snapshots, because this move just about negates the advantages of them for "/".

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
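A sketch of that road, using /usr as in the example above (do this from rescue or single-user mode, only with backups; the paths are illustrative):

  mkdir /home/theusr
  cp -a /usr/. /home/theusr/
  mv /usr /usr.old && mkdir /usr
  mount --bind /home/theusr /usr
  # make it permanent in /etc/fstab:
  #   /home/theusr  /usr  none  bind  0 0
  # delete /usr.old only once everything checks out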
On 03/23/2016 01:55 PM, Chris Murphy wrote:
On Wed, Mar 23, 2016 at 7:05 AM, Tom Kacvinsky <Tom.Kacvinsky@suse.com> wrote:
I made the fundamental mistake of not giving enough space to the / partition, and now I find I cannot install further updates because I am out of disk space.
I'm skeptical this is your mistake. The defaults shouldn't readily get you into trouble unless you have a workload that's a distinct edge case.
I believe the default assigns more than 10GB to the root partition.

-- Regards, Uzair Shamim
On Mar 23, 2016, at 17:08, Uzair Shamim <uzashamim@gmail.com> wrote:
[snip]
I believe the default assigns more than 10GB to the root partition.
I probably ****ed it up. I did free some space. I had forgotten I installed texlive (I was wanting to typeset some mathematics again), so I removed that. I also removed perl because that took close to 2GB, but now my laptop is dead in the water. Bluetooth mouse is not working, external keyboard and laptop’s trackpad are not responsive. But that’s another problem; it only happens after boot. At the screen in which you select how you want to boot, everything works fine.
On Wed, 2016-03-23 at 17:08 -0400, Uzair Shamim wrote:
[snip]
I believe the default assigns more than 10GB to the root partition.
I don't know whether it's a default or the OP chose it, but not allowing enough space for the root partition has been a classic mistake since long before Linux existed. Hence Anton's and others' love of LVM, since it could cope.

Cheers, Dave
On March 23, 2016 2:30:49 PM PDT, Dave Howorth <dave@howorth.org.uk> wrote:
[snip]
I don't know whether it's a default or the OP chose it, but not allowing enough space for the root partition has been a classic mistake since long before Linux existed. Hence Anton's and others' love of LVM, since it could cope.
Cheers, Dave
Agreed. Especially with KDE, 10 gig is not enough for me, and it's not just openSUSE that has the default set too low.

-- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
On Wed, Mar 23, 2016 at 3:30 PM, Dave Howorth <dave@howorth.org.uk> wrote:
[snip]
I don't know whether it's a default or the OP chose it, but not allowing enough space for the root partition has been a classic mistake since long before Linux existed. Hence Anton's and others' love of LVM, since it could cope.
LVM is not relevant at all in this case because even with LVM, XFS being used for /home still means it can't be shrunk, and therefore a teardown would still be required.

I don't have a Leap or Tumbleweed installation handy at the moment, but anyone testing should supply feedback for the developers. It's really not OK to have such a small root with a large /home that also uses a file system that doesn't support shrink.

-- Chris Murphy
On 03/23/2016 07:24 PM, Chris Murphy wrote:
LVM is not relevant at all in this case because even with LVM, XFS being used for /home still means it can't be shrunk, and therefore a teardown would still be required.
I disagree. This is why.

Suppose I have /home as XFS in a 20G partition (aka LV) and 'df' tells me I'm only using 10G of that. So I create a new LV, say I make it 15G. Now I have a few options. If I want to keep using XFS I can mkfs.xfs that LV, rsync the contents of /home across, rename the "HOME" LV to "oldHOME" and rename the new LV to "HOME". Of course in my infinite wisdom and foresight (gained by getting it wrong in the past) I choose to have the entries in /etc/fstab mount by name rather than mount by UUID :-)

Another option is to mkfs.reiserfs since ReiserFS can, in the future, be both shrunk and grown, while mounted, without shutting the system down. I'm rather enamoured of that option :-)

I have tried NilFS2, which can be shrunk or grown. If you are using an SSD you might look at this. It worked OK when I tried it, and it works for mounted (aka live) file systems, but I don't see a compelling reason to use it on spinning rust other than its continuous snapshotting. That *might* be relevant for /home for users who make a lot of mistakes and need to retrieve previous iterations of files.

Then, of course, there's ext4FS. This can be resized with resize2fs, and it can both grow and shrink the size of the file system. The downside is that this has to be done off-line, that is with the FS unmounted. (So far as I can tell. If I'm wrong, please correct me and give details.)

My conclusion with all this is that XFS is not an auspicious choice if you have the least uncertainty about provisioning.
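In commands, that first option might look like this (the VG and LV names and the sizes are only illustrative):

  lvcreate -L 15G -n newHOME vgmain
  mkfs.xfs /dev/vgmain/newHOME
  mount /dev/vgmain/newHOME /mnt
  rsync -aAXH /home/ /mnt/
  umount /mnt /home
  lvrename vgmain HOME oldHOME
  lvrename vgmain newHOME HOME
  mount /home      # works because fstab mounts by name, not UUID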
On Wed, Mar 23, 2016 at 5:58 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 03/23/2016 07:24 PM, Chris Murphy wrote:
LVM is not relevant at all in this case because even with LVM, XFS being used for /home still means it can't be shrunk, and therefore a teardown would still be required.
I disagree. This is why.
Suppose I have /home as XFS in a 20G partition (aka LV) and 'df' tells me I'm only using 10G of that. So I create a new LV, say I make it 15G.
Now I have a few options. If I want to keep using XFS I can mkfs.xfs that LV, rsync the contents of /home across, rename the "HOME" LV to "oldHOME" and rename the new LV to "HOME".
How do you create an LV when you have no free extents in the VG to create the LV? If you have free extents in the VG to make an LV, then the analogous layout is unpartitioned free space of the same size, in which case you can partition it, use partprobe to update the kernel, and then 'btrfs add' to add the partition to the existing btrfs volume. It's not exactly a fair comparison for you to say LVM has an advantage when there's free space to make an LV, and yet you don't grant the same free space existing in the non-LVM case, is it?
Of course in my infinite wisdom and foresight (gained by getting it wrong in the past) I choose to have the entries in /etc/fstab mount by name rather than mount by UUID :-)
Another option is to mkfs.reiserfs since ReiserFS can, in the future, be both shrunk and grown, while mounted, without shutting the system down. I'm rather enamoured of that option :-)
I have tried NilFS2, which can be shrunk or grown. If you are using an SSD you might look at this. It worked OK when I tried it, and it works for mounted (aka live) file systems, but I don't see a compelling reason to use it on spinning rust other than its continuous snapshotting. That *might* be relevant for /home for users who make a lot of mistakes and need to retrieve previous iterations of files.
I don't understand the relevance of this to either what I said, or the thread. If you've created a custom layout that reserves free space for future use, you get the credit for doing that. It has nothing to do with LVM. And I'm even less understanding the relevance of other file systems in the discussion.
Then, of course, there's ext4FS. This can be resized with resize2fs, and it can both grow and shrink the size of the file system.
And that would have been useful, despite being an offline-only shrink. The partition could then be changed to fit, a new partition made in its place, and 'btrfs add' that extra partition, done. LVM really doesn't make things easier, faster, or better in the context of this thread. If anything it's more complex, because now there's a whole set of emacs-like things you have to do to get the equivalent of 'btrfs add /dev/sdX':

  pvcreate
  vgextend
  lvcreate
  mkfs
  edit fstab
  mount

That's six commands to one. Maybe I'm missing a command in there; that'd actually be funny and help prove my point.
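Spelled out, with illustrative device and VG names ('btrfs add' written out in full as 'btrfs device add'):

  # Btrfs, one step:
  btrfs device add /dev/sdb1 /

  # LVM, for the same new space as a separate file system:
  pvcreate /dev/sdb1
  vgextend vgmain /dev/sdb1
  lvcreate -L 20G -n extra vgmain
  mkfs.ext4 /dev/vgmain/extra
  # edit /etc/fstab, then:
  mount /dev/vgmain/extra /srv/extra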
The downside is that this has to be done off-line, that is with the FS unmounted. (So far as I can tell. If I'm wrong, please correct me and give details.)
Yes, ext4 is offline shrink only at the present time.
My conclusion with all this is that XFS is not an auspicious choice if you have the least uncertainty about provisioning.
I think XFS is fine. The problem I have is that given it doesn't support shrink, the default layout should devote more space to root than it does. 10GiB is what I'm hearing, and that's pretty small to then also hand over all remaining space to swap and an unshrinkable /home.

-- Chris Murphy
Now completely off topic for this thread, but how LVM could have helped is if /home were on a thinly provisioned logical volume. In that case you can effectively shrink even an XFS volume by using fstrim, which will cause unused logical extents to be freed back into the thin pool, from which you can create a new thin LV, and then 'btrfs add' that LV to the existing Btrfs volume. But as we're talking about fixed conventional "thick" logical volumes, it is anchored to the fact that XFS grows but doesn't shrink. So no such bail-out.

And thin provisioning isn't for the faint of heart, as it adds another layer, effectively another LV called a thin pool, in between the VG and the virtual-sized LV you actually put a file system on. It's... a bit confusing for a while.

Chris Murphy
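A sketch of that layering (the VG, pool and LV names and sizes are only illustrative):

  lvcreate --type thin-pool -L 100G -n tpool vgmain    # real space
  lvcreate --type thin -V 200G -n home vgmain/tpool    # virtual size
  mkfs.xfs /dev/vgmain/home
  # later, hand unused extents back to the pool:
  fstrim /home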
On 03/23/2016 08:46 PM, Chris Murphy wrote:
And thin provisioning isn't for the faint of heart, as it adds another layer, effectively another LV called a thin pool, in between the VG and the virtual-sized LV you actually put a file system on. It's... a bit confusing for a while.
There is that! It's of the "you figure it out, then look away, look back and you've lost it" class of confusing. Sort of like quantum mechanics or some forms of statistics.
On 24/03/2016 01:46, Chris Murphy wrote:
Now completely off topic for this thread, but how LVM could have helped is if /home were on a thinly provisioned logical volume.
I didn't read the whole thread, so I beg your pardon if this has already been said. The original question was about solving the "root too small" problem, not any /home question. It may be hard to move a root partition, but it's always easy to move a /home, especially if, like me, you use /home only for basic work and use another partition (or disk) for very large files. It's then easy to move /home to any USB device, delete /home, and resize / if possible... then recreate a smaller /home.

jdd
On 03/23/2016 08:40 PM, Chris Murphy wrote:
On Wed, Mar 23, 2016 at 5:58 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
[snip]
How do you create an LV when you have no free extents in the VG to create the LV?
I've already dealt with this question a number of times. So long as there is a SCSI slot or a SATA socket free to plug another spindle in, it can never happen. But that's an extreme case in these days of terabyte drives. If you've created a system that said the RootFS is 10G (including /boot) and the rest of your 1T or 2T (or is it 3T these days?) drive is the /home XFS, then perhaps you weren't thinking about provisioning when you did the install.

I would expect that people as experienced as Thee and Mee would think about how much space to allocate now, even if it is to matters like swap, if you're doing the "One File System To Rule All Drives" approach with BtrFS. The OP and many here do seem to make use of partitions. I get the impression that people like Carlos and Felix do a LOT of partitions :-)

The machine I'm typing at has a 1T drive with separate /boot and swap, and pvscan tells me

  PV /dev/sda3   VG vgmain   lvm2 [924.06 GiB / 520.06 GiB free]

I have LOTS of 5G LVs (which back up onto DVDs) and a number of 32G LVs (which back up onto USB sticks), the latter containing extensive video, extensive music, and very extensive photographs by year. If I were to add another spindle to the volume group, the scatter-gather capability of LVM (like that of BtrFS in "rule them all" mode) means I can have an FS that spans the drives, something not possible with conventional partitioning. In fact I can grow a number of file systems so that they span more than one drive.
If you have free extents in the VG to make an LV, then the analogous layout is unpartitioned free space of the same size, in which case you can partition it, use partprobe to update the kernel, and then 'btrfs add' to add the partition to the existing btrfs volume. It's not exactly a fair comparison for you to say LVM has an advantage when there's free space to make an LV, and yet you don't grant the same free space existing in the non-LVM case, is it?
Not quite. With LVM, if there is free space anywhere you can create an LV. Other partitioning methods need contiguous space in order to create a file system. I think I made that clear above. OK, let me make it more explicit: I have a drive with file systems that fully fill it. Small case: I have a 30G drive with three 10G file systems. I cannot grow them on that drive. If that 30G is an LVM VG, I can add another drive, include it in the VG, and now I can grow the LVs and hence the file systems - ALL OF THEM.

And yes, I acknowledge you can do similar with BtrFS. But you can't do that with any file system that needs a contiguous span of a partition, such as just about every other file system we've touched on. Do we need to get to ZFS? I hope not :-)

Yes, there are tools that will merrily shuffle partitions back and forth, claiming (and I hope succeeding) to preserve the file systems on the partitions as they get moved. But that takes time. I've done it. I find it scary! With LVM I get immediate results and I don't have to worry about any problems resulting from sliding partitions back and forth. Of course that may not concern you. As they say, "YMMV".
I don't understand the relevance of this to either what I said, or the thread. If you've created a custom layout that reserves free space for future use, you get the credit for doing that. It has nothing to do with LVM. And I'm even less understanding the relevance of other file systems in the discussion.
Not all file systems can shrink and expand. That's the situation, as you pointed out, that the OP is in. XFS can't shrink.
Then, of course, there's ext4FS. This can be resized with resize2fs, and it can both grow and shrink the size of the file system.
And that would have been useful, despite being an offline-only shrink. The partition could then be changed to fit, a new partition made in its place, and 'btrfs add' that extra partition, done. LVM really doesn't make things easier, faster, or better in the context of this thread. If anything it's more complex, because now there's a whole set of emacs-like things you have to do to get the equivalent of 'btrfs add /dev/sdX':

  pvcreate
  vgextend
  lvcreate
  mkfs
  edit fstab
  mount

That's six commands to one. Maybe I'm missing a command in there; that'd actually be funny and help prove my point.
Or maybe not. You're assuming the need to add a new drive rather than just an LV. Or extend the LV and grow the FS (two commands). (Actually it's one: fsadm resize <device>.) But then in the BtrFS case it is occupying the whole drive, so there would be no need to do anything at all. Right. Let's not even think of ZFS. I very particularly don't want to deal with the case of ZFS.
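For the record, the grow case he means, assuming free extents in the VG (names and sizes illustrative):

  lvextend -L +5G /dev/vgmain/home
  fsadm resize /dev/vgmain/home        # grow the FS to fill the LV
  # or both in one step:
  lvextend -r -L +5G /dev/vgmain/home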
The downside is that this has to be done off-line, that is with the FS unmounted. (So far as I can tell. If I'm wrong, please correct me and give details.)
Yes, ext4 is offline shrink only at the present time.
Yes, late-model kernels support growing ext4 while online :-) You do have to grow the 'partition' first, which is easily done online if it's an LV, but another matter if it's a regular-style partition and that needs reshuffling other partitions using the scary-hairy mode of gparted.
My conclusion with all this is that XFS is not an auspicious choice if you have the least uncertainty about provisioning.
I think XFS is fine. The problem I have is that given it doesn't support shrink,
Which is what the OP was facing.
the default layout should devote more space to root than it does. 10GiB is what I'm hearing, and that's pretty small to then also hand over all remaining space to swap and an unshrinkable /home.
I'm not convinced that 10G is the default root unless the installer is faced with a small disk (say 30G to 50G), but yes, any default that is so weak as to make root just sufficient (plus some slop) and give the rest to /home is making a mess of provisioning. With terabyte drives the installer should strongly suggest space for things like /var, /opt and /tmp, or at the very least, if the RootFS is to be BtrFS and those are to be subvolumes, then recalculate the size of the RootFS to accommodate what those would have been at their suggested sizes had they been separate. A 10G that included all those simply is not adequate. Heck, a 10G without them is not adequate!
On Wed, Mar 23, 2016 at 7:32 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 03/23/2016 08:40 PM, Chris Murphy wrote:
On Wed, Mar 23, 2016 at 5:58 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
[snip]
How do you create an LV when you have no free extents in the VG to create the LV?
I've already dealt with this question a number of times. So long as there is a SCSI slot or a SATA socket free to plug another spindle in, it can never happen.
OK, but then it's the extra drive that's providing the advantage, not LVM. If you have an extra drive to add to LVM, and the OP has an extra drive to add to Btrfs, it's the same. LVM doesn't contribute to solving the original poster's problem; it's the additional drive that solves it.
But that's an extreme case in these days of terabyte drives. If you've created a system that said the RootFS is 10G (including /boot) and the rest of your 1T or 2T (or is it 3T these days?) drive is the /home XFS, then perhaps you weren't thinking about provisioning when you did the install.
Like I said from the start, the default partitioning is suboptimal and the community should ask for it to be fixed. 10GiB root is too small, and it's not the user's fault for not knowing that; the default should apply to the general purpose case and not make it this easy for the user to get stuck, especially considering XFS is not shrinkable.
Let's not even think of ZFS. I very particularly don't want to deal with the case of ZFS.
ZFS doesn't shrink either.
I'm not convinced that 10G is the default root unless the installer is faced with a small disk (say 30G to 50G), but yes, any default that is so weak as to make root just sufficient (plus some slop) and give the rest to /home is making a mess of provisioning. With terabyte drives the installer should strongly suggest space for things like /var, /opt and /tmp, or at the very least, if the RootFS is to be BtrFS and those are to be subvolumes, then recalculate the size of the RootFS to accommodate what those would have been at their suggested sizes had they been separate. A 10G that included all those simply is not adequate. Heck, a 10G without them is not adequate!
Ok so the summary at this point is that we're in pretty much complete agreement.

-- Chris Murphy
On 03/24/2016 12:18 AM, Chris Murphy wrote:
Ok so the summary at this point is that we're in pretty much complete agreement.
On that point of the need to reset the defaults, yes. But the basic difference between us seems to be this: You believe wholeheartedly in BtrFS as the one correct file system. You put forward its benefits well. I no more believe that there should be just one file system than I think there should be just one religion or just one country or just one ethnic culture or just one language. I like the differences. People have had different principles and different ideas about things they want to explore.

OBTW: I think minimising the number of commands required is just the latest iteration of what was once the idea of minimising the number of keystrokes. Personally I think clarity and openness matter more.
On Thu, Mar 24, 2016 at 10:17 AM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 03/24/2016 12:18 AM, Chris Murphy wrote:
Ok so the summary at this point is that we're in pretty much complete agreement.
On that point of the need to reset the defaults, yes,
But the basic difference between us seems to be this:
You believe wholeheartedly in BtrFS as the one correct file system.
I do not. This isn't even an approximation of my belief. I'm use case oriented and tend to defer to the user, as ultimately they're the one who has to manage whatever they create, not me. In my first post on this thread I didn't suggest a Btrfs /home for example. My two recommendations included retaining XFS for /home.
OBTW: I think minimising the number of commands required is just the latest iteration of what was once the idea of minimising the number of keystrokes. Personally I think clarity and openness matter more.
OK, but by that logic using dmsetup instead of the LVM tools is more clarity and openness; do you do it that way? Even more clear and open is using a hex editor to directly modify the metadata on the hard drive sectors. There's also an attribute of tediousness, is what I'm getting at. And while I recognize why there are separate steps for pvcreate, vgcreate, lvcreate, I find it tedious most of the time. There's even pvck, and vgck, in addition to filesystem check.

I'm a fan of well-integrated optional layers that don't get in the user's way when they aren't making use of them. And when they do have a use case for them, they can make the modification easily, safely, and get back to what they were doing, which is almost certainly more interesting to them than file system stuff.

-- Chris Murphy
On 03/24/2016 02:07 PM, Chris Murphy wrote:
In my first post on this thread I didn't suggest a Btrfs /home for example. My two recommendations included retaining XFS for /home.
Every analysis I've done of the advantages of BtrFS for myself leads me to think that when it's finished, when it's done all the deduplication bits and more, it will be of more use for /home than for the RootFS.

I do have mainframe experience and do recall the idea of 'rollback' of updates and patches. BTDT. But a few decades of observing corporate use of PC-based production systems is that PCs are cheaper than mainframes, so it was always economically feasible and possible to duplicate a test environment for releases before putting them into production. BTDT with banks, telcos, insurance and more. This framework is established.

But end users are another matter. End users make mistakes. End users might delete or overwrite a file and want to get back the previous version. Snapshotting of the user space is a major issue. That's why we have http://snapper.io/manpages/pam_snapper.html

I have a similar argument for "shared services", ISPs: that it is better to put user files on an SSD than the core binaries of {/usr,}/bin and {/usr,}/lib, since the shared binaries are going to be loaded once and then stay resident, but the real 'churn' is with the users' application data and they want that loaded fast. Most people seem to disagree and get a glazed look when I discuss this with them. All too often the responses are of the class "proof by assertion" or "proof by the lemmings principle".
On 03/24/2016 11:37 AM, Anton Aylward wrote:
But end users are another matter. End users make mistakes. End users might delete or overwrite a file and want to get back the previous version.
True. I seldom even bother to back up the OS on a machine. Just selected directories like /etc and /home, etc. And therefore, those are the directories I shadow with SpiderOak. (I use SpiderOak in the "backup" mode with rollback capability.) I've only had a few instances of actually needing that capability, but I'm perfectly happy to have it on another machine. Snapshots on the same drive always struck me as magical thinking.

-- After all is said and done, more is said than done.
On 03/24/2016 03:06 PM, John Andersen wrote:
Snapshots on the same drive always struck me as magical thinking.
Indeed. The times I need backups most are the times that the disk as a whole is unrecoverable. Snapshots would be of no use.

RAID anybody? Well, LVM makes mirroring easy. I can mirror LVs rather than the whole disk.
On Thu, Mar 24, 2016 at 2:47 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 03/24/2016 03:06 PM, John Andersen wrote:
Snapshots on the same drive always struck me as magical thinking.
Indeed. The times I need backups most are the times that the disk as a whole is unrecoverable. Snapshots would be of no use.
RAID anybody? Well, LVM makes mirroring easy. I can mirror LVs rather than the whole disk.
LVM raid is rather badass from a flexibility perspective. It doesn't offer the feature set of mdadm however so you still have to evaluate what your use case requires. Certainly being able to create an LV that's raid1 and another that's raid5 or even raid6 is very useful if you're regularly having to spin up, grow, and tear down volumes, because doing this with mdadm and then LVM on top is a pain.

On the feature list, but with no code yet, for Btrfs is per-subvolume raid; conceivably it could be per directory or per file, similar to compression and eventually encryption. There's no format change needed for this; it's just a matter of having a way to associate a redundancy profile with a subvolume, directory or file, which could just be an xattr. And then the allocator puts the file into a chunk with that profile type, which is already how the allocator works.

-- Chris Murphy
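For instance, two LVs with different raid profiles in the same VG (names and sizes illustrative; raid1 needs at least two PVs, raid5 with two stripes at least three):

  lvcreate --type raid1 -m 1 -L 10G -n mirrored vgmain
  lvcreate --type raid5 -i 2 -L 30G -n striped vgmain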
On Thu, Mar 24, 2016 at 2:57 PM, Chris Murphy <lists@colorremedies.com> wrote:
[snip]
On the GUI side, Cockpit, the web management UI for Fedora Server, is going to leverage LVM raid soon. And ground-level support is in the RH/Fedora OS installer for LVM raid also, so we might see that in the next release or two depending on how tricky the UI/UX stuff gets. This is a much more top-down sort of raid build than with mdadm, so the UI will have to account for this.

-- Chris Murphy
On 03/24/2016 04:57 PM, Chris Murphy wrote:
LVM raid is rather badass from a flexibility perspective. It doesn't offer the feature set of mdadm however so you still have to evaluate what your use case requires.
I was thinking of a very simple case that I use, where rather than making a snapshot per se, I simply mirror a couple of LVs of my 1T drive onto a smaller drive. Hmm, I could, and I need to experiment with this, "soft" mirror onto a USB stick. Hmm, an LV snapshot, maybe. Need to look into this.
On Thu, Mar 24, 2016 at 12:37 PM, Anton Aylward <opensuse@antonaylward.com> wrote:
On 03/24/2016 02:07 PM, Chris Murphy wrote:
In my first post on this thread I didn't suggest a Btrfs /home for example. My two recommendations included retaining XFS for /home.
Every analysis I've done of the advantages of BtrFS for myself leads me to think that when it's finished, when it's done all the deduplication bits and more, it will be of more use for /home than for the RootFS.
It depends on the use case. On ARM it's common to depend on SD cards, and Btrfs will never pass corrupt data to user space. Corrupt user data is a kind of data loss, not good. But corrupt system files can lead to crashes and more corruption of both system files and user data. So, really not good.
But end users are another matter. End users make mistakes. End users might delete or overwrite a file and want to get back the previous version. Snapshotting of the user space is a major issue. That's why we have http://snapper.io/manpages/pam_snapper.html
Yes, but this can be mitigated with statelessness also, similar to a mobile device, but with better and more plain-language granularity. Instead of rolling back by date, just have one rollback state for different domains: the system itself including non-app updates; apps; system settings; user settings; user data. The FHS does not help us with this separation; it's actually a hindrance.
I have a similar argument for "shared services", ISPs: that it is better to put user files on an SSD than the core binaries of {/usr,}/bin and {/usr,}/lib, since the shared binaries are going to be loaded once and then stay resident, but the real 'churn' is with the users' application data and they want that loaded fast.
Sure, or just use something like lvmcache or bcache and let the technology figure out what files are hot vs warm vs cold, so the optimization is dynamic rather than fixed. Sysadmins want to be able to template this stuff so they can get a bunch of servers or VMs up and running without having to do so much customization while still getting optimization.

-- Chris Murphy
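A sketch of the lvmcache route (assumes the SSD partition is already a PV in the same VG; names and sizes illustrative):

  lvcreate --type cache-pool -L 50G -n cpool vgmain /dev/sdc1
  lvconvert --type cache --cachepool vgmain/cpool vgmain/home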
On 03/24/2016 04:48 PM, Chris Murphy wrote:
It depends on the use case. On ARM it's common to depend on SD cards, and Btrfs will never pass corrupt data to user space. Corrupt user data is a kind of data loss, not good. But corrupt system files can lead to crashes and more corruption of both system files and user data. So, really not good.
The InfoSec concept of "Integrity" revolves around 'correctness'. Yes, obviously a file that has been mis-written, whatever the cause, is digital gobbledygook. Perhaps the checksums don't make sense, though they might be used to correct the data. Pinholes in the rust. Whatever.

But there are other forms of a failure of Integrity. There's a whole class of 'finger problems' that corrupt the data without corrupting its digital integrity. The best recording mechanism in the world can't do anything about that. It's an "Oh my ${DEITY}! I've just overwritten the annual report with my resignation letter!". Or perhaps not that catastrophic. Maybe you changed your mind about something you wrote and the 'undo' only undoes this edit session; what you want is the version you wrote the day before yesterday. And this isn't VMS.

It's not that humans are fallible - well, they are, but that's not my point. It's that they are fickle and changeable. That's why I think snapshotting of user space is important. It's a human-factors issue, not a technical issue.
On Thursday, 2016-03-24 at 17:00 -0400, Anton Aylward wrote:
But there are other forms of a failure of Integrity. There's a whole class of 'finger problems' that corrupt the data without corrupting its digital integrity. The best recording mechanism in the world can't do anything about that. It's an "Oh my ${DEITY}! I've just overwritten the annual report with my resignation letter!". Or perhaps not that catastrophic. Maybe you changed your mind about something you wrote and the 'undo' only undoes this edit session; what you want is the version you wrote the day before yesterday. And this isn't VMS.
That's what I thought btrfs was for... on /home it would be very useful. But snapshots are timed events, so they might not catch this.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 24 March 2016 at 22:32, Carlos E. R. <robin.listas@telefonica.net> wrote:
On Thursday, 2016-03-24 at 17:00 -0400, Anton Aylward wrote:
[snip]
That's what I thought btrfs was for... on /home it would be very useful. But snapshots are timed events, so they might not catch this.
Not necessarily (in fact, IIRC, on all current openSUSE distributions not at all; snapper only activates on YaST actions, zypper actions, and I think maybe booting, but that could be something I hacked together for myself and forgot about). snapper has a number of other options to trigger based on user activity, such as pam_snapper http://snapper.io/manpages/pam_snapper.html to create a snapshot for each user login, and with non-root users you can easily set up snapper to do whatever you want with your home directory: https://lizards.opensuse.org/2012/10/16/snapper-for-everyone/

Conceptually, it's as simple as setting up a subvolume in btrfs, creating a snapper config for that subvolume, and then telling snapper to do its thing whenever you want it to take a snapshot. (snapper snapshots single subvolumes and doesn't cross subvolume boundaries; hence the default configuration of '/ root' and all the default subvolumes openSUSE has, to make sure those '/ root' snapshots only contain the operating system and no temporary/transient data.)

If you want to be particularly careful with a specific file, something like inotify could be used to make sure that snapper always takes a snapshot whenever a certain file is changed (note: if the file is changed a lot, you might need to tune snapper's cleanup routines accordingly ;))

Regards, Richard
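In commands, assuming /home/tom is already its own subvolume (the config name and paths are illustrative):

  snapper -c home-tom create-config /home/tom
  snapper -c home-tom create --description "before cleanup"
  snapper -c home-tom list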
I haven't read the beginning of the thread yet, but I would just like to voice a feeling that personally I do not like snapshots at all. I have used them in LVM (thin) but I consider them risky. They are really not my style. They are basically volume or partition operations, in a general sense, that you use on a day-to-day basis. I know Windows does it too, and yes, I have used Windows snapshots and I have never really liked them. I can see the advantage of making filesystem backups without disrupting e.g. databases, of course, but still: every one of those things could be designed around with better software that feels more like a real backup, or an investment in software systems that can actually maintain their congruent state.

System restore seems to be a call for "oh, let's forget about that, people can revert anyway". "Who cares if the system is fail-safe, we have a fail-safe right there!" "Our programs fuck up your computer? Don't worry! You can always get back to a previous state!" Time machine!

It was also my gripe with the only usable method for me to use Git (on a side note): you create backups of your git+data folder because reverting a backup is easier than reverting an actual destructive or possibly destructive action performed in Git. And I don't like that model. It gives incentives for bad software products because a disaster-recovery mechanism is in place. You can easily see how much more shoddy Windows has become since, for example, Windows XP/Windows 2000, with Windows 10 being a disaster full of bugs that bring it on par with a Kubuntu 15.10 daily update breaking a system by rendering it unbootable (just like yesterday). So people are like: oh, we can't really make the SYSTEM function well, we will just ensure that any big error can easily be recovered from by going back in time.

So you get a system that does what people can't do; you usually can't go back in time. You usually have to live with your mistakes, in that sense, and try to correct them or create a working state again. I wonder if the Universe itself contains a state-based backup. What does it do when something breaks down? Some books say that a person can go back in time slightly when they experience a near-death experience, so as to divert a course of action that would lead to their death, if they don't want to die yet (on the 'other side'). But in general I think you can't do that, and your efforts should be on a well-designed, safe, and invulnerable system that doesn't just blow up when you press the wrong button. (Like the bombs at the attacks in Brussels did. One went off after the fact, when police had already cleared the area.)

Git itself is the most unusable system I have ever used. It tops Linux by a fairly large margin as well. Git is like an NGO: 95% of resources go to upkeep and maintenance, and only 5% gets put to actual use ;-).

So that's just what I am saying: snapshotting is in essence not a satisfactory thing, just a roundabout way to keep a system functioning that is otherwise horribly broken. Instead of fixing the system, you ensure that it can't hurt you anymore - so bad. Like sandboxing: Windows doesn't have sandboxing unless you buy some commercial tool. Sandboxing is one of the easiest things ever to implement and achieve; it's actually easier than letting software impact the system directly and then dealing with the resulting mayhem.
Every package should be its own circle, and these circles may overlap; update-alternatives is a way to achieve that, choosing the circle that lies on top (see the sketch at the end of this message). Packaging in general is not perfect, but it does allow for a definition of what the complete boundary of a collection of files belonging to the same unit is. Maybe normally they don't keep to themselves and only affect their own circle: the entire system is affected. But nonetheless, conflicts that would be allowed in a circle-overlap system are now simply resolved. It is nearly similar in the sense that the circles now do not overlap, but you just ensure that there aren't a great deal of circles that actually touch each other. If one offends, you throw it out and you have a conflict.

Packaging by definition is a revertible thing: any action can normally be undone, and so the system in principle is always in a consistent state. That's the whole idea of it, anyway. It nears the idea of a sandbox, because actions and changes are getting registered, whereas some "make install" script is the opposite of a sandbox. And if the tools are right, the methods used are proper, and the definitions are okay, an empowered user would be able to solve any difficulties that may arise, provided the dependencies etc. are right and there are no errors in the packages themselves. It all stands on that. Because instead of having a million files, you only have a thousand packages, like that.

But now systems are allowed to go bankrupt because we can go back in time anyway. Tools to revert (or progress) to a new stable state are no longer needed. Just go back in time. Problem solved. You mess up? Go back in time, it is easier than actually trying to solve a problem.
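As a small illustration of the update-alternatives mechanism mentioned above (the 'editor' name and candidate paths are hypothetical; syntax per the standard update-alternatives tool):

    # register two candidates for the generic 'editor' name;
    # in automatic mode the higher priority (50) wins
    update-alternatives --install /usr/bin/editor editor /usr/bin/vim 50
    update-alternatives --install /usr/bin/editor editor /usr/bin/nano 40

    # show which 'circle' currently lies on top
    update-alternatives --display editor

    # interactively choose a different provider
    update-alternatives --config editor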
On Thu, Mar 24, 2016 at 6:35 PM, Xen <list@xenhideout.nl> wrote:
And I don't like that model. It gives incentives for bad software products because a disaster recovery mechanism is in place. You can easily see how much more shoddy Windows has become since, for example, Windows XP/Windows 2000.
I don't see at all how 10 is more shoddy than XP, and seeing as all of these versions have the same ridiculous update mechanism, it doesn't seem related to the quality or reliability of the OS.
So people are like, oh we can't really make the SYSTEM function well, we will just ensure that any big error can easily be recovered from by going back in time.
Versus starting over with a clean installation? That is the original rollback.
So that's just what I am saying, that snapshotting is in essence not a satisfactory thing and just a roundabout way to make a system function that is otherwise horribly broken. Instead of fixing the system, you ensure that it can't hurt you anymore - so bad.
Hmm, broken state or reinstall. You get away with this when the testing is monumental, like what Apple does, who have no reversion options for updates. OS X, you update to a sub-version, that's it, you can't undo it. But they also do a metric ton of testing. It's so complicated now that they've even expanded their pool to public beta testers. For iOS, there isn't even a revert possible. You can only reset, which obliterates apps, settings, and user data, but not the most recent update you applied. Nah, I'll take a snapshot and wait a week, thanks.

Another way forward is Fedora's atomic/rpm-ostree project. And CoreOS has a similar strategy. These are specifically versioned trees which have specific binaries in them. Anyone who deploys a particular tree version has the identical system binaries as anyone else with that tree version, compared to the very non-deterministic situation we have with package-managed systems.
You mess up? Go back in time, it is easier than actually trying to solve a problem.
Sorry, lame and unconvincing argument against snapshots and rollbacks. Your method basically depends on the user with a broken system somehow communicating their misery to developers, who then do a better job. Users getting mad causes software quality to improve? It's not how things work.

-- Chris Murphy
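For reference, a minimal sketch of the rpm-ostree workflow Chris describes; the commands are from the rpm-ostree CLI, though the exact behaviour depends on the deployment:

    # show the currently booted tree and any other deployments
    rpm-ostree status

    # download and stage the next version of the tree; it takes
    # effect atomically on the next reboot
    rpm-ostree upgrade

    # if the new tree misbehaves, boot back into the previous one
    rpm-ostree rollback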
It seems you have a problem with people giving their opinion.

Op 25-3-2016 om 04:27 schreef Chris Murphy:
On Thu, Mar 24, 2016 at 6:35 PM, Xen <list@xenhideout.nl> wrote:

And I don't like that model. It gives incentives for bad software products because a disaster recovery mechanism is in place. You can easily see how much more shoddy Windows has become since, for example, Windows XP/Windows 2000.

I don't see at all how 10 is more shoddy than XP, and seeing as all of these versions have the same ridiculous update mechanism, it doesn't seem related to the quality or reliability of the OS.
Windows' update mechanism (I take it you mean Windows Update) has nothing to do with my argument, as it is neither to do with the disaster recovery (if that were a reason), nor does the update mechanism say everything about the OS. So your argument doesn't follow. The shoddiness of the OS (which you contest, with no good reason) and its evolution was posited as a result of not having to make good software because you can revert.

Granted, there may be many more reasons for Microsoft's recent change in how they develop their software. But they've also gone in the direction of Linux ;-). The release cycle has become much shorter (faster), and current-day Windows versions are released with the idea of "We'll fix it later". If that doesn't sound like "If something goes bad, we'll just tell them to use System Restore", I don't know what does.

From my recollection, Windows has gotten more recovery options (in the boot environment of the installer), but I don't know really. They seem to depend on automatic recovery options a lot. My experience, though, has vastly changed since XP; I don't know about yours. But the update mechanism did change, because since Windows 8 (or 7) the system will install updates during shutdown and bootup, which it didn't do in XP. So I don't really know what your argument is based on, but from my perspective Microsoft Windows did come to rely more on the system restore functionality now. And logically, it is a pretty safe argument that when a system restore is in place, the requirement for software to always function well becomes less. It is a pretty simple argument, you know.
So people are like, oh we can't really make the SYSTEM function well, we will just ensure that any big error can easily be recovered from by going back in time.

Versus starting over with a clean installation? That is the original rollback.
What are you stating? It is pretty clear that if my argument is sound, an expensive rollback would give more incentive to write good software, and an inexpensive one would consequently give less incentive to write quality software, because the pain of messing up becomes less.

Let me spell it out for you, then.

Cost of system failure is X. Cost of ensuring proper operation is Y. Cost of rollback is Z.

Suppose X is a constant, Z is a variable depending on e.g. the availability of backups or snapshots, and Y is a variable reflecting how good something has to be. Then with high Z, Y can also be high, because Z and Y are both costs spent on avoiding X. With low Z, the ease of avoiding X no longer warrants a high expenditure on Y, because even though with low Y the risk of X goes up (the chance goes up), the actual impact of it goes down, because you can more quickly recover.

Therefore, treat risk as the calculation R = P * X, where P is the chance of X happening and X the cost of it happening. P might at first be in an inverse relationship with Y, so we could suggest that P ~ 1/Y, in terms of the actual expenditure of Y we make: we spend more on Y, P goes down; we spend less on Y, P goes up. "Chance" then relates to the frequency of disruptive events.

But the actual COST of a disruptive event diminishes greatly when recovery is easily done. More disruption but faster recovery = about the same thing. So actually X is not constant, and we could say it is related to Z; we might even say they are linearly related: X ~ Z. The cost of disaster is actually determined by the cost of rollback, and with easier rollback, RISK goes down: lower Z means lower X, with the same P, so lower P*X means lower R. This is intuitive and logical. Easy rollback, lower risk.

But since the expenditure Y was warranted by the importance of a functioning system and was always spent to GET this functioning system, and this yields us some benefit B, the ultimate benefit could be B(net) = B - R. We have a certain benefit, but it gets reduced by failure.

Y may be related to B as long as Y is meant for introducing features and so on: core functionality, but not core reliability, which is like another dimension to it. It is on another axis. So we could split it up into a Y(functionality) and a Y(robustness). Functionality always needs to be there, and I don't think it takes up the lion's share of development. However, whatever we may think, Y(robustness) is warranted by a need to reduce the cost of failure. If there were no cost to failure, there would be no need for any robustness at all, because apparently either we don't need it to keep working, or we don't lose money (time, energy, ...) when it fails.

So if the call to spend on Y(robustness) is warranted by the risk R of system failure, then a lower risk will want less money to be spent on robustness. It is just less needed. You can get the same ultimate benefit with less money, because recovery is so easy that it doesn't matter if something fucks up really badly, on a regular basis. B(net) = B - P*X.

If we have the same amount of money available and we need to spend less on Y(robustness), it means more is available for Y(functionality), which means B goes up, and we get a sort of runaway system where it can no longer be warranted to spend on robustness (to the extent that recovery still has a cost), because with the same money we can also get more functionality! (Just assuming these are the only two dimensions.)
M = money available = Y(r) + Y(f).

B is a function of Y(f); P is a function of Y(r); X is a function of Z. Y is expenditure, Z is the cost of recovery, X is the cost of failure, P is the chance of failure.

That means the ultimate net benefit is a function of Y(f), Y(r), and Z, in some way. We could simply suggest that B(net) = a*Y(f) - (b/Y(r)) * c*Z, where a, b, and c are just some constants, and we assume linearity everywhere. Disregarding the constants:

B(net) = Y(f) - (1/Y(r)) * Z = Y(f) - Z/Y(r)

Lower cost of recovery, higher benefit. Higher expenditure on robustness, higher benefit. BUT total expenditure on development is Y(f) + Y(r), and higher expenditure on functionality also means higher benefit. So there are two equations:

B(net) = Y(f) - Z/Y(r)
M = Y(f) + Y(r), i.e. Y(r) = M - Y(f)

Of course this would really be a dynamic system with differential equations, but. Substituting M - Y(f) for Y(r) yields:

B(net) = Y(f) - Z / (M - Y(f))

And the other way around:

B(net) = M - Y(r) - Z / Y(r)

Discounting M now:

B(net) = -( Y(r) + Z / Y(r) )

As Y(r) goes up, benefit goes down (because Y(f) goes down), but it also goes up (because the cost of failure goes down). However, I think it would be easy to show (if you had full data and real equations for this; not sure I could do it, though) that if Z is very low, the mitigating term Z/Y(r) becomes less important, and the detrimental effect of Y(r) becomes more prevalent: you spend less on features and more on robustness, while that spending has no direct economic benefit, because disaster recovery is so cheap.

Now, the more fine-grained this thing is, the easier it will probably get to recover from something. However, I conjecture that fine-grained control could actually increase the enjoyment of the system to such an extent that it becomes a feature in itself. Because it becomes like a version history control system, right? It becomes a versioning system. It basically starts to mean that all files are getting versioned. Fine-grained snapshotting is actually a crude but effective, and perhaps very fun and pleasant, VCS.

When THAT happens, resources are actually freed once more for other stuff. It could empower people so much that it's a form of robustness in itself as well. No longer disaster recovery, but a robust system. And what I mean is that the joy of that could prompt people to become more productive on their Linux systems, also enhancing development and also creating the software quality that I want so much. Because the more enjoyable Linux is, the more enjoyable it can become. Right now we're still stuck in something that really slows down development speed and pace. A really good fine-grained filesystem-level versioning system could actually make it a bliss to use in that sense (if the interface to it were any good, i.e. integrated into Dolphin etc.).

But apart from that conjectured benefit and relationship: B(net) = M - Y(r) - Z/Y(r). Until the versioning benefit makes a difference (and you'd really need file-level control for that, and very regular micro-snapshots, etc.), with a low Z the negative term Y(r) becomes much more important than the mitigating term Z/Y(r), and spending money on robustness would become (or already be) a DETRIMENTAL factor in the total net benefit of your system.

But the thing really is more interesting than that, because a very low-cost (or rather, highly functional) recovery system is in itself a boon that can speed up development of all facets of a system, including robustness, etc.
Because the more you are at ease, the more space you have in yourself to look at the details.

So from this I conclude two things:
- if a recovery system is only to be used for full system recovery, it will lessen the incentive to make quality software;
- if a recovery system is fine-grained enough and usable enough, it will heighten the expediency of development.
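As an aside, the toy model above can be pushed one step further. Taking B(net) = M - Y(r) - Z/Y(r) at face value and maximising over Y(r) (a sketch only; the model fixes no units or constants):

    dB(net)/dY(r) = -1 + Z/Y(r)^2 = 0   =>   Y(r)* = sqrt(Z)

So in this model the optimal spend on robustness grows with the cost of recovery, and as Z goes to zero, the optimal Y(r) goes to zero with it, which is exactly the incentive effect being claimed: the cheaper the rollback, the less the model rewards spending on robustness.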
So that's just what I am saying, that snapshotting is in essence not a satisfactory thing and just a roundabout way to make a system function that is otherwise horribly broken. Instead of fixing the system, you ensure that it can't hurt you anymore - so bad.

Hmm, broken state or reinstall. You get away with this when the testing is monumental, like what Apple does, who have no reversion options for updates. OS X, you update to a sub-version, that's it, you can't undo it. But they also do a metric ton of testing. It's so complicated now that they've even expanded their pool to public beta testers. For iOS, there isn't even a revert possible. You can only reset, which obliterates apps, settings, and user data, but not the most recent update you applied.
You mean when software is very high quality, you can get away with having no recovery. That's what I'm saying, but in reverse: if you have good recovery, you can also get away with lower quality. And that is what it does.

So I'm not saying good recovery is bad given the current status quo of the system. I was talking about the future. I'm not saying throw away what you have. I'm saying: don't make it your holy boon, because you'll lose the appetite for anything else. A LOT in the Linux world is focused on BTRFS now, from what I gather. But if you make that the height and pinnacle of your development, and start to develop all kinds of systems around it, I see no benefit in that. Because you're starting to identify with the thing that doesn't do anything useful except that, if something breaks, at least you can always go back. The focus should be on functionality or benefit foremost, and not on the means to ensure that you don't lose it. Because if you have nothing, not losing it doesn't make a difference either.

So I would just like to say: make sure your updates DO in fact not break systems. If testing is a problem, don't update kernels constantly. Don't push updates that can break things or are uncertain. Focus on improving the robustness and stability, resilience and splendor of the boot process. I think Linux has a LOT of ground to gain there. It could be made so much simpler and easier to understand, as well as (also because of that) more resilient to failure. If you have a system in place that pretty much ensures that nothing goes wrong (and is also very configurable), then the nerves go out of it, a lot of time is no longer wasted on fixing those issues, and you would even need to do less testing as well.

DO try to create perfection, and don't just rely on fail-safe mechanisms because you consider that things are inevitably going to break.
Nah, I'll take a snapshot and wait a week thanks.
Another way forward is Fedora's atomic/rpm-ostree project. And CoreOS has a similar strategy. These are specifically versioned trees which have specific binaries in them. Anyone who deploys a particular tree version has the identical system binaries as anyone else with that tree version, compared to the very non-deterministic situation we have with package-managed systems.
Reminds me of something... reminds me of Git, lol. Yes, like I said: versioning. I think versioning is a great boon to any system.

Right now personally I have two issues:
- I don't like to use Git for the majority of my files
- I don't have anything else ;-)

If you really had a GOOD versioning system that you could use for EVERYTHING you wanted, including remote storage, that would be awesome, right? I guess I like the tree system better, as long as you can still install what you want too.
You mess up? Go back in time, it is easier than actually trying to solve a problem.

Sorry, lame and unconvincing argument against snapshots and rollbacks. Your method basically depends on the user with a broken system somehow communicating their misery to developers, who then do a better job. Users getting mad causes software quality to improve? It's not how things work.
That is just the fallacy of your open source mindset. If you created good software in the first place, there wouldn't be broken systems all the time. So basically you're saying: nothing will work anyway, just empower the users to recover from all of the vary and manious (many and various ;-)) errors that are going to consistently keep happening.

You're basically also equating madness to feedback, or feedback to madness. If I say "Thing A and B don't work, and C just got worse", that is not madness, that is information. Also, this system you describe is already in place: it is the bug trackers.

Genuine listening by developers to users has never really happened yet in the open source world, though. In general, users are seen as co-workers and participants in the development of the system. Hence, they are seen as someone having a duty, not someone having a right. Or they are seen as someone having a right that can only result from them performing their duty. This implies that using the software is not a good enough reason to have your voice heard. This then implies that reports on the congruence between what the system does and what it should do are basically considered rather unimportant. This is a loss of feedback, and hence of information that would otherwise inform and inspire the development process. I say the system and the cycle are bugged and not operating very well.

If users are only seen as "those people who complain", as you just described them, then you're not really going to bother with them, will you? And if they say "well, I don't like this", you go "well, it's your fault". The user is always wrong. One of Linux's founding principles ;-). It's always their fault, because they should always have put in more effort to use the system right. No amount of effort is ever enough, because theoretically every problem in existence can be solved by just throwing infinite amounts of man-hours at it. If a minor feature doesn't work, people say (in a certain sense): just learn this language, and this system, this debugging tool, this profiler, and then, after you've spent 20,000 hours learning all of that, you will be able to fix that minor error you just reported. Good luck!!!! That's really a way of mocking someone, you know.

But anyway. I don't see how users not getting mad causes software to improve either, if there is no incentive to do it. And I'm not saying anger should be the reason to change everything, but it can be a good reason to change something. Your customers (users) not getting their work done should be the PRIME reason to fix or improve something. The reality of their experience should be what you are after in the first place. If you don't care about that experience, then what the fuck are you developing for? Madness only arises because people feel you have taken up a responsibility you are not living up to, OR are not acknowledging that you have it. Madness only arises because people are not getting heard. If you do listen to your users, provide easy ways for them to give feedback, listen to complaints, etcetera, people don't stay mad. People usually just want their issue to be understood, because being understood means it is going to get worked on without them having to do anything else at all, without any sort of further intervention being required of them.
I once bought a piece of software, then debated with the author about why it sucked. He said "I don't agree with everything, but what you just said struck a personal chord with me", and I left it at that, knowing he would do the right thing. I haven't looked back at it since. I gained the right to be critical by paying for his software. As a wannabe user I had no right to complain, but as a paying customer I did. So I used it for that, basically.

In Open Source it is different if it is free, and really the only people who have the right to say something are the ones who think about a solution. You have to approach it from the corner of someone willing to offer thinking power. But nonetheless, that contribution is often neglected because it may come across as judgemental. You say "I see this and this wrong" and they go "WTF are you talking about, you moron?". But if you can't talk about wrongness, you also cannot talk about rightness. I have had this argument here before ;-). You must be willing to admit faults before you are able to fix them. You first have to know where you are before you know which direction to go. And a looot of people don't want to hear that there's anything wrong with their software.

But regardless. That's how it works. That's how it works in Open Source. Or usually, how it doesn't work, because of people's egos.
Two more things, I guess:

1. You might just as well say that benefit is a function of functionality times the amount of time it is available. So you could say that B = F * A, with everything being self-explanatory there, and A going down with system failure but being pushed up by speedy recovery.

2. I'm not saying it's a good thing if people have broken systems. But solving a problem using recovery (going back in time) does mean you never learn how to solve the problem by going forward. My Git strategy of reverting backup copies of a working directory ensures that I never really learn how to fix the most common errors unless I give it a lot of attention and put in a lot of interest. Basically, it ensures that I stay as bad at the system as I was before, unless I pick up a tutorial and really start learning stuff.

Normally, when problems are capable of being solved, you solve them by working out a solution ahead of time, in the sense that you are working towards the future and learning those problem-solving skills along the way. If recovery is so cheap that it is always faster to just go back in time and try something else (for instance), you lose all sense of wanting to move forward in order to return to a consistent state. Reverting requires no knowledge whatsoever (from that viewpoint). Reverting is a technique that evades the problem. You don't deal with the problem, you go around it. Hence you do not learn.

And I'm not saying it is good to be frustrated by Linux on end because you can't get something working, and if productivity is more important in the short run, you may prefer to just get it done quickly and not have to worry about it. But for the developers, that can't be the reason to have it this way. A robust system with pricey recovery is, in that sense, preferable to me over a non-robust system with speedy recovery. Why? I don't like it when things break in the first place. I don't consider the lack of sandboxing in Microsoft Windows a good reason to continue to accept that I have to use System Restore for it, and I don't consider System Restore a good reason for the continuing situation that there is no sandboxing available. I don't think it's a good excuse. That's all.
Ooh, a thread about development expediency and quality... I wonder how much hate I'm going to get for sharing my feelings on this topic :)

On 25 March 2016 at 08:31, Xen <list@xenhideout.nl> wrote:
<a lot of stuff about quality and development expediency>

You make some very interesting points which sound very reasonable, but I disagree with most of them. To try and keep my points succinct, I'd summarise them as follows:

1) The availability of rollback functionality for users' systems-in-the-field has no real-world impact on the development pace of release-based distributions
2) Software quality in release-based distributions benefits from longer release schedules
3) In the absence of software release schedules (i.e. in a rolling release), automated testing can provide proactive assurance that all key use cases are not broken by rapid development processes
4) Tying that automated testing as a gate that must be passed before any software releases slows development to ensure quality does not suffer
5) Using that same automated testing gate improves software quality in release-based distributions also
6) System rollback with a rolling release is an extra safety net which helps ensure users don't suffer from any deficiencies in the automated testing
7) System rollback with ANY distribution is, in my mind, essential, because there are thousands of ways a user or 3rd-party software can ruin a system

I'll expand on these individually.

1) Look at openSUSE. We've had snapper/btrfs in the distribution for years. We've had it as a default since 13.2. And yet the trend for openSUSE releases has been to *extend* the release cycle, from the previous 8 months to the current 12 months. Just because we have snapper by default doesn't mean we speed up software development.

2) I think this can be accepted without too much argument. The longer something is developed, the more testing, the more time and effort we expend on polishing it. The balancing act here is ensuring that when you actually release, the final product is still up-to-date enough that it is interesting and useful to the people who want to use it. This was one of the key motivators for the direction Leap is taking (more stable/unchanging as a general goal, and using an Enterprise codebase to achieve that), while we have Tumbleweed as the counterbalance without a release schedule.

3) openQA really is magic. Every single build of Tumbleweed gets tested with over 100 different scenarios before it is released. Software RAID, encrypted LVM, dual-booting with Windows 8, filesystems, KDE, GNOME, LVM with RAID 1, memtest, minimal X, split /usr, textmode, UEFI, UEFI with Secure Boot, UEFI with USB booting, updating from 12.x and 13.x, network installs, live CDs, and more are tested automatically, with image- and log-based automatic assessment of the results. i.e. when Tumbleweed ships, openQA knows that every screen it cares about *looks* the way we want it to look for a user, and every command it typed *acts* the way we want it to act. That is broader coverage ensuring the functionality of our Linux distributions than most corporate manual QA departments can manage with several weeks of human testing... and openQA does it every day, sometimes twice a day.

4) Tumbleweed uses openQA as an integrated part of the software development process.
Even before any new package hits any distribution, incoming submit requests are 'staged': the Build Service makes 'what-if' DVDs that contain the changeset from the submit request, and then openQA does brief testing to ensure the OS is still valid. If it fails, the package isn't allowed anywhere near any of our distribution repos. Then, only when it is accepted, full system validation kicks off with the breadth I described in 3).

5) For Tumbleweed, such a process as 4) is necessary in order for the rolling release to be viable, but it's proven itself so effective at ensuring quality that it's also used not only by Leap but also by the SUSE Linux Enterprise development teams. Even with a 'traditional' release schedule which provides time for manual QA, it's beneficial to have a constant picture of hundreds of different installation, configuration, and production scenarios. openQA can keep track of that picture for every single development build: not only milestones like Alphas and Betas, which undergo manual testing, but all the intermediate builds that occur as things rapidly change in each distribution's OBS projects.

6) So, yes, in one sense you're right. Tumbleweed moves fast and relies only on magical automated testing, so system rollback is a pretty good idea for Tumbleweed users in case something slips past the magical automated quality-testing robot that is openQA. That said, as an avid Tumbleweed user, I have to admit that in the last 2 years the only time I've had to use snapper is when *I* have screwed up my machine doing stuff that *I* should not have done... so I am more dangerous than a rapid rolling development model.

7) At the end of the day, the discussion of quality is actually mostly irrelevant when it comes to a discussion about snapper:

- There are ~2500 packages in SUSE Linux Enterprise
- There are ~7500 packages in openSUSE Leap and Tumbleweed
- There are *tens of thousands* of packages in OBS. These have no testing. They are not integrated with our distributions. Many of them exist in order to be developed for future versions of SLE, Leap, and Tumbleweed. There should be no expectation of 'quality' at all. They build, they are published, and people use them. That is risky, and yet people do it every day.
- There are *millions* of other open source third-party packages. These also have no testing; they are not integrated with our distributions. They might not even go through the most basic of checks which OBS does as part of a build. Lots of software languages now have their own package managers and repositories which can effectively 'sideload' software onto your machine, bypassing your system package manager (npm, gems, etc). People don't care, want to get something done, and use them anyway. That is even more risky, and yet people do it every day.
- Offerings like CoreOS/atomic/containerisation all try to offer solutions to this, but the reality is they are far, far away from being a comprehensive fix. Tools like Machinery (http://machinery-project.org/) can identify unpackaged files and changes to files from packages, and speaking from experience, the situation out there in the real world is a messy, ugly place full of local hacks, forgotten changes, rogue software, and mess.
- In addition to the thousands of OBS and 3rd-party packages out there doing god knows what to the machine, at the end of the day, users are human. And humans are fallible. People screw up.
Even a 100% perfect quality Linux distribution can be easily ruined by one 3rd-party programme or one wrong command typed by one mistaken user. Good backups are great in disasters, but mistakes happen every day. You need system rollback regardless of how good the software quality is. The real world demands it.
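To make the closing point concrete, a minimal sketch of what recovery with snapper looks like in practice (openSUSE's default root configuration assumed; the snapshot numbers are illustrative):

    # list existing snapshots of the root configuration
    snapper list

    # see which files changed between the pre/post snapshots of a bad update
    snapper status 42..43

    # revert the files changed between those two snapshots
    snapper undochange 42..43

    # or, on a btrfs root with the default layout, roll the whole system
    # back to a known-good snapshot and reboot into it
    snapper rollback 42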
Hi, thank you, mister Richard Brown ;-). I would first like to respond to your short list of points, since that is easiest to address in the context of what I said before and the argument I made.

Op 25-3-2016 om 09:41 schreef Richard Brown:
1) The availability of rollback functionality for users' systems-in-the-field has no real-world impact on the development pace of release-based distributions
2) Software quality in release-based distributions benefits from longer release schedules
3) In the absence of software release schedules (i.e. in a rolling release), automated testing can provide proactive assurance that all key use cases are not broken by rapid development processes
4) Tying that automated testing as a gate that must be passed before any software releases slows development to ensure quality does not suffer
5) Using that same automated testing gate improves software quality in release-based distributions also
6) System rollback with a rolling release is an extra safety net which helps ensure users don't suffer from any deficiencies in the automated testing
7) System rollback with ANY distribution is, in my mind, essential, because there are thousands of ways a user or 3rd-party software can ruin a system
First of course, at this point you equate robustness with the robustness of the distribution as a whole, as an integrated thing; I'm not sure you're also talking about the software packages individually, since normally you'd not be greatly responsible for them.

1. I had not ventured that rollback had any impact on pace. In fact, I had said that a sufficiently fine-grained and usable system (I do not think snapper is really usable for that) would increase the general pleasure people could have in using Linux, which might enhance the user experience of the environment, which in turn could generally make people more productive and happy. I was making no allusions to the actual PACE of any distribution's development; you would really need to specify what you mean by that and how it relates to quality at all.

2. Sure, of course.

3. I can't say automatic testing is a bad thing. Although I hate writing unit tests myself in the Java language, due to the strange structure they must have, I do agree with the joy of seeing them run and succeed ;-). I have also used them successfully in simple Bash scripts, where things can break so easily that everything seems to be made of broken porcelain already. This form of automation is amazing, of course, and it seems like a really great system. I did not know it was so extensive before.

Nevertheless, it is clear from actual people's experiences that many things go wrong in software packages: maybe not between them, but certainly inside of them. I mean, you just have to look at this list. I see reports of things not working in Chrome and in Firefox, and as a current Windows user, I never have to deal with that. Your allusions are to the quality of the distribution, and that's fine. But I consider Linux itself very broken, because I simply know that the moment I install it again (and I might do so shortly, just to be able to use Calligra :p) my life will become more painful. My current version of Windows is a pain, but it is not nearly going to be as painful as Linux, no matter what version I try or what distribution I use. I am going to be struggling with commands I don't know the syntax of. I am going to try to repeat tasks I have done before but since forgotten. I am going to be confronted with a great lack of menu-based interfaces on the command line -- the way "make menuconfig" was always excellent, but there is not much like it today. My favourite is the program IPTRAF; it just works. I use a firewall called Vuurmuur somewhere; it works okay too.

I always focus on the user experience of something. The moment I write some script or program, I will go to great lengths to make it as easy for people as possible. And also for myself, of course. That may take a lot more time, but in the end that is really insignificant when you compare how much benefit it gives you. I once came across some open source package where the maintainer or developer said: there is no documentation, /because this is free software/. /And because we are developing this in our free time, we have no time to write documentation./ That was just such outstanding nonsense it hurt the eyes to read it ;-). There is no requirement whatsoever to force all your attention into development, and of course openSUSE has pretty good documentation, I guess.

You can say all you want about how good your system is and come up with metrics to prove it. But my experience is still the same. I also think that openSUSE is more robust than, say, Kubuntu or even Ubuntu, probably.
openSUSE has a more robust feel to it from the get-go; to me that is a given. I am not saying that means everything is better, but as a small example:

- there are hardly any third-party Ubuntu repositories (Launchpad) that are actually worth something;
- there is a great number of usable and helpful third-party repositories for openSUSE that allow you to upgrade something to a newer version.

Just a simple thing, but clearly indicative. Nevertheless, we must differentiate between at least three things:

- a distribution is, for the most part, not the software itself;
- there are a million things that can go wrong in any piece of software that perhaps *could* be tested as clearly and robustly as openSUSE is being tested, but experience dictates that a lot of errors, bugs and difficulties make it through in Linux that are almost always absent from Windows;
- when I talked about robustness, I meant mostly the distribution-level stuff you talked about, which clearly informs the developer of a great many things that can go wrong, but I was equally talking about crashing software and all of that. Granted, a full snapshot rollback would normally indicate something wrong with the package system or some system configuration.

Although I feel all of this feedback is great for diagnosing problems, it is no excuse for, or replacement of, a full understanding of what you are doing, and of an architecture design that ensures you make software that is less bug-prone to begin with.

I was once watching a bit of a mailing list about the development of Plasma. The messages I saw spoke of hugely complex software. You know, you can usually directly perceive not just how complex something is, but also how complicated. I like that there are two words for it. Complex means many components can be working together while the whole is still elegant. Complicated means the opposite: it is a lack of elegance. When something is utterly simple, it can also be utterly complex, because the building blocks are so well defined. The better your building blocks are - elegant, simple, well designed, with clear boundaries and clear concepts - the bigger the systems you can build with them without losing that elegance. When something is complicated, however, it requires effort to understand. A truly complex/simple system is effortless to understand; a complicated system always requires hard work. And what those Plasma coders showed me was complicated, not complex. A bug fixed on the left introduced a bug on the right, that sort of thing.

And I would be very happy if my system never broke thanks to your testing, I guess. But happiness is not really the right word; happiness, in my case, results from being around what I like. The system might not break - great, wonderful. But here is the third part:

- a system that is robust due to automated testing feedback is not the same as a system that is robust because of great design principles. The first is a mechanism and a process. The second is... a choice. You can say the first is a choice as well: the choice not to break anything. But the latter is the choice to make something that /cannot/ break.

So what I'm saying in response to your part 3 is this: yes, it is good. But it seems to be about reactively fixing mistakes rather than proactively designing something good. What you do is basically bug chasing. It is not leading.
It is following. Debugging is always like that, but you would rather write software that has fewer bugs so you won't need to work so hard before it compiles ;-) (for instance).

What I mean by that - and I'm getting a bit tired - relates to rollback systems. You're saying that it doesn't matter, because your systems are robust /anyway/. It is as if you have prevented yourself from ever making a mistake, because you'll be caught by your garters - I mean something else ;-) - when you do. What I myself was alluding to was a change in incentive. Perhaps you might say incentive is irrelevant, because the incentive stays the same: we are always committed to delivering unbroken systems, and we have put in place the measures to ensure that. Yet this mechanic cannot replace what the heart does - that is all I am saying now. And the reason, of course, is that robustness also extends into user-friendliness, usability. Perhaps this is at odds, because rollback should indeed refer to a distribution and its package system (for instance), and now I'm suddenly talking about the quality of individual packages. And of course you can break stuff. But if the system were perfect, you wouldn't need to roll back. And it just doesn't feel that way. This is nice in principle, but reality is different.

4. Well, you are right about that - or I assume you are. Just one experience with QA systems. I once worked in a factory that produced air filters for air-conditioning systems. My direct boss or supervisor wanted me to work really fast. But next door was the QA team, and they had other ideas, and sometimes sent stuff back. It was like trying to serve two bosses at once - God and the Devil, in that sense. And I was torn: I wanted to do high-quality work, but for my direct boss, quality was only measured by the number of items nobody complained about. If it passes, it is good, even if I didn't feel that way. More than that, I hated being resented by the QA team. I did not consider having my work sent back a good way to ensure good products. I wanted to make something good right from the get-go, not as a result of shoddy work being thrown back at me.

So yes, it slows things down; you are obviously and probably very right. But it is the wrong way to go about things, for me. I don't create something shitty and then throw it at the door until it is not shitty anymore. I create a thing of beauty right away, and I don't depend on outside factors to judge my work. If you are working with a reactive system like this, the end result is only as good as the system of tests you run, and absolutely no better than that. That's not creative, and it doesn't lead to better designs. It just leads to more fixes of poor designs. I like to live at the cause of my experience, not at the result of it. At the start of it, not at the end. As the creator, not as the respondent.

5. Sure, but still not for the right reasons, and still as a mechanical thing instead of something that requires or fosters a vision of what a good software product is going to be. It is like saying natural selection is purely the result of random mutations; I don't think that's true. I think that, in an important way, creation or evolution follows a plan. But regardless: if you don't give direction to what you do out of a desire to create that perfect system, and you are content with just fixing the errors that pop up from doing half a man's job, you will never create something outstanding.

6. That's fine. But like I say, it is still done without enough of a mind as to what you are doing.
Because you can do this thing while drunk and half asleep. I know how it works. It is bliss in a way, but you just habitually run a test suite because you are too tired to think (for instance), so instead of really knowing what you're doing, it doesn't really matter, because if you happen upon the correct solution by happenstance, that is okay too. And once it passes the test, you are like: okay, fine, I'm done. And you may create stuff that works according to the tests. But the tests were created by someone with an idea of what the system needs to do. Those tests embody that person's mindset. So the system will do exactly what that person wants, and no more. So where is the creativity? Where is the innovation? And where is real software quality? I just don't think this is an environment that really favours or fosters thinking about real quality. Just like with my boss: the job was to deliver as shoddy a piece of work as possible while still passing the test, because THIS MAXIMIZED MY PRODUCTION. He didn't care about quality; that was not his job. He cared only about output.

So you say the development PACE is still very high and only somewhat slowed down. Okay. But I was not talking about pace. Pace is quantity. I was talking about quality. And I think - and pretty much know for sure - this is the same with openSUSE or any other distribution that does this well. The goal is to maximize throughput with as poor a product as will still pass the tests. And that is WHAT I was saying: features (Y(f)) but not quality (Y(r)). The energy you spend on quality is the minimum that will pass the test, because the effort budget is fixed, M = Y(f) + Y(r), which means that minimizing Y(r) maximizes Y(f), and hence your throughput. In the other email I said that you could also see benefit as a product of quantity and quality. What you do is find the lowest quality that will pass the tests, which means you can direct more effort at quantity and as such speed up the process of development. Expediency, as we have called it here.

But I was alluding to two things:
- expediency is naturally favoured by an enjoyable work environment;
- quality, however, has to compete with quantity, and rollback systems tend to favour quantity.

I know, Mr. Brown, you may feel as though you are on fire. Your systems are in place, everything runs like a train, oiled like a good machine. Let's keep going, man! And this favours OUTPUT. This favours PACE and, in that sense, yes, expediency, if we have to use the word that way. It is a more complex term, of course. You set a certain quality level, use the minimum amount of energy to reach it, and then spend everything else maximizing output at that required quality level. This is just exactly what I was saying, except that I was tying rollback into it. I was saying that quality could 'even be lower' given a rollback solution that is very effective, fast and easy to use. You set a certain platform that you want to reach, and you are content when you reach it, no matter whether you expend maximum effort in maintaining it. It can also mean, indirectly, that the person designing the platforms is content with less, in a certain way. After all, it keeps the pace going, right? Why worry about details so much? We are on the GO! ;-). You cannot instil creativity in the process if reaching QA goals is your goal. There will be no growth. No identification of issues. No dreams of betterment. No creative appraisal.
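One way to state that budget argument formally (my notation, nobody's established model; Q is a hypothetical function mapping quality effort to achieved quality, and Q_min is the level the test gate enforces):

    M = Y(f) + Y(r)                              (fixed total effort)
    maximize Y(f)  subject to  Q(Y(r)) >= Q_min
    =>  Y(r)* = Q^(-1)(Q_min)   and   Y(f)* = M - Q^(-1)(Q_min)

That is, under a fixed pass/fail gate, the throughput-maximizing strategy spends exactly the minimum on quality and everything that remains on features.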
If QA is the goal and the metric, then why would you need anything else, right? No need to be critical, etc.; that's the job of QA, not you. Your job is just to produce.

7) System rollback with ANY distribution is, in my mind, essential because there are thousands of ways a user or 3rd party software can ruin a system.

That is rather Linux-centred, of course. And while you call it essential, I have never had it! And I still don't really want to have it. It doesn't feel right to me. For example - and this is not really an example but just what happens to me - when I roll back my system to a previous state, I lose track of what my state is. If I had made changes since the last snapshot, I don't remember them, or don't remember to make them again, because in my mind they are already done. With a backup it's not so bad, because I make it consciously.

I really hate it when the state of some software product I'm working on becomes inconsistent in my mind. I don't know what's what anymore. It might take weeks of going through the source before I know where I'm at. In the meantime, I don't feel happy. I feel uncertain about the product. It doesn't feel like my own anymore. My most important project ended up in that state, partly due to Git, I think, and partly due to a mistake I made with an overlay filesystem. In Kubuntu you have aufs. It is a risky thing. I was writing changes and I didn't realize they were written to the overlay and not to the filesystem that the source was coming from. Stuff like that can really mess up the coherent state of something, at least in your mind. That product is still in a bad state. I lost text I had written which linked to a girl I liked - really, stuff like that. Due to Git, I think, I lost some work. I had done the work while sitting next to her, talking about it. Losing the text means losing the connection. Man. And now some work is lost that was important to me, and I can't redo it, because I did it with her next to me. You know, stuff like that. Sure, if you can pick a file out of a snapshot, you might be very happy. But then it's STILL messed up.

I'm fine with backups. I'm not fine with snapshots. I just really, really don't like them. It messes me up. The example was that I reverted Windows to some previous state. I hadn't realized yet that I could access files in the snapshot directly. The snapshot restore messed up some folders, because Windows does a bad job at it. Then I had a manual job getting that fixed again. And afterwards, I still didn't know whether everything was alright. This uncertainty about the state of a system or project doesn't happen when I make a real, conscious backup. It does happen to me with forms of snapshotting. Call it weird, call it strange; it has that effect on me. A total system snapshot is too big to comprehend.

Well, I don't know. After that SINGLE unimportant system restore, I seriously wanted to reinstall Windows. It had fucked it up for me. I didn't enjoy it anymore. I don't even like loading saved games in a computer game these days. I prefer doing the level over from scratch. It feels better, even if it takes longer. I remember how the experience of Zelda 1 on the NES got ruined when you played it in an emulator, because of the save states. Same thing - a save state is pretty much the same thing. Using save states, you can defeat any encounter, because if you mess up, you just go back 2 seconds and try again! No longer any long spell of concentration required. Hence, no longer any concentration.
Just rewind a million times if you have to; you only have to perform in 5-second chunks anyway. But in the old days, on the NES itself, you had to play flawlessly for something like half an hour. That was the real stuff.
I'll expand on these individually
1) Look at openSUSE. We've had snapper/btrfs in the distribution for years, and as the default since 13.2. And yet the trend for openSUSE releases has been to *extend* the release cycle, from the previous 8 months to the current 12 months. Having snapper by default hasn't sped up software development.
2) This, I think, can be accepted without too much argument. The longer something is developed, the more testing, time and effort we expend on polishing it. The balancing act is ensuring that when you actually release, the final product is still up-to-date enough to be interesting and useful to the people who want to use it. This was one of the key motivators for the direction Leap is taking (more stable/unchanging as a general goal, using an Enterprise codebase to achieve that), while we have Tumbleweed as the counterbalance without a release schedule.
3) openQA really is magic. Every single build of Tumbleweed gets tested with over 100 different scenarios before it is released. Software RAID, encrypted LVM, dual-booting with Windows 8, filesystems, KDE, GNOME, LVM with RAID 1, memtest, minimal X, split /usr, textmode, UEFI, UEFI with Secure Boot, UEFI with USB booting, updating from 12.x and 13.x, network installs, live CDs, and more are tested automatically, with image- and log-based automatic assessment of the results. i.e. when Tumbleweed ships, openQA knows that every screen it cares about *looks* the way we want it to look for a user, and every command it typed *acts* the way we want it to act. That is broader coverage of the functionality of our Linux distributions than most corporate manual QA departments can manage with several weeks of human testing... and openQA does it every day, sometimes twice a day.
4) Tumbleweed uses openQA as an integrated part of the software development process. Even before any new package hits any distribution, incoming submit requests are 'staged': the Build Service makes 'what-if' DVDs that contain the changeset from the submit request, and then openQA does brief testing to ensure the OS is still valid. If that fails, the package isn't allowed anywhere near any of our distribution repos. Only once it is accepted does full system validation kick off, with the breadth I described in 3).
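For readers who haven't seen that flow from the packager's side, here is a rough sketch using osc (the package name 'hello' and the version are stand-ins; the staging and openQA gating happen server-side once the submit request exists):

    # branch a Factory package into your home project and check it out
    osc branch openSUSE:Factory hello
    osc checkout home:$USER:branches:openSUSE:Factory/hello
    cd home:$USER:branches:openSUSE:Factory/hello

    # ...update the sources and add a .changes entry...

    osc commit -m "Update to 2.10"
    # create the submit request back to Factory; staging + openQA gate its acceptance
    osc submitrequest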
5) For Tumbleweed, the process in 4) is necessary for the rolling release to be viable, but it has proven so effective at ensuring quality that it is also used not only by Leap but by the SUSE Linux Enterprise development teams. Even with a 'traditional' release schedule, which provides time for manual QA, it is beneficial to have a constant picture of hundreds of different installation, configuration, and production scenarios. openQA can keep track of that picture for every single development build - not only milestones like Alphas and Betas, which undergo manual testing, but all the intermediate builds that occur as things rapidly change in each distribution's OBS projects.
6) So, yes, in one sense you're right. Tumbleweed moves fast and relies only on magical automated testing, so system rollback is a pretty good idea for Tumbleweed users in case something slips past openQA, the magical automated quality-testing robot. That said, as an avid Tumbleweed user, I have to admit that in the last 2 years the only times I've had to use snapper were when *I* had screwed up my machine doing stuff that *I* should not have done... so I am more dangerous than a rapid rolling development model.
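For anyone who hasn't had to do that recovery, on a default openSUSE btrfs root it looks roughly like this (the snapshot number 42 is made up):

    snapper list        # find the last snapshot where things still worked, say #42
    snapper rollback 42 # make a writable copy of #42 the new default subvolume
    reboot

    # if the system no longer boots at all, pick a read-only snapshot from the
    # GRUB 'bootable snapshots' menu first, then run 'snapper rollback' from there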
7) At the end of the day, the discussion of quality is actually mostly irrelevant to a discussion about snapper:
- There are ~2500 packages in SUSE Linux Enterprise.
- There are ~7500 packages in openSUSE Leap and Tumbleweed.
- There are *tens of thousands* of packages in OBS. These have no testing. They are not integrated with our distributions. Many of them exist in order to be developed for future versions of SLE, Leap and Tumbleweed. There should be no expectation of 'quality' at all. They build, they are published, and people use them. That is risky, and yet people do it every day.
- There are *millions* of other open source third-party packages. These also have no testing, and they are not integrated with our distributions. They might not even go through the most basic of checks which OBS does as part of a build. Lots of software languages now have their own package managers and repositories which can effectively 'sideload' software onto your machine, bypassing your system package manager (npm, gems, etc). People don't care, want to get something done, and use them anyway. That is even more risky, and yet people do it every day.
- Offerings like CoreOS/Atomic/containerisation all try to offer solutions to this, but the reality is they are far, far away from being a comprehensive fix. Tools like Machinery http://machinery-project.org/ can identify unpackaged files and changes to files from packages (a usage sketch follows after this list), and speaking from experience, the situation out there in the real world is a messy, ugly place full of local hacks, forgotten changes, rogue software, and mess.
- In addition to the thousands of OBS and 3rd-party packages out there doing god-knows-what to the machine, at the end of the day users are human. And humans are fallible. People screw up.
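The Machinery sketch referenced above; 'myserver' is a hypothetical host reachable over SSH, and the exact flags are from the documented scope names, so treat them as an assumption:

    # inspect a remote system for files that no RPM package owns
    machinery inspect myserver --scope=unmanaged-files

    # display what the inspection found
    machinery show myserver --scope=unmanaged-files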
Even a 100% perfect quality Linux distribution can be easily ruined by one 3rd party programme or one wrong command typed by one mistaken user. Good backups are great in disasters, but mistakes happen every day. You need system rollback regardless of how good the software quality is. The real world demands it.
On 25/03/2016 09:41, Richard Brown wrote:
Ooh a thread about development expediency and quality... I wonder how much hate I'm going to get for sharing my feelings on this topic :)
why should you :-)
On 25 March 2016 at 08:31, Xen <list@xenhideout.nl> wrote: <a lot of stuff about quality and development expediency>
You make some very interesting points which sound very reasonable, but I disagree with most of them
Good start :-) I read it all, but overly long posts are at risk of not being read (I gave up in front of Xen's, sorry). You need system rollback regardless of how good the software
quality is. The real world demands it.
I mostly disagree with this conclusion. You said yourself you used rollback only once. Was it that urgent? Was the damage so hard that it couldn't have been debugged in some minutes?

I disabled snapshots entirely on my btrfs root. The extra size needed by the default install is incredible: with 50 GB I was blocked by this system, when I use only 11 GB on a very loaded Leap root. When I did use snapshots, none of my problems could be solved by them (the system didn't start any better). In fact, most if not all of the serious problems I have are caused by me, and for those there is no rollback. AFAIK, none of my essential installs has had a major problem for years (I love openSUSE :-)), which is why I still have 13.1 on my servers - never any need to reinstall :-).

And there is a last reason. If one ever has to roll back after an update, what is the system good for? What is to be done with the next update? I remember a recent advice: "do not update to kernel XXX". But then what? There was no "now you can".

When you have a problem after 3 years of uninterrupted service, it may be time for a fresh system anyway :-)

jdd
On 25 March 2016 at 13:08, jdd <jdd@dodin.org> wrote:
I mostly disagree on this conclusion.
You said yourself you used rollback only once. Was it that urgent? Was the damage so hard that it couldn't have been debugged in some minutes?
I've used it a few times... 2, maybe 3, in the last two years. In one case, zypper, yast, bash, and zsh were gone... yes, I didn't have much other choice. In another, I was less than 5 minutes away from going on stage to give an important presentation, which included demonstrating how openSUSE worked.
I disabled snapshots entirely on my btrfs root. The extra size needed by the default install is incredible: with 50 GB I was blocked by this system, when I use only 11 GB on a very loaded Leap root.
When I did use snapshots, none of my problems could be solved by them (the system didn't start any better).
In fact, most if not all of the serious problems I have are caused by me, and for those there is no rollback. AFAIK, none of my essential installs has had a major problem for years (I love openSUSE :-)), which is why I still have 13.1 on my servers - never any need to reinstall :-).
And there is a last reason. If one ever has to roll back after an update, what is the system good for? What is to be done with the next update? I remember a recent advice: "do not update to kernel XXX". But then what? There was no "now you can".
When you have a problem after 3 years of uninterrupted service, it may be time for a fresh system anyway :-)
snapper isn't just about rolling back because of package updates and package problems - it takes snapshots then because that is clearly a point of risk, but there are many, many other ways files in the root filesystem can be altered that will lead to behaviour on your system you do not want. btrfs and snapshots really are the way to go to avoid that.

(SIDE NOTE: there have been a few comments about the 'granularity' of snapper that suggest people do not know just how well it works. Unlike LVM and other block-based snapshotting tools, btrfs snapshots are totally and utterly 'diffable'. The snapper CLI and the YaST snapper tool let you compare, diff, and selectively roll back specific changes to any specific files between snapshots. You can't really get more granular than that, and it also means snapper has an awesome secondary role as a diagnostic tool. e.g. you think a package install is misbehaving? No problem: do a diff between the pre and post snapshots of any package install, and snapper will be able to tell you EXACTLY what that package install did to every file on your system.)
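To make that side note concrete, the workflow looks roughly like this (the snapshot numbers 107/108 and the file path are illustrative; zypper creates the pre/post pair automatically around each package operation):

    snapper list                                     # find the pre/post pair for the install
    snapper status 107..108                          # every file created/changed/deleted between them
    snapper diff 107..108 /etc/example.conf          # unified diff of one changed file
    snapper undochange 107..108 /etc/example.conf    # selectively revert just that file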
On 25/03/2016 13:21, Richard Brown wrote:
The snapper CLI and the YaST snapper tool let you compare, diff, and selectively rollback specific changes to any specific files between snapshots
You can't really get more granular than that, and it also means snapper has an awesome secondary role as a diagnostic tool
e.g. you think a package install is misbehaving? No problem: do a diff between the pre and post snapshots of any package install, and snapper will be able to tell you EXACTLY what that package install did to every file on your system)
Smart. A good candidate for shows, experiments, the learning curve :-) Linux moves so fast it's difficult to cope with (think of KDE 4, systemd, btrfs, now snapper...); time to have some sort of MOOC https://en.wikipedia.org/wiki/Massive_open_online_course

thanks
jdd
On Fri, Mar 25, 2016 at 6:21 AM, Richard Brown <RBrownCCB@opensuse.org> wrote:
(SIDE NOTE: there have been a few comments about the 'granularity' of snapper that suggest people do not know just how well it works.
If I do an update, OS and application binaries are all conflated into one event. This isn't related to Btrfs or Snapper, but neither solves this problem either. The rollback mechanism doesn't help me figure out whether the problem is system- or application-related: I get to roll back both or neither. And the package manager doesn't make it at all obvious how to downgrade an application to the previous version, or two versions back. [1]

So really the problem I'm having with this arrangement is extremes of granularity. There is insufficient granularity separating OS and apps. But there are four snapshots for the single downgrade event I did in [1], and there are changed files in each of those snapshots, so I really have no idea what each of those things is; it's too much information to sort out and make a decision from.

It's immensely easier to downgrade on OS X, where I just go find the older version's disk image file on Mozilla's web site, drag-and-drop uninstall the new version, and drag-and-drop install the old version. Or I can even rename the new one and both can coexist in /Applications at the same time, without conflicting with each other and without affecting any other binaries on the system. It's 8000% better UX. I'm hopeful xdg-app offers such self-contained bundles that work similarly to this, except better, and are more portable across distributions.

[1] Install Tumbleweed 20160307. YaST > Online Update shows no updates. Click the Search tab, type in firefox, click the Search button, click on the Versions tab. 44.0.2 is installed, 45.0-1.1 has a radio button on it, but it won't install when I click Accept. *shrug*. OK, so go back and click on 44.0-1.1 under that, click Accept, and there's a list of Automatic Changes which shows a bunch of files about to be changed that have nothing to do with Firefox, e.g. libwayland-server0, libuuid1, libz, libopenssl, libncurses. So I click Continue and in fact a bunch of those things are downloaded and ostensibly installed. Now I click on Snapper and I see 44 items even though openSUSE was installed just a few hours ago and no updates have been done. I have no real good way of figuring out

-- Chris Murphy
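For what it's worth, the command-line route to that downgrade is less opaque than the YaST path described above, though still nothing like drag-and-drop (the version string is illustrative; on openSUSE the Firefox package is named MozillaFirefox):

    zypper search -s MozillaFirefox                        # list every available version
    zypper install --oldpackage MozillaFirefox=44.0.2-1.1  # force the older version
    zypper addlock MozillaFirefox                          # stop the next update re-upgrading it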
On 25-3-2016 at 13:21, Richard Brown wrote:
(SIDE NOTE: there have been a few comments about the 'granularity' of snapper that suggest people do not know just how well it works.
Unlike LVM and other blockbased snapshotting tools, btrfs snapshots are totally and utterly 'diffable'
The snapper CLI and the YaST snapper tool let you compare, diff, and selectively rollback specific changes to any specific files between snapshots
You can't really get more granular than that, and it also means snapper has an awesome secondary role as a diagnostic tool
e.g. you think a package install is misbehaving? No problem: do a diff between the pre and post snapshots of any package install, and snapper will be able to tell you EXACTLY what that package install did to every file on your system)
That's good to hear. I just haven't used btrfs myself, because I just don't like it. The problem I have with people in this thread (not you) is that when I say "I don't like it" and give some of the reasons why (because people ask, for instance), other people will go and attack those reasons, as if they need to change what I like and don't like. Maybe I misunderstand people, but: if you don't like something, you don't like it. If you like something, you do like it. I mean, how hard is it? How many times have we really experienced being wrong about something? You like the sound of Romania, you go visit Romania - will you be disappointed? You don't like the sound of Brazil, you go visit it - will you be disappointed in that? A pleasant surprise - sure, it happens. But being confirmed in what I knew beforehand is much more common in my life. I don't get why people are not entitled to their opinion, or their appraisal, or their feelings. Maybe I am not "modest" enough to be accepted ;-). My life usually consists of a never-ending stream of experiences that tell me I was a fool for not listening to my intuition.
On Fri, Mar 25, 2016 at 2:41 AM, Richard Brown <RBrownCCB@opensuse.org> wrote:
- Offerings like CoreOS/atomic/Containerisation all try to offer solutions to this, but the reality is they are far far away from being a comprehensive fix.
The #1 thing I like about Fedora Atomic and CoreOS is the versioned state of the OS itself. If I have 23.79 and you have 23.79, we have the same OS, however that ends up being defined. Right now, package-managed systems are next to non-deterministic in what package versions they have across systems. There's no practical way to get an entire user base on the exact same version of everything, and then at the flip of a flag move them over atomically to a complete set of updated versions - rather than some packages being the new version and others the old, depending on what mirror they connect to and at what time of day.

-- Chris Murphy
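A sketch of the versioned model described above, using Fedora Atomic's rpm-ostree (not openSUSE tooling, and the exact output format is from memory):

    rpm-ostree status     # show the booted deployment and the previous one, by version/commit
    rpm-ostree upgrade    # stage the next complete OS version atomically; it applies on reboot
    rpm-ostree rollback   # point the bootloader back at the previous complete version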
On Fri, Mar 25, 2016 at 1:31 AM, Xen <list@xenhideout.nl> wrote:
It seems you have a problem with people giving their opinion.
I have no problem with opinions, even excessively verbose opinions that constitute bombarding the list with such a superfluous volume of material that no reasonable person would read it. Many people confuse correlation with causation. But you've shown neither. Your entire argument rests on "saying ridiculous things makes them true."
Windows' update mechanism (I take it you mean Windows Update) has nothing to do with my argument: it is neither about disaster recovery (if that were a reason), nor does the update mechanism say everything about the OS. So your argument doesn't follow. The shoddiness of the OS (which you contest, with no good reason) and its evolution were posited as a result of not having to make good software because you can revert.
You didn't actually qualify how Windows 10 quality or reliability is lower than Windows XP's, and you haven't made any argument that ties it to the ability to revert, which has always been possible in one form or another. Hardware driver and kernel reliability has improved immensely between XP and 10; it's a significantly more stable operating system with far fewer instances of kernel panics / blue screens. I've seen no change in this regard on OS X either, which is also contrary to your assertions: for your hypothesis to be true, we'd expect OS X quality to increase due to the lack of reversion. Both OS X and Windows have suffered regressions in bundled application software quality, if the mood and assertions of various forums are trustworthy (they're not scientific samples, so in no way can this be an impartial analysis).
Granted, there may be many more reasons for Microsoft's recent change in how they develop their software. But they've also gone in the direction of Linux ;-). The release cycle has become much shorter (faster), and current-day Windows versions are released with the idea of "we'll fix it later". If that doesn't sound like "if something goes bad, we'll just tell them to use System Restore", I don't know what does.
You don't know. That much is certain.
And logically, it is a pretty safe argument that when a system restore is in place, the requirement for software to always function well becomes less.
No, it's not. You've made no connection whatsoever. It's just an assumption that you expect everyone to accept at face value. You've provided no mechanism for how reversion alters a totally orthogonal software quality assessment. Your style of argument is "the sky is blue, therefore clouds are white", with a layer of "DUH!" on top. It's a stupid argument and no one should buy it.
It is a pretty simple argument you know.
It's pretty simply unconvincing. And I'm not reading any more of it; I'd rather watch water boil than read any more of this.

-- Chris Murphy
On 25/03/2016 18:09, Chris Murphy wrote:
Hardware driver and kernel reliability has improved immensely between XP and 10, it's a significantly more stable operating system with far fewer instances of kernel panics / blue screens.
I don't know how you can prove that. In the last few days I have had probably 5 Windows 10 crashes - more than one for each hour of use - on my new Windows 10 tablet; but instead of a blue screen, one now gets a crash report that takes a long time before releasing the machine. In this respect I had fewer problems with Windows 7. But I don't use this system enough to make statistics. I know nothing about Microsoft's recent development system, but I guess they may also have an automated system and, for sure, an immense beta-tester base (I was one of them in another life :-)
"If something goes bad, we'll just tell them to use System Restore" I don't know what.
In more than 20 years of use I have *never* seen the Windows repair system repair anything, even problems I could fix myself in minutes. To restore, you have to get a restore disk, and pretty often Windows refuses to make one. Last week Windows 10 insisted on using a DVD on that same tablet, which of course has no DVD drive (I was happy to have a USB DVD writer). Not possible on an SD card or USB stick...

That said, I never managed to restore a Linux system after a crash either (no problem for the data). It was much faster to reinstall the system than to fix the handful of settings needed by the hardware change.

By the way, I didn't read anything about SSD (see subject), but I noticed on a new install an "ssd" option in fstab for file systems.

jdd
On 25-3-2016 at 19:51, jdd wrote:
"If something goes bad, we'll just tell them to use System Restore" I don't know what.
In more than 20 years of use I have *never* seen the Windows repair system repair anything, even problems I could fix myself in minutes. To restore, you have to get a restore disk, and pretty often Windows refuses to make one. Last week Windows 10 insisted on using a DVD on that same tablet, which of course has no DVD drive (I was happy to have a USB DVD writer). Not possible on an SD card or USB stick...
That said, I never managed to restore a Linux system after a crash either (no problem for the data). It was much faster to reinstall the system than to fix the handful of settings needed by the hardware change
by the way I didn't read anything about SSD (see subject), but noticed on new install a "ssd" option in fstab for file systems
I meant the Shadow Volume Copy service that makes a snapshot. You don't need to create rescue disks for that. Windows has been doing it since XP, I think. You can browse these snapshots like you can with Snapper, I guess, though Snapper is probably much more advanced in that sense (not necessarily more usable).
On Saturday, 2016-03-26 at 00:50 +0100, Xen wrote:
by the way I didn't read anything about SSD (see subject), but noticed on new install a "ssd" option in fstab for file systems
I meant the shadow volume copy service that makes a snapshot. You don't need to create rescue disks for that. Windows has been doing it since XP I think. You can browse these snapshots like Snapper does I guess, but Snapper is probably much more advanced in that sense (not necessarily more usable).
It's something done on NTFS partitions, and it is little known.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 26-3-2016 at 13:11, Carlos E. R. wrote:
On Saturday, 2016-03-26 at 00:50 +0100, Xen wrote:
by the way I didn't read anything about SSD (see subject), but noticed on new install a "ssd" option in fstab for file systems
I meant the shadow volume copy service that makes a snapshot. You don't need to create rescue disks for that. Windows has been doing it since XP I think. You can browse these snapshots like Snapper does I guess, but Snapper is probably much more advanced in that sense (not necessarily more usable).
It's something done on NTFS partitions, and it is little known.
I don't know why you are saying that, but okay. The thing is, it is not little known at all - now, I don't speak for the ignorant masses, so to speak, but anyone who knows anything about Windows knows about this stuff.

In Dutch they are called "herstelpunten"; in English, "restore points". When you search the Control Panel for this, you will find the dialog where you can make them, browse them, delete them (one or all of them) and configure them. There is also a third-party utility, by the way, called System Restore Explorer that does the same things, only better, I guess. I haven't used it much, and it uses the same API, so you can only use it to browse restore points that are still "functional". I once tried to undelete one; it didn't work ;-).

Because, you know, there are no other partitions in Windows than NTFS ;-). Not counting USB sticks. But anyway, I didn't mind that you didn't know about it. People just haven't been making rescue disks for a long time...

Anyway, kudos. Bye.
On Saturday, 2016-03-26 at 21:38 +0100, Xen wrote:
On 26-3-2016 at 13:11, Carlos E. R. wrote:
It's something done on NTFS partitions, and it is little known.
I don't know why you are saying that, but okay. The thing is it is not little known at all - now I don't speak for the ignorant masses so to speak. But anyone that knows anything about Windows knows about this stuff okay.
In Dutch they are called "herstelpunten" and in English "Restoration points" I think. When you search in Configuration Screen on that thing, you will find the dialog screen where you can make them, browse them, delete them (or all of them) and configure them.
Not restore points.

.....
Shadow Copy (also known as Volume Snapshot Service,[1] Volume Shadow Copy Service[2] or VSS[2]) is a technology included in Microsoft Windows that allows taking manual or automatic backup copies or snapshots of computer files or volumes, even when they are in use. It is implemented as a Windows service called the Volume Shadow Copy service. A software VSS provider service is also included as part of Windows to be used by Windows applications. Shadow Copy technology requires the file system to be NTFS in order to create and store shadow copies. Shadow Copies can be created on local and external (removable or network) volumes by any Windows component that uses this technology, such as when creating a scheduled Windows Backup or automatic System Restore point.
.....
<https://en.wikipedia.org/wiki/Shadow_Copy>

I did not say I didn't know about it...
Because you know, there are no other partitions in Windows other than NTFS ;-). Not counting USB sticks.
No, not true. There was FAT. I have seen more XP machines on FAT than on NTFS.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 03/27/2016 10:50 AM, Carlos E. R. wrote:
No, not true. There was FAT. I have seen more XP machines on FAT than on NTFS.
Don't forget ReFS. It's sort of still NTFS underneath, but it seems like a BTRFS wanna-be.

-- After all is said and done, more is said than done.
On Mon, Mar 28, 2016 at 11:28 AM, John Andersen <jsamyth@gmail.com> wrote:
On 03/27/2016 10:50 AM, Carlos E. R. wrote:
No, not true. There was FAT. I have seen more XP machines on FAT than on NTFS.
Don't forget ReFS
It's sort of still NTFS underneath, but it seems like a BTRFS wanna-be.
It's using the NTFS ioctl (or API or whatever they call it on Windows) but it's definitely something new. It has a new on-disk format.

-- Chris Murphy
John Andersen wrote on 28-03-2016 19:28:
On 03/27/2016 10:50 AM, Carlos E. R. wrote:
No, not true. There was FAT. I have seen more XP machines on FAT than on NTFS.
Don't forget ReFS
It's sort of still NTFS underneath, but it seems like a BTRFS wanna-be.
All of that is irrelevant here. If you install Windows 8 or 10, you don't even have the option to create a FAT partition. FAT is long gone, except for USB sticks and the like, which is what I indicated. I don't think Windows even mentions that it is using NTFS.

This means the shadow volume copy thing is available to 99.99% of Windows users these days, at least for their main disk, and it is also turned on by default for the main disk.

And in a general sense, there is nothing special about that; I was just mentioning it as a reference, or example, or case in point of another OS that also has it. No point going into it further: it is a base feature, it works as intended (well, most of the time), it has a GUI, you can browse the snapshots, etc. etc. For a Windows user there is really no difference between "snapshot" and "system restore point". The system restore points are the only snapshots you normally have access to.

Regards.
On 29/03/2016 01:27, Xen wrote:
I don't think Windows even mentions that it is using NTFS.
yes, it does (in disk properties)
This means the shadow volume copy thing is available for 99.99% of Windows users these days on their main disk at least, and it is also turned on by default for the main disk.
no it's off by default and not that easy to find (search for "restore")
No point to go into that, it is a base feature and it works as intended
It's an old feature, and I now remember why I don't use it.

Windows systems, basically, do not separate root and home disks (partitions are disks for Windows), so a restore point may also include user data. At the time I often spent much time working before noticing a problem, and I don't want to lose the work done since the last restore point.

No idea if it's still relevant. But it's not relevant with the separate /home we have on openSUSE.

jdd
jdd wrote on 29-03-2016 8:43:
On 29/03/2016 01:27, Xen wrote:
I don't think Windows even mentions that it is using NTFS.
yes, it does (in disk properties)
I meant while partitioning and formatting... ;-)
This means the shadow volume copy thing is available for 99.99% of Windows users these days on their main disk at least, and it is also turned on by default for the main disk.
no it's off by default and not that easy to find (search for "restore")
"System protection is turned on by default on the hard disk that Windows is installed on." From a Microsoft website, and also my recollection. No, you're wrong, and it is turned on by default. I don't get why you people are making this into a point of discussion.
No point to go into that, it is a base feature and it works as intended
it's an old feature and I now remember why I don't use it.
Windows systems, basically, do not separate root and home disks (partitions are disks for Windows), so a restore point may also include user data.
Actually, it separates user files and program files based mostly on extension. Any file with an extension recognised as belonging to a document is apparently skipped by the whole System Restore mechanism. A system restore will not be able to recover your document files; conversely, it will also not overwrite them when you run one. This only works for "document" file types. The thing is only meant for returning Windows to a working state.
I spent (at the time) much time working, often, before noticing a problem and I don't want to lose my work made since the last restore point
no idea if it's still relevant.
Not sure if it was ever different. Can't tell, I didn't know back then.
On 29/03/2016 08:52, Xen wrote:
jdd wrote on 29-03-2016 8:43:
no it's off by default and not that easy to find (search for "restore")
"System protection is turned on by default on the hard disk that Windows is installed on."
From a Microsoft website, and also my recollection. No, you're wrong,
Sorry, but I just verified on a stock Windows 10; maybe it depends on the Windows version (mine is "Family", i.e. the Home edition).
and it is turned on by default. I don't get why you people are making this into a point of discussion.
You did, not me... I guess Snapper is meant to do similar things with btrfs, but the pre/post system is not very clear to me, and I don't have enough room on my main disk to experiment with it :-(

jdd
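For reference, the pre/post mechanism can also be driven by hand, which makes it easier to see what it does; a minimal sketch (the descriptions are made up):

    # take a 'pre' snapshot and remember its number
    PRE=$(snapper create --type pre --print-number --description "before my change")

    # ...edit configs, install things, break stuff...

    # take the matching 'post' snapshot
    POST=$(snapper create --type post --pre-number "$PRE" --print-number --description "after my change")

    # list everything that changed between the two
    snapper status "$PRE..$POST"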
On Tue, Mar 29, 2016 at 9:43 AM, jdd <jdd@dodin.org> wrote:
On 29/03/2016 01:27, Xen wrote:
I don't think Windows even mentions that it is using NTFS.
yes, it does (in disk properties)
This means the shadow volume copy thing is available for 99.99% of Windows users these days on their main disk at least, and it is also turned on by default for the main disk.
no it's off by default
True for Windows 10; in previous versions it is on by default.
On Tuesday, 2016-03-29 at 08:43 +0200, jdd wrote:
Windows systems, basically, do not separate root and home disks (partitions are disks for Windows), so a restore point may also include user data.
At the time I often spent much time working before noticing a problem, and I don't want to lose the work done since the last restore point
no idea if it's still relevant.
But it's not relevant with the separate /home we have on openSUSE
With Win 95 I typically used a different partition for the Documents folder. Surprisingly, it is not that simple to do with current versions; apparently, you have to do it for each user. However, NTFS supports links, I have been told, so that could be a trick for using a separate disk or partition for data.

Also interestingly, other Linux distros don't do a separate /home folder, so apparently there are reasons both ways.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 30/03/2016 13:15, Carlos E. R. wrote:
Also interestingly, other Linux distros don't do a separate /home folder, so apparently there are reasons both ways.
I use a lot of data (photos and video files), so my work is a bit different. When I can, I use a different partition for /home, but it's not really important. What is important is to never use the same /home for various distros/installs. And I use another, unique, very large partition for data.

/home is the home of . files, with the application configs. When versions change, keeping config is scary, especially if you go back and forth.

jdd
On 03/30/2016 07:20 AM, jdd wrote:
On 30/03/2016 13:15, Carlos E. R. wrote:
Also interestingly, other Linux distros don't do a separate /home folder, so apparently there are reasons both ways.
I use a lot of data (photos and video files), so my work is a bit different.
I'm sort of like that. My personal life involves a lot of photography (not videos), my professional life a lot of documents, many wikified, and a lot of papers as PDFs or presented as PDF versions of presentations or as e-books of papers or e-books of presentations. So what's under /home/Documents, /home/PDF, and /home/Photographs is very extensive. Extensive enough to be on individual "partitions". They could, given compatible file systems, be mounted for different distribution, if I were running different distributions.
When I can I use a different partition for /home, but it's not really important. What is important is to never use the same /home for various distros/installs
But the idea of having to have different /home/ and hence different /home/anton for each distribution bothers me. Why?
/home is the home of . files, with the application configs. When versions change, keeping config is scary, especially if you go back and forth.
Scary, frustrating, irritating. Why? You don't explain why?

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 30/03/2016 14:07, Anton Aylward wrote:
On 03/30/2016 07:20 AM, jdd wrote:
/home is the home of . files, with the application configs. When versions change, keeping config is scary, especially if you go back and forth.
Scary, frustrating, irritating. Why?
You don't explain why?
To me it seems obvious that when you go from Plasma 5 to KDE 4 on the same home (and sometimes the other way round in the same day), using the same config files is not that good :-). The same goes for many apps that can upgrade a config, but not downgrade it on the fly. It is worse still between different distributions. Want to share /home between Debian, openSUSE or Gentoo :-)? I don't.

jdd
On Wed, 30 Mar 2016 14:07, Anton Aylward wrote:
On 03/30/2016 07:20 AM, jdd wrote:
On 30/03/2016 13:15, Carlos E. R. wrote:
Also interestingly, other Linux distros don't do a separate /home folder, so apparently there are reasons both ways.
I use a lot of data (photos and video files), so my work is a bit different.
I'm sort of like that. My personal life involves a lot of photography (not videos), my professional life a lot of documents, many wikified, and a lot of papers as PDFs or presented as PDF versions of presentations or as e-books of papers or e-books of presentations.
So what's under /home/Documents, /home/PDF, and /home/Photographs is very extensive. Extensive enough to be on individual "partitions". They could, given compatible file systems, be mounted for different distribution, if I were running different distributions.
When I can I use a different partition for /home, but it's not really important. What is important is to never use the same /home for various distros/installs
But the idea of having to have different /home/ and hence different /home/anton for each distribution bothers me. Why?
/home is the home of . files, with the application configs. When versions change, keeping config is scary, especially if you go back and forth.
Scary, frustrating, irritating. Why?
You don't explain why?
Desktop and app config shared between different distros? You want frustration, ulcers, near-permanent anger, yes? Doing so will bring your work to a sudden halt. That will not work - not even inside one distro between versions (e.g. Leap 42.1 and Tumbleweed).

Why:
a) inside one distro (spanning versions):
- changed versions of a DE interpret config options differently.
- one version has a utility, the other does not.
- different options during compile.
b) between distros: basically a) on steroids, with added fun. For an added "WHY", try to switch between SELinux and AppArmor.

Ergo, mounting your $HOME dir (/home/$USER/) in different distros / versions is NOT what you really want.

On my work box I must share a /home partition between distros (CentOS 7.2 and Leap 42.1); for me the trick was this:

/home/myuser.c7/ ; <- for CentOS 7, with SELinux
/home/myuser.l2/ ; <- for Leap 42, with AppArmor
/home/common/{Documents,Templates,Music,Video,.git,Projects,...}

Both users ($USER is 'myuser') have the same $UID and $GID, and in both $HOME dirs there are links to the folders in "common". This way I get what I need, without the headaches (a shell sketch of this setup follows below).

As long as configs and caches are not 'versioned', doing the full mix is not healthy.

Hint: trying to use a "roaming" home between different Windows versions will work partially better, due to Microsoft's harder enforcement of versioning in registry entries. But not all apps respect that. YMMV

- Yamaban.
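A minimal shell sketch of that layout (the user name, UID/GID 1000, and folder list are taken from the example above; run as root on both installs):

    # the shared data lives once, outside either per-distro home
    mkdir -p /home/common/{Documents,Templates,Music,Video,Projects}
    mkdir -p /home/myuser.c7 /home/myuser.l2
    chown -R 1000:1000 /home/common /home/myuser.c7 /home/myuser.l2   # same UID/GID on both distros

    # each per-distro home gets symlinks into the shared area
    for d in Documents Templates Music Video Projects; do
        ln -s "/home/common/$d" "/home/myuser.c7/$d"
        ln -s "/home/common/$d" "/home/myuser.l2/$d"
    done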
On 03/30/2016 12:36 PM, Yamaban wrote:
On my work box I must share a /home partition between distros (CentOS 7.2 and Leap 42.1); for me the trick was this:
/home/myuser.c7/ ; <- for centos7, with selinux /home/myuser.l2/ ; <- for leap42, with apparmor /home/common/{Documents,Templates,Music,Video,.git,Projects,...}
I could live with something like that :-) But then again, I'd expect some problems with Leap, since that's KDE5 and Plasma5 and a bunch of Qt conflicts, even between 13.{1,2} and Leap.
And both users ($USER is 'myuser') have the same $UID and $GID. in both $HOME dirs there are links to the folders in "common"
This way I get what I need, without the headaches.
I would not dream of mixing an RPM-based distribution {Red Hat, openSUSE, Mageia} with one that is not {Ubuntu and derivatives}. I've moved in the past from Red Hat to Mandriva/Mageia to openSUSE with no problem in my ~/home (which migrated with them). I *have* had problems with the stuff that was in /etc/ being copied/saved and restored, such as Postfix, and Dovecot, and of course AppArmor (and those moves were before the days of systemd, though to get Postfix working under openSUSE initially I had to steal a unit from the Red Hat distribution). I *have* had problems with BtrFS under openSUSE being backward-incompatible between kernels. But that's understandable, as BtrFS is "rapidly advancing".

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On Wednesday, 2016-03-30 at 18:36 +0200, Yamaban wrote:
On Wed, 30 Mar 2016 14:07, Anton Aylward wrote:
On 03/30/2016 07:20 AM, jdd wrote:
/home is the home of the dot files, with the applications' config. When versions change, keeping the config is scary, especially if you go back and forth.
Scary, frustrating, irritating. Why?
You don't explain why.
Desktop and app config shared between different distros? You want frustration, ulcers, near-permanent anger, yes?
ROTFL! X'-)
Doing so will bring your work to a sudden halt.
You are absolutely right. Even sharing data might be problematic sometimes, like having a spreadsheet in one version, then opening it in another version that doesn't have all the functions. -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
jdd schreef op 30-03-16 13:20:
Le 30/03/2016 13:15, Carlos E. R. a écrit :
Also interestingly, other Linux distros don't do a separate /home folder, so apparently there are reasons both ways.
I use a lot of data (photos and video files), so my work is a bit different.
When I can, I use a different partition for /home, but that's not really important. What is important is never to use the same /home for various distros/installs.
and I use another, single, very large partition for data.
/home is the home of the dot files, with the applications' config. When versions change, keeping the config is scary, especially if you go back and forth.
jdd
I really want to write an email with respect to this, but it is getting rather long. I think I will just turn it into an article or PDF and put it online somewhere. Then I will also be able to have some rudimentary indexing and links to the various parts. I wish I had more wiki access. I'm trying to look for alternatives to Wikispaces, but the only one I come across is Wikidot, and all of its wikis are hideous. It is easy enough if you have a WordPress site you can publish on. I have one, or actually several, but I just can't use it to publish anything.
Le 31/03/2016 12:56, Xen a écrit :
It is easy enough if you have a WordPress site you can publish on. I have one, or actually several, but I just can't use it to publish anything.
Why? Can't you manage a simple wiki, like mine: http://dodin.info/wiki/pmwiki.php ? I can even give you an account on my server (but you will have to manage the apps yourself). jdd
jdd schreef op 31-03-16 13:28:
Le 31/03/2016 12:56, Xen a écrit :
It is easy enough if you have a WordPress site you can publish on. I have one, or actually several, but I just can't use it to publish anything.
Why?
Can't you manage a simple wiki, like mine:
http://dodin.info/wiki/pmwiki.php
I can even give you an account on my server (but you will have to manage apps yourself)
jdd
Ha, I remember seeing your site :). But that wiki is as hideous as any other. It is just astounding.
Check this site: wikidot.com. Looks pretty good, right? Now check its wikis.
http://community.wikidot.com
http://destiny.wikidot.com
http://darksouls.wikidot.com
Just some random ones I could find. HIDEOUS. Wikispaces was BEAUTIFUL. And then they closed down (introduced stupid pricing schemes). I even tried to replicate their theme in TiddlyWiki; it didn't quite work out perfectly, but I still have it. I guess it is still the best-looking wiki I have :) :-/.
But as to your question: I have a WordPress site, but it is so outdated I can't use it. I am now offloading its data into simple text format (individual files etc. on the hard drive) so that I will be at liberty to prune it when I want to. The less remains, the freer I will be to go in a different direction.
But a blog is not a wiki. I have not found ONE good-looking wiki outside of Wikispaces that is really a wiki (and not some corporate collab site).
Le 31/03/2016 22:35, Xen a écrit :
Ha, I remember seeing your site :).
But that wiki is as hideous as any other.
Maybe you missed the fact that pmwiki is very customizable. My skin is "Triad": http://www.pmwiki.org/wiki/Skins/Triad but there are many: http://www.pmwiki.org/wiki/Skins/Skins and it's pretty easy to customize one. That said, IMHO, you have two incompatible objectives: be readable or be good looking... You know which I chose.
Check this site: wikidot.com. Looks pretty good, right? Now check its wikis.
pretty:
readables...:
Just for example, the opensuse.org home page is exactly what should never be done...
But as to your question:
I have a WordPress site, but it is so outdated I can't use it. I am now
I once installed a WordPress instance on my server. Quite pretty, but very difficult to manage, always spammed, a nightmare to move from one server to another... I gave up.
On my wiki I can do everything I want: next to no admin work (a new version only has to be written over the old one), never spammed, only one password, for me (and no one else can write on it), extremely robust...
GPS page: http://dodin.info/wiki/pmwiki.php?n=GPS.20141010-cugnaux
Other skin, same wiki: http://lesgazelles.fr/
Very customizable... jdd
jdd schreef op 31-03-16 22:55:
Le 31/03/2016 22:35, Xen a écrit :
Ha, I remember seeing your site :).
But that wiki is as hideous as any other.
Maybe you missed the fact that pmwiki is very customizable.
My skin is "Triad":
http://www.pmwiki.org/wiki/Skins/Triad
but there are many:
You know, I am not even going to look at this point. I scoured many wikis at the time. I still think DokuWiki is the best. My theme isn't much, because I am so bad with colours, but I will post a screenshot later.
and it's pretty easy to customize one.
Yeah but not for me. Colour-blind === troubles.
That said, IMHO, you have two incompatible objectives: be readable or be good looking... You know which I chose.
That's just nonsense. That's a false dichotomy. Good looking usually implies readable.
pretty:
You call that pretty?
readables...:
Readability is usually not a difficult challenge unless you go messing with contrast. There are basically only two ways to make something unreadable: 1. Create bad contrast with the background, or make the font too small. 2.
Just for example, the opensuse.org home page is exactly what should never be done...
That's number two: make everything so big you can hardly read anything. It is some kind of hype these days :(. Almost all stupid commercial sites do it. One line of text that says "WE ARE WONDERFUL" on one page, and then you have to scroll an entire screen to the next page to read "BECAUSE WE SUCK".
That Ring chat thing that was advertised on this list and on the ubuntu-devel-discuss list (which I happen to be accidentally a part of) follows the same model. ALMOST NO INFORMATION. These days people try to draw users by KEEPING PEOPLE IN THE DARK. It all started with Dropbox (for me): https://www.dropbox.com
That site basically says something like "We will make your life complete" and then "Now make an account, you turd faced fuck face. You are too stupid to understand otherwise anyway".
I once installed a wordpress instance on my server, quite pretty, but very difficult to manage, always spammed, a nightmare to move from a server to an other one... I gave up.
All spam gets blocked by Akismet. Not a single spam ever gets through, EVER EVER EVER EVER. I have never tried moving, but... there's not much to consider; "all of it is gone, the thoughts of yesterday in spite still try to linger on" (just a poem on my site).
But I was not saying the software is outdated. I don't care about the software (it is still version 3.9.9.9.9.9.9 or so). No, the content is outdated, and I cannot post anything on it that I can allow people to read.
I can make on my wiki all what I want: next to no admin work (new version have only to be written on the old one), never spammed, only one passwd for me (and no other can write on it), extremely robust...
Yeah, well, I am a WordPress hacker, so. The only annoyance is that the mods I make get overwritten by updates :p.
Becoming immune to spam is terribly easy (if you code a little). The bots don't read HTML pages. Well, maybe the regular spambots do. The spambots that create accounts do not. They just hit the default links. If you slightly modify the default links, they are out of operation.
I built a trap on my site, but there are few bots that actually follow it. I don't entirely understand why, but I suspect that they either: 1) spend more time revisiting than exploring sites, or 2) know about the structure of a WordPress site and parse the parts that they expect to find links in, instead of just parsing for all links.
I have customized my site a lot (well, mostly the theme). I could even turn it into a plugin/addon :). Maybe I can even sell it :P. It would probably not have taken more than a month of work. But I have never finished it yet. It is really awesome, though.
gps page:
A site like yours, I consider unreadable. The fonts are too small. I would have to increase the font size (zoom the page), but I rarely go to that length for a site. I mean the navigation, mostly. DokuWiki also has a lot of small-sized spaces by default. Small margins everywhere.
other skin, same wiki:
Oh that one looks pretty good. The best wiki theme I've seen, perhaps. Thank you, I guess.........
very customizable...
Never said it wasn't. The issue is not the techniques of customizing. The issue is usually the people that need to do it. And they need to have more right-brain performance than me and most people in Linux ;-). Regards.
On Thursday, 2016-03-31 at 23:19 +0200, Xen wrote:
jdd schreef op 31-03-16 22:55:
Le 31/03/2016 22:35, Xen a écrit :
Good looking usually implies readable.
Not to me... -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos E. R. schreef op 01-04-16 03:27:
On Thursday, 2016-03-31 at 23:19 +0200, Xen wrote:
jdd schreef op 31-03-16 22:55:
Le 31/03/2016 22:35, Xen a écrit :
Good looking usually implies readable.
Not to me...
Then elucidate. You people seem hell-bent on destroying everything that is attractive, or staying away from it as far as possible.
Xen schreef op 01-04-16 11:52:
Carlos E. R. schreef op 01-04-16 03:27:
On Thursday, 2016-03-31 at 23:19 +0200, Xen wrote:
jdd schreef op 31-03-16 22:55:
Le 31/03/2016 22:35, Xen a écrit :
Good looking usually implies readable.
Not to me...
Then elucidate. You people seem hell-bent on destroying everything that is attractive, or staying away from it as far as possible.
And this sounds a lot like Fear, Uncertainty, and Doubt.
On Friday, 2016-04-01 at 11:52 +0200, Xen wrote:
Carlos E. R. schreef op 01-04-16 03:27:
On Thursday, 2016-03-31 at 23:19 +0200, Xen wrote:
jdd schreef op 31-03-16 22:55:
Le 31/03/2016 22:35, Xen a écrit :
Good looking usually implies readable.
Not to me...
Then elucidate. You people seem hell-bent on destroying everything that is attractive, or staying away from it as far as possible.
To me, the current openSUSE start web page is pretty. That doesn't mean readable: I have to click on places so that the text opens up. Searching for text may not work. I don't think it would pass the accessibility tests. Prettiness and readability are different things. -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos E. R. schreef op 01-04-16 12:46:
On Friday, 2016-04-01 at 11:52 +0200, Xen wrote:
Then elucidate. You people seem hell-bent on destroying everything that is attractive, or staying away from it as far as possible.
To me, the current openSUSE start web page is pretty. That doesn't mean readable: I have to click on places so that the text opens up. Searching for text may not work. I don't think it would pass the accessibility tests.
Prettiness and readability are different things.
At the same time, it is not a site you want to spend more than 10 seconds of your time on. Or maybe even of your life. That would mean that it is repulsive. You can't even say it is unreadable, because there is not really anything to read on it either. I mean, let's get real: there is no content either.

If you made a really attractive picture of a logo and made that your entire site, no text, no nothing, you could still say that it was pretty. You might even say that it was attractive, for two seconds. Nevertheless, it did not attract you for more than those two seconds, so very attractive it was not, really... Maybe it would be if you could actually look at it not from some cramped window, but with a clear view. Currently the website is not attractive for anything.

So attractive is a much broader category than pretty, because it is about attraction, which is being pulled towards something (and not repelled). Conversely, if something was really attractive and stayed attractive, you would stay with it. We can say "that is an attractive proposal". Attractiveness therefore implies a certain pleasurable experience. If you are coming somewhere to read, and you can't do it, then the prospect of reading is not attractive and you will be repelled.

There was once upon a time someone who wrote a book about Quality. His name was Robert M. Pirsig and he wrote Zen and the Art of Motorcycle Maintenance. "Readability" is in this sense an aspect of Quality, which we may also call attractiveness if we use a broad sweep. You can also call it an aspect of enjoyableness, or of having a good time. A general word for it is "pleasant" or "agreeable". Pretty can add to pleasant and agreeable, at least for the eyes. But there is more to you than just eyes, you know.

Of course something can be pretty while also being dysfunctional, because pretty makes a different statement about function (a different part of function) than readability or accessibility or usability does. But in a general sense they are all aspects of quality, or of attraction. Quality is that thing that says that something is good, that it is good at what it does, so it relates to function. A high-quality broom is a broom you can sweep with and that lasts a long time. A broom that burns very well we usually do not call a high-quality broom. So quality is related to purpose.

If the purpose of the website is to inform people (rather than misinform, and keep in the dark, but convince through some kind of deviousness), then we can say it is a low-quality website. People may disagree, because they may disagree about its purpose, or its effectiveness in attaining something that is actually worth something. Pretty is important in that sense. It is clear that pretty is an important prerequisite in attaining that goal. But it is not the only thing. Of course you are right. But it also does not mean that those two things OPPOSE each other. They do not exist in contraposition to each other!!!!!! And that is the only thing I want to get across.

Making something pretty does NOT take away from readability, unless you fail on that aspect on its own. With those designer skills, you can also make something that is astoundingly readable, and in that case the prettiness ADDS to it. It is not one thing over the other; you need both. But if you give attention only to one thing, and nothing to some other thing, then, well, yeah. The end result is not going to be great. But not BECAUSE it is pretty.
More so, because people thought it was the only thing that mattered: because of NEGLECT. Eating well is not a bad thing, but if you eat so well, or spend so much time eating well, that you consider it beneath you to pay your bills or to reserve money for them, then it won't bode well for you. But that is not a statement about eating well. That is a statement about having bad priorities and skewed conceptions. Some people would then feel that "eating well" would get a "bad rap" (reputation). Pretty might also get a bad reputation, but that is not deserved. Pretty is not the problem. Corporate interests are the problem. You can be pretty and readable perfectly fine. In fact, if you screwed up the prettiness so badly that you couldn't read anything, then you would have ended up at the other extreme. You need both, and they complement each other, but they need to be balanced for that. They are not enemies, is all I am saying.
Le 31/03/2016 23:19, Xen a écrit :
jdd schreef op 31-03-16 22:55:
I still think DokuWiki is the best. My theme isn't much, because I am so bad with colours, but I will post a screenshot later.
and it's pretty easy to customize one.
Yeah but not for me. Colour-blind === troubles.
If so, you must know how "pretty" web sites can be unusable for colour-blind people...
That said, IMHO, you have two incompatible objectives: be readable or be good looking... You know which I chose.
That's just nonsense. That's a false dichotomy.
Good looking usually implies readable.
The full text is readable; the paint is pretty.
pretty:
You call that pretty?
You did, if I understand your mail.
1. Create bad contrast with the background, or make the font too small.
But pretty doesn't care about this.
"Now make an account, you turd faced fuck face. You are too stupid to understand otherwise anyway".
Many web sites, not only commercial ones, don't even bother to say what the product is for... a bit like man pages that don't give examples :-(
All spam gets blocked by akismet.
Nope. Admin action is needed all the time.
Yeah, well, I am a WordPress hacker, so. The only annoyance is that the mods I make get overwritten by updates :p.
This couldn't happen with pmwiki.
Becoming immune to spam is terribly easy (if you code a little).
The bots don't read HTML pages. Well, maybe the regular spambots do. The spambots that create accounts do not. They just hit the default links. If you slightly modify the default links, they are out of operation.
If you read the openSUSE lists, you should know how badly the openSUSE wiki was hit by spam very recently. Some spam is made by real people.
A site like yours, I consider unreadable. The fonts are too small.
Did you notice the big view/normal view button at the upper right? Here, big: http://dodin.info/wiki/pmwiki.php?n=Main.HomePage?setview=big&setfontsize=110 but it's too much for my own use. You also have skin options (little link at the bottom of the page): http://dodin.info/wiki/pmwiki.php?n=Site.StyleOptions But maybe we are going a bit too OT for this openSUSE list :-( Initially I thought it was a PM :-( jdd
jdd schreef op 01-04-16 08:28:
Le 31/03/2016 23:19, Xen a écrit :
jdd schreef op 31-03-16 22:55:
I still think DokuWiki is the best. My theme isn't much, because I am so bad with colours, but I will post a screenshot later.
and it's pretty easy to customize one.
Yeah but not for me. Colour-blind === troubles.
If so, you must know how "pretty" web sites can be unusable for colour-blind people...
I am not completely colourblind. I see all colours but I often cannot differentiate them the way someone else can. So it is hard for me to work with colours. I have no issue with designs in that sense. And also no experience with it.
That said, IMHO, you have two incompatible objectives: be readable or be good looking... You know which I chose.
That's just nonsense. That's a false dichotomy.
Good looking usually implies readable.
The full text is readable; the paint is pretty.
Whatever.
pretty:
You call that pretty?
You did, if I understand your mail.
No, I thought they were all hideous. Only www.wikidot.com (the commercial place for them) is pretty.
1. Create bad contrast with the background, or make the font too small.
But pretty doesn't care about this.
Yes, it does. But whatever. If you want to stay stuck in stuff that is not even REMOTELY attractive, or something another person will want to see, be my guest... but not in my house ;-) :p.
"Now make an account, you turd faced fuck face. You are too stupid to understand otherwise anyway".
Many web sites, not only commercial ones, don't even bother to say what the product is for... a bit like man pages that don't give examples :-(
Yeah. Instant Wikipedia lookup, mostly.
All spam gets blocked by akismet.
Nope. Admin action is needed all the time.
Ehm, well, that's true. My site currently has 76 spam messages in the spam queue, and that is NOT MUCH for the period of time I have been away from it. And ALL of them are in the "spam" queue and not in the "pending" queue, so what I said still holds. There is no reason to deal with it, and if you want, you can empty the entire spam queue in one click. So not "admin action needed all the time" :)! "76 comments permanently deleted". Well, that didn't take long.
Yeah, well, I am a WordPress hacker, so. The only annoyance is that the mods I make get overwritten by updates :p.
This couldn't happen with pmwiki.
You don't have to sell this product to me, you know. Alright, last mail to this list about wikis, unless someone knows a really good provider online.
On 04/01/2016 05:50 AM, Xen wrote:
I am not completely colourblind. I see all colours but I often cannot differentiate them the way someone else can. So it is hard for me to work with colours. I have no issue with designs in that sense.
Of course, many times the site designer chose horrible mixes. For example, white & light blue can be hard to read. One time I was remotely working on a woman's computer. Her desktop colours were so horrible I had to change them to something I could work with.
James Knott schreef op 01-04-16 13:07:
On 04/01/2016 05:50 AM, Xen wrote:
I am not completely colourblind. I see all colours but I often cannot differentiate them the way someone else can. So it is hard for me to work with colours. I have no issue with designs in that sense.
Of course, many times the site designer chose horrible mixes. For example, white & light blue can be hard to read. One time I was remotely working on a woman's computer. Her desktop colours were so horrible I had to change them to something I could work with.
For some reason, that seems to be the truth about any and every browser theme as well ;-). And from my perspective, certain application themes as well, in a certain software environment ;-). User-made themes seem to always make your experience worse, not better. You can scan and peruse hundreds of Firefox themes and not find anything that you can actually stand for longer than 10 minutes. This is my experience with certain window decorations as well. To the point that I start to feel that theming is just a bad idea. Well, the same is not true for WordPress, and there are some good KDE "panel" themes, I guess.
For JDD:
Here is a screenshot of the theme I have slightly customized on DokuWiki: http://www.xen.dds.nl/f/i/screenshots/dokuwiki-begin-snapshot.png
Looks a bit better with the browser address bar though. I can't really get the colours right, but if I did it would be a reasonably nice environment with a sense of a personal feel to it.
The problems I have with DokuWiki are that:
* customizing really requires changing the code, and often requires writing something that could resemble a plugin.
* there are many plugins that all seek to deliver more or less the same functionality, of various types, but none of them is really great.
* there is not a lot of great built-in functionality I can use.
* the barriers to actually changing it meaningfully are therefore rather high, if you only want to create something for yourself, right this instant.
TiddlyWiki, for instance, is much more fully developed and actually has a lot of /libraries/ that you can use to create your own scripts. DokuWiki seems rather badly coded, or at least a bit amateurish, rough, not extremely elegant. If you dig through everything, sure, you can do it, but I am used to WordPress, and there is a big difference here.
I mean, this is generally what the WordPress Codex looks like: http://codex.wordpress.org/Plugin_API/Filter_Reference/root_rewrite_rules
EVERYTHING is documented, and users help with the documentation too. The code is also very elegant, apart from some ignorant coding standards. In WordPress you can find everything just really quickly.
Le 31/03/2016 23:57, Xen a écrit :
For JDD:
Here is a screenshot of the theme I have slightly customized on DokuWiki:
http://www.xen.dds.nl/f/i/screenshots/dokuwiki-begin-snapshot.png
Looks a bit better with the browser address bar though.
I can't really get the colours right, but if I did it would be a reasonably nice environment with a sense of a personal feel to it.
A bit like mine (two clicks away in the options): http://dodin.info/wiki/pmwiki.php?n=Site.StyleOptions?action=set&setcolor=green
In WordPress you can find everything just really quickly.
But it's really huge. What I like in pmwiki is that it's minimalist at first, and you can add what you want after that. Also, the main devs are very friendly and present, and it runs very well on openSUSE (13.1 right now). jdd
jdd schreef op 01-04-16 08:42:
Le 31/03/2016 23:57, Xen a écrit :
For JDD:
Here is a screenshot of the theme I have slightly customized on DokuWiki:
http://www.xen.dds.nl/f/i/screenshots/dokuwiki-begin-snapshot.png
Looks a bit better with the browser address bar though.
I can't really get the colours right, but if I did it would be a reasonably nice environment with a sense of a personal feel to it.
A bit like mine (two clicks away in the options):
http://dodin.info/wiki/pmwiki.php?n=Site.StyleOptions?action=set&setcolor=green
I would pick White and 1024px; that already makes the site look better, yes?
In WordPress you can find everything just really quickly.
But it's really huge.
What I like in pmwiki is that it's minimalist at first, and you can add what you want after that.
Also, the main devs are very friendly and present, and it runs very well on openSUSE (13.1 right now).
I don't care; it is easy to find things (for me). Also, on Linux, searching in files is easy. I just have a script: if I give it a keyword, it outputs a list of files (like grep -r "keyword"). This way, finding stuff in the source is pretty easy (and with vim it is also easy to get there and open it). But okay, let's take this off-list if we need to from now on.
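Xen doesn't show the script, but a helper of that kind is a few lines of shell; 'srcfind' is a hypothetical name:

    #!/bin/sh
    # srcfind: list the files under a directory that contain a keyword
    # usage: srcfind <keyword> [directory]
    keyword=$1
    dir=${2:-.}
    # -r recurses, -l prints only the names of matching files
    grep -rl -- "$keyword" "$dir"

Opening the hits in vim is then just something like: vim $(./srcfind keyword src/).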
On 03/30/2016 07:15 AM, Carlos E. R. wrote:
With Win 95 I typically used a different partition for the Documents folder. Surprisingly, it is not that simple to do with current versions; apparently, you have to do it for each user.
I recall advocating this at various clients for their IT people to do, and yes, it was not as easy as with *NIX, and yes, it had to be done for each user. Later, when Microsoft supported corporate (aka "pro"?) versions and Windows Server (though some clients preferred Samba on Big Iron UNIX for transparency, reliability, and scalability [as well as cost effectiveness]), there were... what was it called, "roaming shares", so people could do what we always did with Sun workstations back in the early 1980s, and log in from anywhere and get our "home" mounted (using NFS) on the particular workstation we were using.
Also interestingly, other Linux distros don't do a separate /home folder, so apparently there are reasons both ways.
There's sense in that if you are using BtrFS (or similar), since there are optimizations that make more sense in one big unified file system. There's also the logic that trying to work out provisioning between what should be the ROOTFS and what should be the HomeFS is a bit crazy. We've seen the idea that an installer might try to make a 10G ROOTFS on a 2T drive and leave the rest for /home! In that world, simply saying "make the whole 2T a BtrFS" and having done with it is quite logical. Current BtrFS, I'm finding, is reliable enough for that. If I were running spare hardware I'd give that a try, but I don't have the time (or inclination) to adequately exercise such an installation.
On 2016-03-30 13:01, Anton Aylward wrote:
On 03/30/2016 07:15 AM, Carlos E. R. wrote:
With Win 95 I typically used a different partition for the Documents folder. Surprisingly, it is not that simple to do with current versions; apparently, you have to do it for each user.
I recall advocating this at various clients for their IT people to do, and yes, it was not as easy as with *NIX, and yes, it had to be done for each user.
Later, when Microsoft supported corporate (aka "pro"?) versions and Windows Server (though some clients preferred Samba on Big Iron UNIX for transparency, reliability, and scalability [as well as cost effectiveness]), there were... what was it called, "roaming shares", so people could do what we always did with Sun workstations back in the early 1980s, and log in from anywhere and get our "home" mounted (using NFS) on the particular workstation we were using.
That worked because the things in your home directory were things that you wanted to share on all machines, like your .bashrc. But the whole model is broken now that platform-specific apps tend to keep their private data in dot directories in your home. So you can't share your home directory across platforms, which then means mounting the shared home somewhere else and symlinking all the locations of the local home with documents etc. to subdirectories of the shared home. It's a complete pain, and it would work much better if platform-specific data were held in a different place, with a symlink from home, e.g.: /home/user/platform-specific-data => /var/home/user/platform-xyz-01 So all my display-related options would be in a local directory on that machine. Then I could go back to NFS-mounting /home directly. Cheers, Dave
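A sketch of the split Dave describes, using his example paths (the NFS server name is a placeholder):

    # local, per-machine storage for the platform-specific dot directories
    mkdir -p /var/home/user/platform-xyz-01
    ln -s /var/home/user/platform-xyz-01 /home/user/platform-specific-data

    # /etc/fstab line to NFS-mount the shared /home itself
    # server:/export/home  /home  nfs  defaults  0  0

With that, only the shareable part of the home travels over NFS; the symlink is shared, but its target is local storage, so each machine keeps its own display-related options.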
On 03/30/2016 09:58 AM, Dave Howorth wrote:
On 2016-03-30 13:01, Anton Aylward wrote:
[...]
That worked because the things in your home directory were things that you wanted to share on all machines, like your .bashrc.
No. That's true, but it's incidental. Yes, the application ran on the remote machine, so the scripts and binaries were all there for it. The point I was trying to make is that with X I can run a dumb X-terminal. I *have*, as I mentioned, run dumb X terminals in the terminal rooms at USENIX and InterOp in the past. No /home on those! The point I was trying to make is that X does not need to transfer a frame buffer.
But the whole model is broken now that platform-specific apps tend to keep their private data in dot directories in your home. So you can't share your home directory across platforms, which then means mounting the shared home somewhere else and symlinking all the locations of the local home with documents etc. to subdirectories of the shared home.
That's true for Windows, but it doesn't matter for the case I was talking about. And it need not matter in general. In the Sun "log in anywhere" case, the 'remote shares' had your ~/home on the server anyway and NFS-mounted it local to your workstation. But again, that's irrelevant; if you used X to log in to the server you would not be running the client applications locally, they would be running on the server, with your dot files local to them on the server.
It's a complete pain, and it would work much better if platform-specific data were held in a different place, with a symlink from home:
See above. The purpose of my post was not to go on about this, but to make it clear that a) X11 is intrinsically a network protocol, and b) Wayland is only a local display protocol and still needs an X "shim" to do network access.
On Wednesday, 2016-03-30 at 14:58 +0100, Dave Howorth wrote:
But the whole model is broken now that platform-specific apps tend to keep their private data in dot directories in your home. So you can't share your home directory across platforms, which then means mounting the shared home somewhere else and symlinking all the locations of the local home with documents etc. to subdirectories of the shared home.
I link the Documents, Videos, Photos folders. If other apps create their own directories and they are large, I have to symlink them, too.
It's a complete pain, and it would work much better if platform-specific data were held in a different place, with a symlink from home:
Quite. -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 03/30/2016 06:58 AM, Dave Howorth wrote:
But the whole model is broken now that platform-specific apps tend to keep their private data in dot directories in your home.
I thought it was more due to the fact that we aren't working with dumb terminals anymore. Roaming around the campus was easy when the only thing that varied was the chair you sat in. -- After all is said and done, more is said than done.
On 03/30/2016 09:28 PM, John Andersen wrote:
Roaming around the campus was easy when the only thing that varied was the chair you sat in.
And finding a free terminal that wasn't broken. ;-)
On Wednesday, 2016-03-30 at 08:01 -0400, Anton Aylward wrote:
There's also the logic that trying to work out provisioning between what should be the ROOTFS and what should be the HomeFS is a bit crazy. We've seen the idea that an installer might try to make a 10G ROOTFS on a 2T drive and leave the rest for /home! In that world, simply saying "make the whole 2T a BtrFS" and having done with it is quite logical. Current BtrFS, I'm finding, is reliable enough for that. If I were running spare hardware I'd give that a try, but I don't have the time (or inclination) to adequately exercise such an installation.
My custom is to first create a small install, say 15 GB, in a single partition, plus swap, and then create one or two installs for real. Or one for real and another for testing the next release (15..20 GB). The first one is for emergencies, a rescue system. -- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
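As a sketch, that scheme could be laid out with sfdisk from a rescue system; apart from the 15 GB rescue install, the sizes are made up, and /dev/sdX is a placeholder (this wipes the disk's partition table):

    sfdisk /dev/sdX <<'EOF'
    # small emergency / rescue install
    ,15GiB,L
    # the install used for real
    ,20GiB,L
    # a second install, for testing the next release
    ,20GiB,L
    # swap
    ,4GiB,S
    EOF

(This assumes a reasonably recent util-linux sfdisk, which accepts GiB size suffixes and '#' comment lines.)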
On Fri, Mar 25, 2016 at 12:51 PM, jdd <jdd@dodin.org> wrote:
Le 25/03/2016 18:09, Chris Murphy a écrit :
Hardware driver and kernel reliability has improved immensely between XP and 10; it's a significantly more stable operating system with far fewer instances of kernel panics / blue screens.
I don't know how you can prove that.
The engineers have said that this was a design goal for a long time; when it was implemented I don't know, and no, I can't prove that the design goal translates into reality. That part is anecdotal, from talking to Windows sysadmins, who have other problems, of course, with Windows 10. But the quality and stability of the OS itself isn't one of them; they've generally been positive about it.
The last few days I had probably 5 Windows 10 crashes, that is, more than one for each hour of use, on my new Windows 10 tablet; but instead of a blue screen, one now gets a crash report that takes a long time before releasing the machine. In this respect I had fewer problems with Windows 7.
But I don't use this system sufficiently to make statistics.
I know nothing about the recent development system of Microsoft, but I guess they may also have an automated system and, for sure, an immense beta tester base (I was one of them in another life :-)
If they have one tiny edge case bug affecting 1%, it affects 2 million users. It's an instantly massive support problem.
"If something goes bad, we'll just tell them to use System Restore" I don't know what.
I have *never*, in more than 20 years of use, seen the Windows repair system repair anything, even problems I could fix myself in minutes. To restore, you have to get a restore disk, and pretty often Windows refuses to make one. Last week, Windows 10 insisted on using a DVD on the same tablet, which of course has no DVD drive (I was happy to have a USB DVD writer). Not possible on an SD card nor a USB stick...
Windows 8 and 10 have the concept of "refresh" and "reset" now. A refresh keeps your data; system settings are returned to defaults; applications from the app store are kept installed; applications installed outside the Microsoft app store are erased. A reset blows away everything, including OS updates; it basically reformats the primary volume and reinstalls from a restore partition on the drive.
That said, I could not restore a Linux system after a crash either (no problem for the data). It was much faster to reinstall the system than to fix the handful of settings needed by the hardware change.
By the way, I didn't read anything about SSDs (see subject), but I noticed on a new install an "ssd" option in fstab for the file systems.
ssd is a Btrfs-specific mount option. It's used automatically if an SSD is detected, so it isn't needed as an explicit mount option unless you want the other ssd allocation algorithm, ssd_spread, or you're using Btrfs on a layer that masks the fact that it's on an SSD. The ssd option isn't the same as discard. -- Chris Murphy
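For illustration, a Btrfs root line in /etc/fstab with these options might look like the following; the UUID is a placeholder, and, as Chris notes, plain 'ssd' is normally auto-detected, so you would only spell it out in the masked-layer case:

    UUID=0123abcd-placeholder  /  btrfs  defaults,ssd         0  0
    # or, to get the alternate allocator instead:
    UUID=0123abcd-placeholder  /  btrfs  defaults,ssd_spread  0  0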
Op 25-3-2016 om 19:51 schreef jdd:
I don't know how you can prove that.
The last few days I had probably 5 Windows 10 crashes, that is, more than one for each hour of use, on my new Windows 10 tablet; but instead of a blue screen, one now gets a crash report that takes a long time before releasing the machine. In this respect I had fewer problems with Windows 7.
But I don't use this system sufficiently to make statistics.
I know nothing about the recent development system of Microsoft, but I guess they may also have an automated system and, for sure, an immense beta tester base (I was one of them in another life :-)
My Windows 8.1 system starts crashing applications when I have less than 60% of memory free, due to failing memory allocations or something of the kind. I have 4 GB of RAM on a 64-bit system. I will have 40% of that (about 1600 MB) available as free or non-dirty buffers, and I will get popups saying I am running out of memory. Then my web browser will fail, probably because of a memory allocation fault. I also have about 500 MB of swap in addition to that memory. My life in XP was entirely without problems all the time. I have reduced the swap because otherwise Windows will start thrashing the disk for no real apparent reason.
In Windows 10, on two different systems, I have experienced corrupted user profiles within weeks of installing. The corruption would cause the Start menu (the Start Screen) to stop opening. This rendered the system almost unusable in terms of opening programs. There was no fix for the situation. The solution seems to be to back up your user profile constantly in case it happens. User files are not covered by System Restore either, because they are document files, or something of the kind. (A system restore will preserve your documents.)
I have not experienced any other problems in W10, apart from not being able to log in, or log on as a different user, at the start or at all, as well as extreme slowness on an older laptop. Actually, that is a lot.
You, my friend, are just being ridiculously offensive, and I will have none of that. If this is the way the list goes, I am no longer a part of it, and I expect someone here to condemn it, or at least say something about it. I have no need to listen to such nonsense. Op 25-3-2016 om 18:09 schreef Chris Murphy:
I have no problem with opinions, even excessively verbose opinions that constitute bombarding the list with such a superfluous volume of material that no reasonable person would read it.
That is nothing but insulting me.
Many people confuse correlation with causation. But you've shown neither. Your entire argument rests on "saying ridiculous things makes them true."
These are nothing but unsubstantiated statements, and it is nothing but insulting me.
You didn't actually qualify how Windows 10 quality or reliability is lower than Windows XP, you haven't made any argument that ties it to the ability to revert, which has always been possible in one form or another.
Hardware driver and kernel reliability has improved immensely between XP and 10; it's a significantly more stable operating system with far fewer instances of kernel panics / blue screens. I've seen no change in this regard on OS X, which is also contrary to your assertions. For your hypothesis to be true, we'd expect OS X quality to increase due to the lack of reversion. Both OS X and Windows have suffered regressions in bundled application software quality, if the mood and assertions of various forums are trustworthy (they're not scientific samples, so in no way can this be an impartial analysis).

You are confusing so many things it is not even worthwhile to explain them. You clearly confuse giving an opinion, and explaining something about it, with the need to give proof or something similar in order to convince a person who does not need to be convinced. If I explain something to you, that is a gift, not a requirement. I have no need whatsoever to qualify and quantify everything I say to you. You do not need to be convinced. I offered an opinion. You do not know what they are. The only interest you seem to have is breaking down everything I say. Then leave here. Go back to kindergarten, maybe?

You seem to think I have created or asserted a direct and 100% correlation between thing A and thing B, and that when anything shows it is not 100%, it means the whole thing is junk. That is like saying gravity is not real because birds can fly. But instead of explaining why they can fly, you just assert that they can, and now put some burden of proof on me to show that gravity is still real. A real man would say "granted, ..., although". You grant me nothing and only come up with ridiculous reasoning, of the kind you accuse me of yourself. I have never said a lack of reversion would increase software quality. You are so riled up in your position and argument that you do not even try to understand what I am saying, BECAUSE YOU DON'T WANT IT TO BE TRUE. Such an asshole, really. For real.

Well, I accept and am grateful that you are at least trying to make a logical argument, because it is the first time you've done it. That feels like being taken seriously in that sense, so thank you. But obviously, if you are going to be making such statements, you have to split everything out. Is that worth the trouble?

"For your hypothesis to be true, we'd expect OS X quality to increase due to the lack of reversion." OS X quality is already very high, and there is no reversion. Obviously, they could be correlated, for the obvious reason that a company that introduces reversion must have a reason for doing that. Not saying Apple would; Apple is not always that bright. You attack so many straw men it is just silly. But this seems to be the case with everyone that doesn't understand something: they attack a straw man. I have done it myself, probably numerous times.

"For your hypothesis to be true, we'd expect OS X quality to increase due to the lack of reversion." Only if it was the only contributing factor. OS X (Apple) already has a very high demand for quality due to its business model, and the scope of its system is really a lot smaller than that of, say, MS Windows. The scope of its hardware is smaller. They purposefully limit the number of things that can go wrong. You know this. In general, you could expect ANY software system to increase in quality. Isn't that what it's supposed to do? Having no reversion may put high pressure on OS X's developers. They may not like that. You do not recognise that I also make statements in favour of reversion. Because you've got to stay real.

I do not need to prove anything to you. If you insist on remaining in denial about a very simple relationship that is obvious and worth knowing about, you know, suit yourself. What do I care, really? I'm not you. I was stating an opinion; you attacked it, because you don't like opinions being expressed that don't agree with your own, apparently. The same could be said of me, at some points. I concur, I digress.

"For your hypothesis to be true, we'd expect OS X quality to increase due to the lack of reversion." No, it wouldn't.
We would expect there to be high pressure on developers to create perfect, flawless, never-failing systems, AND THERE IS. At least you could assume there would be. I did read a book about Apple. NO, I'M NOT GOING TO REPEAT EVERYTHING THE BOOK SAID just to prove my 'point'. The Apple folks put a 2.5" HDD in the Apple Mini because the enclosure they had chosen was a millimeter too small for a 3.5" HDD. There are better reasons for choosing a 2.5" HDD in such a small computer, but that is the actual reason they used. Form factor came first and dominated everything. Had the thing been a millimeter larger, they would probably have gone with 3.5". Just saying Apple is not always rational, or anything of the kind ;-).
There can be many contributing factors to software development. I posited a relationship. The relationship is not the be-all and end-all, and it is not going to explain everything including the disruption and resurrection of the universe, okay? The relationship was about the pressure there would be on developers to ensure the system does not fail. If failure is less of an issue, the idea posited that the pressure on developers would henceforth become less. Among many other things, but that's the gist of it for now.
Granted, there may be many more reasons for Microsoft's recent change in how they develop their software. But they've also gone in the direction of Linux ;-). The release cycle has become much shorter (faster), and current-day Windows versions are released with the idea of "We'll fix it later". If that doesn't sound like "If something goes bad, we'll just tell them to use System Restore", I don't know what does. You don't know. That much is certain. Yes, another plain insult.
Of course, I will accept that System Restore plays only a small part in the overall reasons why Microsoft does anything. It could, however, play a much larger part in the reasons for Linux vendors or operators doing what they do. Microsoft has customers to please. Having a System Restore is obviously a great boon for customer support. Customer support in general has this staple of general fixes and strategies they ask any person having any problem to use. The lack of quality in today's Windows is due more to their tablet strategy and whatever else they have going on in their heads (the cloud, etc.) than anything to do with System Restore, of course. For Windows, this feature is just a small thing. Right. For Linux it is something bigger. Windows updates fail too, and it reverts them. They fail a lot, I must say; I have had at least 3 instances in the last few years of my computer or device reverting an update. And if you turn your computer off while it is doing that? You're screwed. System Restore has a much smaller impact on Windows, and for Apple it would be a relief (I mean the reverting of updates). Reverting an actual update is in essence not the same as using a snapshot. A /reversible process/ is not the same as /zapping back to a previous state/.
] And logically, it is a pretty safe argument that when a system restore is in place, the requirement for software to always function well becomes less. No, it's not. You've made no connection whatsoever. It's just an assumption that you expect everyone to accept on the face of it. You've provided no mechanism for how reversion alters a totally orthogonal software quality assessment. Your style of argument is, "the sky is blue therefore clouds are white", and you've put a layer of "DUH!" on top of that. It's a stupid argument and no one should buy it.
You seem to have an agenda here. "No one should buy it." What do you care about what anyone else does? No connection? Sure, you can deny anything. You know, all your life, for the rest of it, if you want or please. I have clearly stipulated a connection. If you were in it for debate or reasoning, you would assault the very words I have written, instead of making blanket statements that disqualify everything in one heap, in one go, by refusing to go into the actual content. Actually, that is similar to using a snapshot: a snapshot does not need to know about semantics. It does not need to know about the content of files. A snapshot would "refute an argument" by simply throwing it away, instead of tackling it head on. You mention all the things I supposedly didn't do, but you are afraid to speak of the things I did do.
It is a pretty simple argument, you know. It's pretty simply unconvincing. And I'm not reading any more of it; I'd rather watch water boil than read any more of this.
Pathetic man, really.
On Thursday, 2016-03-24 at 21:27 -0600, Chris Murphy wrote:
On Thu, Mar 24, 2016 at 6:35 PM, Xen <> wrote:
...
So people are like, oh we can't really make the SYSTEM function well, we will just ensure that any big error can easily be recovered from by going back in time.
Versus starting over with a clean installation? That is the original rollback.
So that's just what I am saying, that snapshotting is in essence not a satisfactory thing and just a roundabout way to make a system function that is otherwise horribly broken. Instead of fixing the system, you ensure that it can't hurt you anymore - so bad.
Hmm, broken state or reinstall. You get away with this when the testing is monumental, like what Apple does; they have no reversion options for updates. With OS X, you update to a sub-version, and that's it, you can't undo it. But they also do a metric ton of testing. It's so complicated now that they have even expanded their pool to public beta testers. For iOS, a revert isn't even possible. You can only reset, which obliterates apps, settings and user data, but not the most recent update you applied.
Nah, I'll take a snapshot and wait a week, thanks.
Me too. I have seen software update rollbacks in very expensive UNIX systems (in the million-dollar range), with the software updates very thoroughly tested, because the vendor was fined if an update failed and caused even minor downtime or some loss of functionality. Even so, they had rollbacks. And that started around 1990 or earlier.
I could not look at the details of how it was done, but I think they made links, storing both the old and the new versions in different directories, and made links for the individual files replaced. Depending on how things went out in the field, they deleted the new or the old copy, adjusting the links as necessary. Of course, the number of files that had to be touched was limited, compared to a modern Linux.
For more complex updates, they could separate one side of the mirrored disks and do the update on one side of the system mirror only. This was done, I believe, for full system upgrades, but might be done for any procedure.
So yes, I like the feature. It doesn't mean I'll use btrfs yet... but I'm interested ;-)
-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
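A toy sketch of that link-based scheme, with hypothetical version directories; the live path is only ever a symlink, so switching it either way is a single command:

    # old and new versions kept side by side
    mkdir -p /opt/app-1.0 /opt/app-1.1
    # point the live name at the new version
    ln -sfn /opt/app-1.1 /opt/app
    # rollback: re-point the link at the old copy
    ln -sfn /opt/app-1.0 /opt/app

(The -n flag stops ln from following an existing link, so the link itself is replaced instead of a new link being created inside the target directory.)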
Op 25-3-2016 om 11:02 schreef Carlos E. R.:
Me too.
I have seen software update rollbacks in very expensive UNIX systems (in the million-dollar range), with the software updates very thoroughly tested, because the vendor was fined if an update failed and caused even minor downtime or some loss of functionality. Even so, they had rollbacks. And that started around 1990 or earlier.
I could not look at the details of how it was done, but I think they made links, storing both the old and the new versions in different directories, and made links for the individual files replaced. Depending on how things went out in the field, they deleted the new or the old copy, adjusting the links as necessary.
Of course, the number of files that had to be touched was limited, compared to a modern Linux.
For more complex updates, they could separate one side of the mirrored disks and do the update on one side of the system mirror only. This was done, I believe, for full system upgrades, but might be done for any procedure.
The difference between a rollback and a backup, in a sense, is that a rollback is instant while a backup takes time. But again, I prefer good backup and restore solutions to be in place for myself. Why? A rollback will only help you in your current system. When you have a good backup-and-restore procedure, you become much more powerful. It is just a lot easier to achieve the rollback than it is to achieve the backup. But again, I feel that they compete with each other, and with more people using rollbacks, fewer will be interested in having good backup solutions. I feel this to such an extent that I don't want any energy to go into the rollback, and all of it to go into the backup. Well, that's just me, but I love it that way.
At the same time, I was using snapshots to create the backup. But that is just a shortcut, really, that I don't like. The shortcut involves not having to know your system. I get the advantages for an always-on system. I get the superior ability to do that stuff without interrupting a system. I get that. But I don't like it.
Well, that is just the 'opinion' I had wanted to share. That is just my perspective, and what my belly tells me, really. "Listen to your tummy. The tummy knows." It's just a perspective, right? I wanted you to have it ;-). To see it for a change ;-). Right.
So yes, I like the feature. It doesn't mean I'll use btrfs yet... but I'm interested ;-)
For me, the reason I have been toying with snapshots is LVM. But I never did like it.
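For reference, the LVM-style snapshot Xen is alluding to looks roughly like this; the volume group and LV names are made up for the example:

  # Create a 2G copy-on-write snapshot of an existing logical volume
  # (vg_data/lv_home are hypothetical names).
  lvcreate -s -L 2G -n home_snap /dev/vg_data/lv_home

  # Mount it read-only to recover files from it ...
  mount -o ro /dev/vg_data/home_snap /mnt/snap

  # ... or merge it back to roll the LV back to the snapshot state
  # (the merge completes once the volumes are closed, or on next activation):
  lvconvert --merge vg_data/home_snap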
On 03/25/2016 06:02 AM, Carlos E. R. wrote:
Nah, I'll take a snapshot and wait a week, thanks. Me too.
I have seen software update rollbacks in very expensive unix systems (in the million dollar range), with the updates very thoroughly tested, because the vendor was fined if an update failed and caused even minor downtime or some loss of functionality. Even so, they had rollbacks. And that started around 1990 or earlier.
Indeed. Rollback was there on mainframes - aka REAL COMPUTERS - a long, long time ago. When the Big Iron companies moved to their versions of UNIX, their users, moving from room-filling mainframes to something that would fit in a closet and needed no raised floor or HVAC, expected the basic principles of operations and system management to continue, with the same degree of reliability, even if it was implemented differently and the OS had a different name. So yes, those big, expensive UNIX systems were as Carlos said, and they did have rollback.

I rather freaked out when I went to work as manager at a data centre for a high-street chain of stores; they didn't have a replicated test machine or database. Development was done on the live machine with live data. The VPIT was new and he too freaked when I pointed this out to him. (IT also covered the call centre and HR, which weren't in my purview but were in his.)

The developers were all using the root login but no RCS system, and every week a service tech came by with a CD of 'patches' that were applied. Actually, no-one even knew he was doing that; I freaked when I found him at the console one day. No activity was being logged. Nothing was being documented.

So many changes were implemented to bring operations into accord with good practices. It upset the developers, it upset the company with the service contract, but as this thread is making clear, it avoided issues of downtime and facilitated planning of hardware maintenance (such as microcode updates), disk balancing and preventive maintenance. We, the VPIT and myself, had to wave the CMM principles - out of chaos comes order - and show that this contributed to planning and budgeting. Yes, downtime would be a million dollar issue.

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On Friday, 2016-03-25 at 00:39 +0100, Richard Brown wrote:
On 24 March 2016 at 22:32, Carlos E. R. <robin.listas@telefonica.net> wrote:
That's what I thought btrfs was for... on /home it would be very useful. But snapshots are timed events, so they might not catch this.
...
snapper has a number of other options to trigger based on user activity, such as pam_snapper http://snapper.io/manpages/pam_snapper.html to create a snapshot for each user login, and with non-root users you can easily set up snapper to do whatever you want with your home directory https://lizards.opensuse.org/2012/10/16/snapper-for-everyone/
Conceptually, it's as simple as setting up a subvolume in btrfs, creating a snapper config for that subvolume, and then telling snapper to do its thing whenever you want it to take a snapshot
...
If you want to be particularly careful with a specific file, something like inotify could be used to make sure that snapper always takes a snapshot whenever a certain file is changed (note: if the file is changed a lot, you might need to tune snapper's cleanup routines accordingly ;))
My idea is to have a snapshot taken each time a file in, say, ~/Documents/*, is fully saved, getting behaviour equivalent to what VAX/VMS did: file.txt;1, file.txt;2, file.txt;3... Not partially saved or temporary files, but a history of (binary) saved files.

-- Cheers, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
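A rough sketch of the combination Richard and Carlos are describing, assuming the inotify-tools package and a btrfs /home; the subvolume path and the config name "docs" are hypothetical:

  # One-time setup: a dedicated subvolume with its own snapper config.
  btrfs subvolume create /home/tom/Documents
  snapper -c docs create-config /home/tom/Documents

  # Then snapshot every completed save, approximating VMS-style
  # file versioning (file.txt;1, file.txt;2, ...).
  inotifywait -m -e close_write --format '%w%f' /home/tom/Documents |
  while read -r f; do
      snapper -c docs create --description "saved: $f"
  done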
On 03/24/2016 02:07 PM, Chris Murphy wrote:
OK, but if using dmsetup instead of the lvm tools gives more clarity and openness, do you do it that way? Even more clear and open is using a hex editor to directly modify the metadata in the hard drive sectors.
As opposed to using butterflies, you mean? http://imgs.xkcd.com/comics/real_programmers.png
What I'm getting at is that there's also an attribute of tediousness. And while I recognize why there are separate steps for pvcreate, vgcreate, and lvcreate, I find it tedious most of the time.
I'm of the old school of "each does one thing"; I don't like Swiss army knife programs. I've had to maintain them, and while I normally don't make use of expletives, those are the kinds of programs that drive me to do so.

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
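For readers who haven't used them, the three separate steps Chris mentions look like this; the device and volume names are examples only:

  pvcreate /dev/sdb1                      # mark a partition as an LVM physical volume
  vgcreate vg_data /dev/sdb1              # build a volume group from it
  lvcreate -L 20G -n lv_scratch vg_data   # carve a logical volume out of the group
  mkfs.ext4 /dev/vg_data/lv_scratch       # put a file system on the new LV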
On 03/23/2016 05:30 PM, Dave Howorth wrote:
I believe the default assigns more than 10GB to the root partition.
I don't know whether it's a default or the OP chose it, but not allowing enough space for the root partition has been a classic mistake since long before Linux existed. Hence Anton's and others' love of LVM, since it could cope.
Indeed! I recall, painfully, installing UNIX/386 on a 10G DRIVE a few times. I installed over and over, one big partition, as much as I could, and ran 'du' to find out what parts of the tree took how much space. Packaging in those days wasn't as precise or granular as it is now :-( What could I prune so I could get the most space free for the database? *sigh*

How I came to hate the term 'provisioning'! See 1b at http://www.thefreedictionary.com/provisioning

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
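The modern equivalent of that 'du' hunt is a one-liner; the path is just an example, and the --max-depth option assumes GNU du:

  # Per-directory usage one level below /usr, largest shown last;
  # -x keeps du from crossing into other file systems.
  du -x --max-depth=1 /usr | sort -n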
On 24/03/2016 00:32, Anton Aylward wrote:
I recall, painfully, installing UNIX/386 on a 10G DRIVE
10 *G*, are you sure? Some time ago I used a Sun pizza box with only a 1GB disk and OpenBSD, without problem. I don't know what unix you used, but any BSD was much smaller than Debian at the time.

But some time before that I also used 10 *MB* hard drives, and there it was pretty hard to have anything other than plain DOS :-)

jdd
On 03/24/2016 03:56 AM, jdd wrote:
On 24/03/2016 00:32, Anton Aylward wrote:
I recall, painfully, installing UNIX/386 on a 10G DRIVE
10 *G*, are you sure? Some time ago I used a Sun pizza box with only a 1GB disk and OpenBSD, without problem. I don't know what unix you used, but any BSD was much smaller than Debian at the time
But some time before that I also used 10 *MB* hard drives, and there it was pretty hard to have anything other than plain DOS :-)
Yes. I put UNIX V6 and V7 on a PDP-11 with RL-01 and RL-02 drives and disk packs; the RL-02K disk cartridge held just over 10 Mbytes. A 'generation' later I had the unfortunate experience of working with HCR's XENIX for the 386 on a box with a custom hardware MMU. Again, a 10M disk. Then later, having to shoehorn SCO UNIX onto a Data General PC with a 10G disk. That machine actually caught fire and destroyed itself.

A few more attempts at SCO on PC/386 machines followed before I convinced clients that it was worth investing in a 20G or 30G drive. The additional cost in hardware was less than the cost of my time battling with provisioning. By then there was a LOT more in SCO's UNIX than there was in V6 UNIX for the old "11".

-- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
participants (15)
- Andrei Borzenkov
- Anton Aylward
- Carlos E. R.
- Chris Murphy
- Dave Howorth
- Dave Howorth
- James Knott
- jdd
- John Andersen
- Richard Brown
- Tom Kacvinsky
- Uzair Shamim
- Uzair Shamim
- Xen
- Yamaban