It seems you have a problem with people giving their opinion.

On 25-3-2016 at 04:27, Chris Murphy wrote:
> On Thu, Mar 24, 2016 at 6:35 PM, Xen <list@xenhideout.nl> wrote:
>> And I don't like that model. It gives incentives for bad software
>> products because a disaster recovery mechanism is in place. You can
>> easily see how much shoddier Windows has become since, for example,
>> Windows XP/Windows 2000.
> I don't see at all how 10 is more shoddy than XP, and seeing as all of
> these versions have the same ridiculous update mechanism it doesn't
> seem related to the quality or reliability of the OS.
Windows' update mechanism (I take it you mean Windows Update) has nothing to do with my argument: it is not part of the disaster recovery story, nor does the update mechanism say everything about the OS. So your argument doesn't follow.

The shoddiness of the OS (which you contest, with no good reason given) and its evolution was posited as a result of not having to make good software because you can revert. Granted, there may be many more reasons for Microsoft's recent change in how they develop their software. But they've also gone in the direction of Linux ;-). The release cycle has become much shorter (faster), and current-day Windows versions are released with the idea of "we'll fix it later". If that doesn't sound like "if something goes bad, we'll just tell them to use System Restore", I don't know what does.

From my recollection Windows has gained more recovery options (in the boot environment of the installer), but I don't know for certain. They seem to depend on automatic recovery options a lot. My experience, though, has changed vastly since XP; I don't know about yours. And the update mechanism did change: since Windows 8 (or 7) the system installs updates during shutdown and boot-up, which it didn't do in XP.

So I don't really know what your argument is based on, but from my perspective Microsoft Windows has come to rely more on the System Restore functionality now. And logically, it is a pretty safe argument that when a system restore is in place, the requirement that software always function well becomes less pressing. It is a pretty simple argument, you know.
>> So people are like: oh, we can't really make the SYSTEM function
>> well, we will just ensure that any big error can easily be recovered
>> from by going back in time.
> Versus starting over with a clean installation? That is the original
> rollback.
What are you stating? It is pretty clear that if my argument is sound, an expensive rollback gives more incentive to write good software, and an inexpensive one correspondingly gives less incentive to write quality software, because the pain of messing up becomes less. Let me spell it out for you, then.

Cost of system failure is X. Cost of ensuring proper operation is Y. Cost of rollback is Z. Suppose X is a constant, Z is a variable depending on e.g. the availability of backups or snapshots, and Y is a variable reflecting how good something has to be. Then with high Z, Y can also be higher, because Z and Y are both costs in avoiding X. With low Z, the ease of avoiding X no longer warrants a high expenditure on Y: even though with low Y the risk of X goes up (the chance goes up), the actual impact goes down, because you can recover more quickly.

Therefore, risk is really the calculation

  R = P * X

where P is the chance of X happening and X the cost of it happening. P might at first be in an inverse relationship with Y, so we could suggest that

  P ~ 1/Y

in terms of the actual expenditure of Y we make: spend more on Y and P goes down; spend less on Y and P goes up. "Chance" here relates to the frequency of disruptive events. But the actual COST of a disruptive event diminishes greatly when recovery is easily done. More disruption but faster recovery comes out to about the same thing. So actually X is not constant; we could say it is related to Z, perhaps even linearly:

  X ~ Z

The cost of disaster is really determined by the cost of rollback, and with easier rollback RISK goes down: lower Z means lower X, the same P, hence lower P*X, hence lower R. This is intuitive and logical: easy rollback, lower risk.
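The relations above can be made concrete with a small numerical sketch (a toy model; the functions and all numbers are my own illustrative assumptions, not measurements):

```python
# Toy model of the risk argument: R = P * X, with the assumed forms
# P ~ 1/Y (robustness spending lowers failure chance) and X ~ Z
# (cheap rollback makes failure cheap).

def risk(y_robustness: float, z_rollback: float) -> float:
    """R = P * X with P = 1/Y(r) and X = Z."""
    p = 1.0 / y_robustness   # more spent on robustness -> lower chance
    x = z_rollback           # cheaper rollback -> cheaper failure
    return p * x

# Same robustness budget, cheaper rollback: risk drops sharply.
assert risk(8.0, 64.0) == 8.0     # expensive rollback
assert risk(8.0, 1.0) == 0.125    # cheap rollback

# With rollback that cheap, even slashing robustness spending to 1/8th
# leaves risk (1.0) far below the expensive-rollback world's 8.0 --
# which is exactly the incentive problem being described.
assert risk(1.0, 1.0) == 1.0
```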
But the expenditure on Y was warranted by the importance of a functioning system and was always spent to GET this functioning system, and this yields us some benefit B; the ultimate benefit is then

  B(net) = B - R

We have a certain benefit, but it gets reduced by failure. Y may be related to B insofar as Y is meant for introducing features and so on: core functionality, but not core reliability, which is like another dimension to it, on another axis. So we can split it up into a Y(functionality) and a Y(robustness). Functionality always needs to be there, and I don't think it takes up the lion's share of development.

However we look at it, Y(robustness) is warranted by a need to reduce the cost of failure. If there were no cost to failure, there would be no need for robustness at all: apparently either we don't need it to keep working, or we don't lose money (time, energy, ...) when it fails. So if the call to spend on Y(robustness) is warranted by the risk R of system failure, then a lower risk calls for less money to be spent on robustness. It is simply less needed. You can get the same ultimate benefit with less money, because recovery is so easy that it doesn't matter if something fucks up really badly, on a regular basis:

  B(net) = B - P*X

If we have the same amount of money available and we need to spend less on Y(robustness), more is available for Y(functionality), which means B goes up; we get a sort of runaway system in which spending on robustness (to the extent that recovery still has a cost) can no longer be warranted, because with the same money we can also get more functionality! (Just assuming these are the only two dimensions.) So:

  M = money available = Y(r) + Y(f)
  B is a function of Y(f)
  P is a function of Y(r)
  X is a function of Z

where Y is expenditure, Z is the cost of recovery, X is the cost of failure, and P is the chance of failure. That means the ultimate net benefit is a function of Y(f), Y(r), and Z, in some way.
Such that we could simply suggest:

  B(net) = a*Y(f) - (b/Y(r)) * c*Z

where a, b and c are just some constants, and we assume linearity everywhere. Disregarding the constants:

  B(net) = Y(f) - (1/Y(r)) * Z = Y(f) - Z/Y(r)

Lower cost of recovery, higher benefit. Higher expenditure on robustness, higher benefit. Higher expenditure on functionality, higher benefit. BUT total expenditure on development is Y(f) + Y(r). So there are two equations:

  B(net) = Y(f) - Z/Y(r)
  M = Y(f) + Y(r), or Y(r) = M - Y(f)

Of course this would really be a dynamic system with differential equations, but substituting M - Y(f) for Y(r) yields:

  B(net) = Y(f) - Z / (M - Y(f))

and the other way around:

  B(net) = M - Y(r) - Z/Y(r)

Discounting M now:

  B(net) = -( Y(r) + Z/Y(r) )

As Y(r) goes up, benefit goes down (because Y(f) goes down), but it also goes up, because the cost of failure goes down. However, I think it would be easy to prove (if you had full data and real equations on this; not sure I could do it myself) that if Z is very low, the mitigating impact of 1/Y(r) becomes less important and the detrimental effect of Y(r) becomes more prevalent: you spend less on features and more on robustness while it has no direct economic benefit, because disaster recovery is so cheap.

Now, the more fine-grained this thing is, the easier it will probably be to recover from something. But I conjecture that fine-grained control could actually increase the enjoyment of the system to such an extent that it becomes a feature in itself. Because it becomes like a version control system, right? It becomes a versioning system: it basically means that all files are getting versioned. Fine-grained snapshotting is actually a crude but effective, and perhaps very fun and pleasant, VCS. When THAT happens, resources are actually freed once more for other stuff. It could empower people so much that it is a form of robustness in itself as well.
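Under these linearity assumptions the trade-off even has a closed-form optimum. A small sketch (my own illustration of the equation B(net) = M - Y(r) - Z/Y(r), with invented numbers) shows that as Z falls, the benefit-maximizing robustness spend falls with it:

```python
import math

# The model's last equation: B(net) = M - Y(r) - Z / Y(r), where M is
# the total budget, Y(r) robustness spending, Z the rollback cost.
# All concrete numbers below are illustrative assumptions.

def b_net(m: float, y_r: float, z: float) -> float:
    return m - y_r - z / y_r

# Setting the derivative -1 + Z / Y(r)**2 to zero gives the spend that
# maximizes B(net): Y(r) = sqrt(Z).
def optimal_y_r(z: float) -> float:
    return math.sqrt(z)

# As rollback gets cheaper, the rational robustness spend shrinks:
assert optimal_y_r(64.0) == 8.0
assert optimal_y_r(4.0) == 2.0
assert optimal_y_r(0.0625) == 0.25

# Sanity check that sqrt(Z) really beats nearby spends (M=100, Z=64):
best = b_net(100.0, 8.0, 64.0)          # 100 - 8 - 8 = 84
assert best > b_net(100.0, 4.0, 64.0)   # 100 - 4 - 16 = 80
assert best > b_net(100.0, 16.0, 64.0)  # 100 - 16 - 4 = 80
```

So in this toy version of the argument, cheap rollback doesn't just permit skimping on robustness; it makes skimping the economically rational choice.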
No longer disaster recovery, but a robust system. And what I mean is that the joy of that could prompt people to become more productive on their Linux systems, also enhancing development and creating the software quality that I want so much. Because the more enjoyable Linux is, the more enjoyable it can become. Right now we're still stuck in something that really slows down development speed and pace. A really good fine-grained filesystem-level versioning system could actually make it bliss to use in that sense (if the interface to it were any good, i.e. integrated into Dolphin etc.).

But apart from that conjectured benefit, the relationship

  B(net) = M - Y(r) - Z/Y(r)

says that, until fine-grained versioning makes a difference (and you'd really need file-level control for that, and very regular micro-snapshots, etc.), at low Z the negative term Y(r) becomes much more important than the mitigating term 1/Y(r), and spending money on robustness would become (or already be) a DETRIMENTAL factor in the total net benefit of your system.

But the thing really gets more interesting, because a very low-cost (or rather, high-functionality) recovery system is in itself a boon that can speed up development of all facets of a system, including robustness. Because the more you are at ease, the more space you have in yourself to look at the details. So from this I conclude two things:
- if a recovery system is only used for full system recovery, it will lessen the incentive to make quality software;
- if a recovery system is fine-grained enough and usable enough, it will heighten the expediency of development.
>> So that's just what I am saying: snapshotting is in essence not a
>> satisfactory thing, just a roundabout way to make a system function
>> that is otherwise horribly broken. Instead of fixing the system, you
>> ensure that it can't hurt you anymore - so bad.
> Hmm, broken state or reinstall. You get away with this when the
> testing is monumental, like what Apple does, who have no reversion
> options for updates. On OS X, you update to a sub-version, that's it,
> you can't undo it. But they also do a metric ton of testing. It's so
> complicated now that they have even expanded their pool to public beta
> testers. For iOS, a revert isn't even possible. You can only reset,
> which obliterates apps, settings and user data, but not the most
> recent update you applied.
You mean that when software is very high quality, you can get away with having no recovery. That's what I'm saying, but in reverse: if you have good recovery, you can also get away with lower quality. And that is what it does.

So I'm not saying good recovery is bad given the current status quo of the system. I was talking about the future. I'm not saying: throw away what you have. I'm saying: don't make it your holy boon, because you'll lose the appetite for anything else. A LOT in the Linux world is focused on BTRFS now, from what I gather. But if you make that the height and pinnacle of your development, and start to develop all kinds of systems around it, I see no benefit in that. Because you're starting to identify with a thing that doesn't do anything useful in itself, except that if something breaks you can at least always go back. The focus should be on functionality or benefit foremost, not on the means to ensure that you don't lose it. Because if you have nothing, not losing it doesn't make a difference either.

So I would just like to say: make sure your updates DO in fact not break systems. If testing is a problem, don't update kernels constantly. Don't push updates that can break things or are uncertain. Focus on improving the robustness, stability, resilience and splendor of the boot process. I think Linux has a LOT of ground to gain there. It could be made so much simpler and easier to understand, and (also because of that) more resilient to failure. If you have a system in place that pretty much ensures nothing goes wrong (and is also very configurable), then the nerves go out of it, a lot of time is no longer wasted on fixing those issues, and you would even need to do less testing as well. DO try to create perfection; don't just rely on fail-safe mechanisms on the assumption that things are inevitably going to break.
> Nah, I'll take a snapshot and wait a week, thanks.
>
> Another way forward is Fedora's atomic/rpm-ostree project, and CoreOS
> has a similar strategy. These are specifically versioned trees which
> have specific binaries in them. Anyone who deploys a particular tree
> version has identical system binaries to anyone else with that tree
> version; compare the very non-deterministic situation we have with
> package-managed systems.
Reminds me of something... reminds me of Git, lol. Yes, like I said: versioning. I think versioning is a great boon to any system. Right now I personally have two issues:
- I don't like to use Git for the majority of my files
- I don't have anything else ;-)
If you really had a GOOD versioning system that you could use for EVERYTHING you wanted, including remote storage, that would be awesome, right? I guess I like the tree system better, as long as you can still install what you want too.
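The tree model Chris describes does work a lot like Git's object store: a tree's version is a hash of its contents, so two deployments with the same tree version have bit-identical binaries. A minimal sketch of that idea (my own illustration of content addressing, not rpm-ostree's actual format):

```python
import hashlib

# Content-addressed "tree" versioning, in miniature: the version ID is
# a hash over all file names and contents, so identical trees always
# get identical IDs -- the determinism property rpm-ostree is after.
def tree_id(files: dict[str, bytes]) -> str:
    h = hashlib.sha256()
    for name in sorted(files):   # sort so insertion order is irrelevant
        h.update(name.encode())
        h.update(files[name])
    return h.hexdigest()[:12]

a = tree_id({"/usr/bin/sh": b"binary-v1", "/etc/os-release": b"42.1"})
b = tree_id({"/etc/os-release": b"42.1", "/usr/bin/sh": b"binary-v1"})
c = tree_id({"/usr/bin/sh": b"binary-v2", "/etc/os-release": b"42.1"})
assert a == b   # same contents -> same tree version, on any machine
assert a != c   # any changed binary -> a new tree version
```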
>> You mess up? Go back in time, it is easier than actually trying to
>> solve a problem.
> Sorry, lame and unconvincing argument against snapshots and rollbacks.
> Your method basically depends on the user getting a broken system,
> somehow communicating their misery to developers, who then do a better
> job. Users getting mad causes software quality to improve? That's not
> how things work.
That is just the fallacy of your open-source mindset. If you created good software in the first place, there wouldn't be broken systems all the time. So basically you're saying: nothing will work anyway, so just empower the users to recover from all of the vary and manious (many and various ;-)) errors that are going to consistently keep happening.

You're also basically equating madness with feedback, or feedback with madness. If I say "thing A and B don't work, and C just got worse", that is not madness, that is information. Besides, the system you describe is already in place: it's called a bug tracker.

Genuine listening by developers to users has never really happened yet in the world of open-source development, though. In general users are seen as co-workers and participants in the development of the system. Hence, they are seen as someone having a duty, not someone having a right. Or they are seen as someone having a right that can only result from performing their duty. This implies that using the software is not a good enough reason to have your voice heard. That in turn implies that reports on the congruence between what the system does and what it should do are considered rather unimportant. This is a loss of feedback, and hence of information that would otherwise inform and inspire the development process. I say the system and the cycle are bugged and not operating very well.

If users are only seen as "those people who complain", as you just described them, then you're not really going to bother with them, are you? And if they say "well, I don't like this", you go "well, it's your fault". The user is always wrong. One of Linux's founding principles ;-). It's always their fault, because they should always have put in more effort to use the system right. No amount of effort is ever enough, because theoretically every problem in existence can easily be solved by just throwing infinite amounts of man-hours at it.
If a minor feature doesn't work, people say (in a certain sense): just learn this language, and this system, this debugging tool, this profiler, and then, after you've spent 20,000 hours learning all of that, you will be able to fix that minor error you just reported. Good luck! That's really a way of mocking someone, you know.

But anyway. I don't see how users *not* getting mad causes software to improve either, if there is no incentive to do it. And I'm not saying anger should be the reason to change everything, but it can be a good reason to change something. Your customers (users) not getting their work done should be the PRIME reason to fix or improve something. The reality of their experience should be what you are after in the first place. If you don't care about that experience, then what the fuck are you developing for? Madness only arises because people feel you have taken up a responsibility you are not living up to, or are not acknowledging that you have it. Madness only arises because people are not getting heard. If you do listen to your users, provide easy ways for them to give feedback, listen to complaints, etcetera, people don't stay mad. People usually just want their issue to be understood, because being understood means it is going to get worked on without them having to do anything else at all.

I once bought a piece of software, then debated with the author about why it sucked. He said: I don't agree with everything, but what you just said struck a personal chord with me. I left it at that, knowing he would do the right thing, and I haven't looked back at it since. I gained the right to be critical by paying for his software. As a wannabe user I had no right to complain, but as a paying customer I did, so I used it for that, basically. In open source it is different, because it is free, and really the only people who have the right to say something are the ones willing to think about a solution.
You have to approach it from the corner of someone willing to offer thinking power. But even that contribution is often neglected, because it may come across as judgemental. You say "I see this and this wrong" and they go "WTF are you talking about, you moron?". But if you can't talk about wrongness, you also cannot talk about rightness. I have had this argument here before ;-). You must be willing to admit faults before you are able to fix them. You first have to know where you are before you know which direction to go. And a looot of people don't want to hear that there's anything wrong with their software. But regardless: that's how it works in Open Source. Or usually, how it doesn't work, because of people's egos.
-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org