On Sun, 27 Sep 2015, Anton Aylward wrote:
It depends on how you define 'efficient':
The Windows registry represents a SPOF; it has no documentation.
In my experience, using the registry was never hard. When you access it, you usually work from some tutorial, and then the stuff you want is easy to find. These tutorials (mini snippets), of which there are thousands upon thousands for every possible use case, could count as "documentation". I certainly don't understand everything about it, but it wasn't that difficult either. I do dislike its centralized nature, because as a monolithic thing it can break in its entirety, as you say.
Its all-in-one nature means you can't address 'just one thing' very easily.
That's not really true, but you can't have any data safety based on that concept the way it is done. How are you going to back it up? There are commands for that (reg export, for instance), but they are not obvious. The general sense is that if it gets corrupt, it gets corrupt as a whole, as you indicate (SPOF).
Its location means that the rootFS can't be made RO for embedded/ROM'd systems.
Not sure whether that is a problem; it could probably be designed around. I don't think Windows/MS runs into problems like that.
But no matter what optimizations Cutler did at the machine code level, Joy matched him by using a high level language. Simple design principles, simple architecture won out.
It's really funny how you call C high level :).
You simply can't generalise about UNIX/Linux when it has seen so many variations and enhancements. But it always starts with simple algorithms until a clear need for something better comes along.
I think you have a bit of a romanticised or embellished view of the history of Unix.
Joy's way of addressing /etc/passwd was one example. Having a simple text file, searched linearly, works for a small number of users. The overhead of a "database" management system doesn't make sense for a single-user system (as most workstations are) with a few background processes. It doesn't make sense for a small department, perhaps under 50 users.
Depends on how much overhead that database is. I think a simple DB is really no overhead at all. I mean, how hard is it to create a simple fixed-size record scheme? When programming in a struct language like C or Pascal, defining structs and reading or writing them from disk is really peanuts. It is just generally rather friendly in Linux/Unix that you can use command-line tools to operate on them (the files), which is why I don't like journalctl (systemd) at all. But it is also mostly because the tools to operate on "database storage", where it exists, are not adequate in Linux either. Good user interfaces often go missing: the number of good ncurses applications that use menus and colour, I can count on one hand (that I know of). So if you can't rely on your text-manipulation tools (which are the main power of Linux, so to speak), you are pretty much left without.

But seriously, a database doesn't have to be MySQL, or even SQLite. If you had genuinely good tools (preferably graphical) that were user-friendly, easy to use and remember, and preferably conformed to some standard in the Unix world, having a database really wouldn't matter. In fact, a plain text file is already a database; the more record structure you add to ease automation, the more it turns into a DB. But editing a text file by hand is just much easier than remembering command-line tools and their syntax to operate on something more complex. I wouldn't mind a binary format for that, though, and I wouldn't even mind a binary format for logs; it's just that the access methods must be nice and Unix-friendly. What it comes down to is that if you had binary files for several important features of a system, you would need an ncurses app with a menu structure and a good interface to edit those files.
For a school, college, with hundreds, perhaps thousands of entries, that's another matter.
I don't think a system with few users would really suffer from any performance overhead if the number of users was so small ;-). But I mean, I'm just thinking of Java: manipulating collections and classes in Java is pretty much a given. You don't need anything special to store something in an in-memory object database; many classes perform this function. It's just a question of how you are going to write it out to disk (if not by serialisation).
Joy adopted the simplest change. Rather than a full blown database he used the same old code but split the file into hierarchy based on the first letter. Yes, the tree wasn't balanced, but it represented the smallest code change (and hence easier to test and verify and convert), and was completely transparent.
It's what I said: not beauty of design but speed of implementation.
UNIX grew up with dumb terminals. The code executed on the host, the 'mainframe' and the dumb terminals did the display. When X came along the terminals were display servers and the client code, the application ran, as it always had, on the host.
This is actually a very weird system. Even if computers display stuff, they should still be called clients. An X server running on the client machine, and X clients running on the host where the applications live, is very counter-intuitive.
But this is NOT like UNIX. The UNIX way shares everything. Yes, each user has his own process, but all the libraries and binary images are shared.
Which is a liability and a danger. The result is that the entire system must be congruent with itself, and it introduces the very SPOF that you talked about. If a variety of different applications all have to use the same library collection, it becomes nearly impossible to do anything fancy, because you constantly have to ensure that nothing breaks something else. Which results in the Unix/Linux package systems (I only have Linux experience). And it is a LOT OF WORK to maintain a package system; I think it is a great waste of time, as is the endless stream of small updates that bump the most minor of version numbers and still require a system update.

This whole release-early-release-often model just stinks. It means you are forever stuck in a development process that never completes: there is never a finished product, never a real release. A real release should be able to deal with new requirements by relying on its chosen library subset; it should not need to be constantly updated for feature improvements. Only bug fixes and the like, and even those should be taken care of in advance, not after the fact as it is now. This model supports unfinished products whose bugs get fixed by users after the thing has already been released, and so you get this constant stream of unnecessary updates. A commercial application you can just install, and it will work fine for a number of years without issue and without the need to constantly patch. That is a lot less work in the long run. The Linux development model seems to save time in the short run (by involving users etc.), but in the long run, which is what actually matters, it is a very inefficient and also ineffective use of time.
Also, allowing multiple versions of libraries (except for some core set that follows the "release once and then keep it stable" model) means your applications can ensure their own consistency, and can run on pretty much any system regardless of "package manager perfection". So the share-everything approach is really just a weakness, not a strength, and you don't even need it for the client-server architecture you espouse. Or promote.

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org