Not to ignore, preclude or pre-empt anything esteemed colleagues Andrei or Xen have said, but ... Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote:
Same thing with 'config' files. Unix is still using .rc/.config files, but MS switched to putting all config files for a system or a user in one place -- a registry hive. They didn't do a perfect job, but if you have the registry mounted as a file system, as on 'cygwin' or as is done in /proc or /sys, you still have file-like access to small config+setting files that can be an interface to a system library or one-or-more files. MS went to that format about two decades ago, and while it could be improved upon, it's still more efficient in terms of speed and storage than hundreds or thousands of tiny files scattered over a disk.
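To make that "file-like access" concrete, here is a minimal sketch, assuming a Cygwin install where the registry is mounted at /proc/registry; the key and value names are just illustrative examples, not anything specific to this thread:

# Read the same Windows setting two ways: as a tiny file under Cygwin's
# /proc/registry mount, and natively via Python's standard winreg module.
from pathlib import Path

# File-like access through Cygwin's /proc/registry mount:
key = Path("/proc/registry/HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft"
           "/Windows NT/CurrentVersion/ProductName")
if key.exists():
    print(key.read_text())      # the registry value reads like a small file

# The same value via the standard winreg module (Windows builds of Python only):
try:
    import winreg
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion") as k:
        value, _type = winreg.QueryValueEx(k, "ProductName")
        print(value)
except ImportError:
    pass                        # winreg doesn't exist outside Windows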
It depends on how you define 'efficient':
The Windows registry represents a SPOF; it has no documentation.
Wrong and wrong. 1: it is spread out into about 10 different sub-files, with user-specific data stored per user. 2: the physical structure is fully documented. The usage of the registry by 3rd parties is as documented as the usage of linux files is by every 3rd-party app (enough said there!)
Its all-in-one nature means you can't address 'just one thing' very easily.
--- You ignore my statement that it's "not perfect" and could still be improved upon.
Its location means that the rootFS can't be made RO for embedded/ROM'd systems.
The location isn't hard-coded as part of the format. User-specific registry hives stored in user profiles, which can be anywhere, are proof of that. Deciding where to put something is not part of the underlying registry specification.
Eventually Cutler was forced to admit Joy was right to use C.
Just like MS's uses for their registry.
For a school or college, with hundreds, perhaps thousands of entries, that's another matter. Joy adopted the simplest change: rather than a full-blown database, he used the same old code but split the file into a hierarchy based on the first letter. Yes, the tree wasn't balanced, but it represented the smallest code change (and hence was easier to test, verify and convert), and was completely transparent.
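A minimal sketch of that first-letter split; the file layout and 'key:rest-of-record' field format are illustrative assumptions, not the historical code:

import os

def shard_by_first_letter(flat_file, out_dir):
    """Split a flat 'key:rest-of-record' file into one file per first letter."""
    os.makedirs(out_dir, exist_ok=True)
    buckets = {}
    with open(flat_file) as fh:
        for line in fh:
            if line.strip():
                buckets.setdefault(line[0].lower(), []).append(line)
    for letter, lines in buckets.items():
        with open(os.path.join(out_dir, letter), "w") as out:
            out.writelines(lines)

def lookup(key, out_dir):
    """Only the one shard that could hold the key ever gets opened."""
    try:
        with open(os.path.join(out_dir, key[0].lower())) as fh:
            for line in fh:
                if line.split(":", 1)[0] == key:
                    return line.rstrip("\n")
    except FileNotFoundError:
        pass
    return None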
This is not to deny that eventually you need a more conventional database, and may need to share the IAM (identity access management) mechanism across many platforms. But that's a different problem.
---- The registry's backend is not fixed. Separating hives, using 'layers' in the registry, the addition of security (after the fact), and per-user redirection (a la pam's latest per-user instantiations for multi-level security) are all things that have been in the registry from Vista on.
We've also seen some simple speed-ups. Inode caching, pathname caching and more. Something like name<=>uid mapping is a perfect example of caching.
Which the registry has -- where it is needed. In some cases, for OS components, it's worth it to keep the entire thing in memory with an on-disk journal for recovery -- just like linux's most modern file systems.
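For what the name<=>uid caching mentioned above amounts to in practice, a minimal sketch using Python's pwd module and a simple memoizing cache (the function names are my own):

import functools
import pwd

@functools.lru_cache(maxsize=None)
def uid_for(name):
    """Cache name -> uid so repeated lookups don't re-hit /etc/passwd (or NSS)."""
    return pwd.getpwnam(name).pw_uid

@functools.lru_cache(maxsize=None)
def name_for(uid):
    """Cache uid -> name for the reverse mapping."""
    return pwd.getpwuid(uid).pw_name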
One thing that makes 'efficiency' considerations under UNIX/Linux different from Windows is where the code executes.
UNIX grew up with dumb terminals. The code executed on the host, the 'mainframe', and the dumb terminals did the display. When X came along, the terminals were display servers, and the client code, the application, ran, as it always had, on the host.
Windows has something similar with remote desktops, and has had since before 2000 -- remember, win started 12-15 years after linux.
With Windows in a similar environment, it's the other way round. The terminal is the client, the host the server. The server supplies the code and it's executed on the not-so-dumb terminal.
That's only one config. Remote desktop solutions are the extreme, providing all computing on the server, vs. using roaming user profiles to allow local caching of frequently used data. MS offers an array of levels of centralization vs. spread-out design based on customer needs.
Now along comes the VDI model, which superficially appears to be more like the traditional UNIX way; the terminals are 'thin clients', display engines, and the code executes as a Windows virtual machine on the host.
That's only the latest option (and not necessarily the most efficient or best). All of these models are still provided by MS. But in linux? Show me a major distro that supports X-terminals & full virt on a server with dumb clients, OUT-of-the-box.
But this is NOT like UNIX. The UNIX way shares everything. Yes, each user has his own process, but all the libraries and binary images are shared. The Linux Terminal Server Project http://www.ltsp.org/ https://en.wikipedia.org/wiki/Linux_Terminal_Server_Project follows this old model. It's much simpler than the VDI/"virtual machine per user" model. More efficient and more effective.
Now you are defining a specific use of the words 'efficient' and 'effective' -- with no point, as MS supports both.
I've been to presentations given by CISCO and accelerated-storage vendors and been amazed at how they do their cost accounting and how they need incredibly powerful and memory-intense hosts for this, and amazingly fast networks. If I didn't know better I'd think it was there solely to sell-sell-sell more and more expensive equipment.
Vs. the current linux fad of forcing everyone into cisternd. Linux doesn't supply options or backward compat; MS does -- you can still do data storage on FAT file systems.
What is deceiving about the LTSP approach is that the 'terminals' are old PCs. They may only be running as X-terminals, but we have so much computing power going spare that we get a mistaken impression that 'dumb terminals' are really computers.
Is it efficient? I have a friend who runs an LTSP variant at his office with just ten stations out of a Dell tower with just 8Meg of memory and a dual-core processor. The network is a second-hand D-Link switch. For the most part it's office processing; yes, OO/LO modules share a lot of common code, but there's also Thunderbird and Firefox and Google Calendar. He talks about Google a lot since that means mobile/remote, and that means salesmen in the field. But I showed him FF, AquaMail and other stuff on my tablet. GoogleDocs is great, but can he run it on his server?
MS is deprecating some of the old methods to move (as is linux) to a pay-as-you-go plan. But then Gates is no longer at the helm. Ballmer was all about increasing business profits and ignoring the user. Gates tried for more balance. Ballmer and lackeys = Poettering.
Running in the googleplex may not be as 'efficient' as completely 'local', but it's seductively convenient and hence effective.
And how was that different from DCOM/remote RPC? RESTful/AJAX/googleplex == a redesign of MS's distributed computing 15 years later (and MS's design was based on Unix's CORBA, which never got off the ground). MS supported ALL of the previous paradigms -- which linux has never done -- until the 'everything in the cloud' movement took off, because everyone saw that monthly service fees were the way to long-term profitability, with local PCs reduced to game consoles (locked in by TPM/trusted execution and secure boot -- the same thing MS's Palladium suggested 15 years ago). P.S. I'm not an MS supporter, I hate MS in so many ways, but linux is moving to the worst of MS's ideas with no choice.