On 09/27/2015 03:53 AM, Linda Walsh wrote:
> Same thing with 'config' files. Unix is still using .rc/.config files, but MS switched to putting all config files for a system or a user in 1 place -- a registry hive. They didn't do a perfect job, but if you have the registry mounted as a file system as on 'cygwin' or as is done in /proc or /sys, you still have file-like access to small config+setting files that can be an interface to a system library or 1-or-more files. MS went to that format about 2 decades ago, and while it could be improved upon, it's still more efficient in terms of speed and storage than 100's or 1000's of tiny files scattered over a disk.
It depends on how you define 'efficient': the Windows registry represents a SPOF; it has no documentation. Its all-in-one nature means you can't address 'just one thing' very easily. Its location means that the root FS can't be made read-only for embedded/ROM'd systems.

Along the way, UNIX has implemented performance improvements while maintaining clarity. Back in the early 1980s, there was the VAX/VMS vs Berkeley UNIX 4.2 battle: Dave Cutler on the one hand and Bill Joy on the other. Cutler had a very 'optimized' OS: VMS used different file types, each optimized, some indexed, and many of its internal mechanisms used databases. UNIX had plain linear files, arrays of bytes. But no matter what optimizations Cutler did at the machine-code level, Joy matched him using a high-level language. Simple design principles and a simple architecture won out. Yes, UNIX used many design strategies rather than code optimization. Eventually Cutler was forced to admit Joy was right to use C.

You simply can't generalise about UNIX/Linux when it has seen so many variations and enhancements. But it always starts with simple algorithms until a clear need for something better comes along. Joy's way of addressing /etc/passwd was one example. A simple text file, searched linearly, works for a small number of users. The overhead of a "database" management system doesn't make sense for a single-user system (as most workstations are) with a few background processes. It doesn't make sense for a small department, perhaps under 50 users. For a school or college, with hundreds, perhaps thousands of entries, that's another matter. Joy adopted the simplest change: rather than a full-blown database he used the same old code but split the file into a hierarchy based on the first letter (there's a rough sketch of the idea at the end of this message). Yes, the tree wasn't balanced, but it represented the smallest code change (and hence was easier to test, verify and convert), and it was completely transparent. This is not to deny that eventually you need a more conventional database, and may need to share the IAM (identity and access management) mechanism across many platforms. But that's a different problem.

We've also seen some simple speed-ups: inode caching, pathname caching and more. Something like name<=>uid mapping is a perfect example of caching (also sketched below).

One thing that makes 'efficiency' considerations under UNIX/Linux different from Windows is where the code executes. UNIX grew up with dumb terminals. The code executed on the host, the 'mainframe', and the dumb terminals did the display. When X came along the terminals were display servers and the client code, the application, ran, as it always had, on the host. With Windows in a similar environment, it's the other way round: the terminal is the client, the host the server. The server supplies the code and it's executed on the not-so-dumb terminal.

Now along comes the VDI model, which superficially appears to be more like the traditional UNIX way: the terminals are 'thin clients', display engines, and the code executes as a Windows virtual machine on the host. But this is NOT like UNIX. The UNIX way shares everything. Yes, each user has his own process, but all the libraries and binary images are shared. The Linux Terminal Server Project
http://www.ltsp.org/
https://en.wikipedia.org/wiki/Linux_Terminal_Server_Project
follows this old model. It's much simpler than the VDI/"virtual machine per user" model. More efficient and more effective.
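To make the /etc/passwd point concrete, here's a rough Python sketch of the split-by-first-letter idea. The /etc/passwd.d layout and the function names are mine, purely for illustration; this is not the actual 4.2BSD code, which did it in C.

import os

# Hypothetical layout: /etc/passwd.d/a, /etc/passwd.d/b, ... each holding the
# entries whose login names start with that letter.

def lookup_linear(name, path="/etc/passwd"):
    """Classic linear scan of a plain colon-separated text file."""
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            if fields and fields[0] == name:
                return fields  # login, passwd, uid, gid, gecos, home, shell
    return None

def lookup_by_first_letter(name, root="/etc/passwd.d"):
    """Same old linear scan, just over the one small per-letter file."""
    sub = os.path.join(root, name[:1].lower())
    if not os.path.isfile(sub):
        return None
    return lookup_linear(name, sub)

The point is how little changes: the file format and the scan are untouched, only the file you open is narrowed down first.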
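And a toy illustration of the name<=>uid caching point, just memoising lookups in front of the passwd backend. This is only a sketch; on a real system that caching comes from nscd/sssd or the C library rather than from application code like this.

import pwd
from functools import lru_cache

# Memoise the mappings so repeated queries (say, an 'ls -l' over a large
# directory) don't hit the passwd backend for every single file.

@lru_cache(maxsize=1024)
def uid_to_name(uid):
    return pwd.getpwuid(uid).pw_name

@lru_cache(maxsize=1024)
def name_to_uid(name):
    return pwd.getpwnam(name).pw_uid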
I've been to presentations given by CISCO and accelerated-storage vendors and been amazed at how they do their cost accounting, at how they need incredibly powerful and memory-intensive hosts for this, and at the amazingly fast networks required. If I didn't know better I'd think it was there solely to sell-sell-sell more and more expensive equipment.

What is deceiving about the LTSP approach is that the 'terminals' are old PCs. They may only be running as X terminals, but we have so much computing power going spare that we get a mistaken impression that 'dumb terminals' are really computers.

Is it efficient? I have a friend who runs an LTSP variant at his office with just ten stations out of a Dell tower with just 8Meg of memory and a dual-core processor. The network is a second-hand D-Link switch. For the most part it's office processing; yes, OO/LO modules share a lot of common code, but there's also Thunderbird and Firefox and Google Calendar. He talks about Google a lot, since that means mobile/remote, and that means salesmen in the field. But I showed him FF, AquaMail and other stuff on my tablet. GoogleDocs is great, but can he run it on his server? Running in the googleplex may not be as 'efficient' as completely 'local', but it's seductively convenient and hence effective.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?