Dave Howorth said the following on 06/12/2012 08:16 AM:
Istvan Gabor wrote:
I have another question. Can the operating system/programs take advantage of multiple processors? Can one application use only one processor, or more than one? Does this depend on the specific program, or on how the OS sorts tasks?
Yes, the operating system takes advantage of multiple processors (and cores and threads). Linux and Unix before it have been leaders in this area.
Whether an application can use more than one processor depends on the application. Typically, an application that can do so is called "multi-threaded" or "parallelised". One important case is applications that have a graphical interface. Well-written applications of this type usually use one processor for the graphical interface and at least one other processor to do whatever other processing the application does.
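A minimal sketch of that split, using plain Python threads rather than a real GUI toolkit (the names here are invented for illustration). The "interface" thread hands the heavy work to a worker thread and stays free to handle events; the kernel can schedule the two threads on different cores. (Caveat: in CPython the GIL limits true CPU parallelism across threads, so compute-heavy applications often use processes instead, but the structure is the same.)

```python
import threading
import queue

def heavy_computation(n):
    # Stand-in for the application's real work (rendering, indexing, ...).
    return sum(i * i for i in range(n))

results = queue.Queue()

def worker(n):
    # Runs on its own thread; the scheduler may put it on another core.
    results.put(heavy_computation(n))

t = threading.Thread(target=worker, args=(100_000,))
t.start()
# ... meanwhile the "interface" thread stays responsive ...
t.join()
value = results.get()
print(value)
```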
To some degree that is beside the point. Linux is a real multi-tasking OS. If there is a "job" that is runnable and a free processor, then the OS will run that "job" on that processor. OK, I'm using the terms loosely and sloppily, but the general point holds.

I've noticed that my laptop runs faster than my desktop.
My desktop: 3GHz single-core AMD, 4G of RAM
My laptop: 2GHz dual-core Intel, 1.25G of RAM (and old!)
Neither machine "swaps"; Linux is very good at memory management :-)

Ah, you say, graphics processing. Well, there is that! Actually the desktop has a more modern, more capable card with a good bit more memory on board. Graphics processing can be very intensive if you have lots of overlapping and redrawing. One reason I use KDE's 'virtual desktops' with one desktop per application and each window therein maximised. KISS. (Oh right, using some lightweight desktop manager would be even faster ....)

Hmm. I wonder if I can upgrade the CPU on my desktop to a multicore or do I have to upgrade the Mobo as well?

No, the real question is why can't we use David Cheriton's "V System" (q.v. go google)? This was a distributed OS based on message passing where the processes, libraries and even the innards of the OS can be (redundantly if needed) spread across many machines in the network. (This would even support a disk driver running on a different machine from the one the disk hardware was attached to!) No, not virtual machines, but real subroutine-level distributed processing. I have - many of us have - old machines "in the basement" that could be utilized to increase the aggregate processing power.

Cheriton's work, along with that of others like Brinch Hansen in California and Morven Gentleman in Waterloo - and many others - showed that parallelism in processing was an easy route to increased computation power without the need for faster hardware[1].
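The flavour of that message-passing style can be sketched in a few lines. This is only a toy, with invented names, and it uses two threads in one process where the V System used separate machines: a "disk service" answers read requests sent to its mailbox, and the client never calls it directly - it only exchanges messages.

```python
import queue
import threading

# Mailboxes standing in for the network message channels a real
# distributed OS would use.
requests = queue.Queue()
replies = queue.Queue()

def disk_service():
    # Pretend disk contents; in a V System-style design this service
    # could live on a machine other than the one making requests.
    blocks = {0: b"boot", 1: b"data"}
    while True:
        req = requests.get()
        if req is None:              # shutdown message
            break
        replies.put(blocks.get(req, b""))

service = threading.Thread(target=disk_service)
service.start()

requests.put(1)      # "read block 1" - a message, not a function call
reply = replies.get()
requests.put(None)   # tell the service to stop
service.join()
print(reply)
```

The point of the design is that the client only depends on the message protocol, so the service can be moved to another process or another machine without changing the caller.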
And yes, that is why Dave's point about designing applications so they _can_ be multi-threaded is so important.

You can google for a lot of the history and issues to do with this trade-off: the experimentation, the various types of 'auxiliary processors' that have been used (e.g. for array processing), types of parallelism, off-loading of graphics processing, distributed processing and message passing, and more. I certainly found it interesting reading at the time. A lot of it has never been commercially exploited.

[1] This was an old argument, and queueing theory was not as helpful as one might think. There was a long-standing debate between the mainframe people (think IBM) and the mini people (think DEC and others) as to whether a fast, powerful single-queue server was better than a number of less powerful parallel servers. Think about that the next time you are in the supermarket or bank. Some tellers are very good, very fast without being impolite; some are just slow. At my local supermarket the slow tellers are put on the "8 items or less" lane :-)

--
Every great advance in natural knowledge has involved the absolute rejection of authority.
Thomas H. Huxley
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
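The trade-off in footnote [1] can be made concrete with the textbook M/M/1 and M/M/2 formulas: one server at double speed versus two ordinary servers feeding from one shared queue, same total capacity. A small sketch (the arrival and service rates below are just illustrative numbers):

```python
# Mean time in system (queueing + service) for two designs with the
# same total capacity 2*mu: one fast server vs. two slow servers.

def mm1_response(lam, rate):
    # M/M/1: T = 1 / (rate - lambda), valid while lambda < rate.
    assert lam < rate
    return 1.0 / (rate - lam)

def mm2_response(lam, mu):
    # M/M/2 with per-server rate mu: T = (1/mu) / (1 - rho^2),
    # where rho = lambda / (2*mu) is per-server utilisation.
    rho = lam / (2 * mu)
    assert rho < 1
    return (1.0 / mu) / (1 - rho ** 2)

lam, mu = 1.5, 1.0                  # arrivals/sec, per-server service rate
fast = mm1_response(lam, 2 * mu)    # one teller working at double speed
slow = mm2_response(lam, mu)        # two ordinary tellers, shared queue
print(f"one fast server: {fast:.2f}s, two slow servers: {slow:.2f}s")
```

The ratio of the two works out to 2/(1 + rho), so on mean response time the single fast server always wins - which is exactly why the debate was not settled by queueing theory alone: cost, redundancy and the fact that real work parallelises imperfectly all pulled the other way.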