On 30/11/17 05:00 AM, Wols Lists wrote:
> See Anton's definition of swapping - the COMPLETE process gets dumped to disk.
> As I understand paging, PART of a process gets pushed aside to make room.
> What DIFFERENCE it makes, I have no idea, but the underlying techniques are fundamentally different.
It makes a fantastic difference in the demands it places on the hardware.

Paging, which is actually short for 'demand paging', means you can pull many tricks with performance, but, as with everything, it is a compromise.

This starts with the memory being accessed being 'not there'. Well, actually, the mapping says that the referenced page hasn't been loaded. So you get a fault. The OS has to go off and get the relevant page of the code (or data) from disk, and while it is doing so that process is suspended and another one can run.

BUT, and this is the big issue, there needs to be the hardware capability to restart the instruction that caused the fault. THAT is non-trivial. It is also a capability the PDP-11 didn't have. So PDP-11 UNIX worked as 'roll in/roll out' of the whole process.

Today's Linux doesn't need the whole process space loaded. The frequently accessed pages of a process tend to stay in memory. "Tend to" is the operative phrase. There is a complex queuing system that has a number of control parameters. Infrequently accessed pages end up at the end of the queue and migrate off to the disk; this is 'paged out'. The ones that stay in memory make up the 'resident set'.

This mapping technique is what lets programs share libraries. It is all about - yes, you guessed it - pointers. :-)

Yes, the way the code that makes up a program is laid out, where the pages are, how much 'jumping around between pages' it does, DOES matter. The Principle of Locality becomes VERY important. It is nice to have the start-up/initialization code be on a page that gets paged out and have nothing else reference anything on that page :-) Making use of the libraries is a Good Thing. Making use of things already loaded, like the shell, is a Good Thing, as it references pages already paged in.

That is about what is termed "The Working Set": what tends to stay in memory, or is required to stay, if you expect reasonable performance from the program. As it says at
https://web.stanford.edu/~ouster/cgi-bin/cs140-winter12/lecture.php?topic=th...

<quote>
Working Sets: conceptual model proposed by Peter Denning to prevent thrashing.
....
What happens if memory gets overcommitted? Suppose the pages being actively used by the current threads don't all fit in physical memory. Each page fault causes one of the active pages to be moved to disk, so another page fault will occur soon. The system will spend all its time reading and writing pages, and won't get much work done. This situation is called thrashing; it was a serious problem in early demand paging systems.
</quote>

I would delete "early".

Back to 'swapping' for a moment: one advantage that the PDP-11 had was the dual bus plus the smart disk controller with its own DMA & MMU. The swap area for a process was a contiguous area of disk, so the swap was a simple start, range, GO!

Stock V7 had a disk arm scheduler that tried to be smart in its queueing. I found that if you simply tweaked it so that swap-out immediately went to the head of the queue, the performance increased.

Sadly, with virtual memory systems it is not so simple, as it depends too much on the program mix, the design of the programs and more.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
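PS: if you want to actually watch demand paging and the resident set in action, here is a rough sketch of mine (nothing from the kernel or from the posts above). It assumes Linux with mmap(2) and mincore(2); the default file path is just a placeholder, pass any largish file as the first argument.

#define _DEFAULT_SOURCE   /* for mincore() on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* Count the pages mincore() reports as resident (low bit of each vec byte). */
static size_t count_resident(const unsigned char *vec, size_t npages)
{
    size_t n = 0;
    for (size_t i = 0; i < npages; i++)
        if (vec[i] & 1)
            n++;
    return n;
}

int main(int argc, char **argv)
{
    /* Illustration only: map a file read-only, see which of its pages are
     * resident, touch them all, then look again. */
    const char *path = argc > 1 ? argv[1] : "/usr/bin/ls";  /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "%s is empty\n", path); return 1; }

    size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
    size_t len = (size_t)st.st_size;
    size_t npages = (len + pagesz - 1) / pagesz;

    /* Set up the mapping; nothing is read from disk yet - that is demand paging. */
    unsigned char *addr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned char *vec = malloc(npages);
    if (!vec) { perror("malloc"); return 1; }

    mincore(addr, len, vec);
    printf("resident before touching: %zu of %zu pages\n",
           count_resident(vec, npages), npages);

    /* Touch every page; each first touch may take a page fault and pull it in. */
    volatile unsigned char sum = 0;
    for (size_t off = 0; off < len; off += pagesz)
        sum += addr[off];
    (void)sum;

    mincore(addr, len, vec);
    printf("resident after touching:  %zu of %zu pages\n",
           count_resident(vec, npages), npages);

    free(vec);
    munmap(addr, len);
    close(fd);
    return 0;
}

Run it twice on the same file and the second run will most likely report more pages resident before anything is touched, because the pages are still sitting in the page cache.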