On 2021-11-24 14:57, Michael Hamilton wrote:
> An implication of this model was that there were no processes worth monitoring, because they didn't live long enough.
>
> Was it ever really like that? Back in the 1980s there were often annoyingly long-running processes, mainly because processors/memory/disk were inadequate for the number of users on the machine (machines were always shared). Things like TeX runs, theorem provers, simulations, and too many nethack users.
Yes. A PDP-11/45 could support perhaps 40 users; that was the case at one place I worked. I've discussed before how the dual buses, the autonomous disk controllers, and the 512-byte interleaving of memory allowed that sort of performance.

What you are doing is confusing 'binaries', 'jobs' and 'processes'. Many of the revolutionary techniques that UNIX introduced back then to differentiate them, in contrast to the way mainframes worked, we take for granted these days.

I recall one presentation that drew an analogy with a newspaper. A reporter walks in and sits down and !POOF! a typewriter and paper appear, and he hammers out the story. Then he calls out and !POOF! a copy boy appears, tears the paper from the typewriter, and !POOF! vanishes, turning up where !POOF! a typesetter has appeared, and hands him the copy. It gets typeset, then !POOF! the typesetter vanishes, and !POOF! someone appears to fix the set type to the print drum and !POOF! vanishes, just as !POOF! someone else appears to crank the handle of the print drum and run off the newspaper. There are more !POOF!s for the details of loading the newspapers onto the trucks and driving them around to distribute them .... And so it goes.

That's the model for a single reporter, but the reality is that there are many reporters, so the typewriters stay in existence, though each reporter has his own keyboard and paper. The various agents like the copy boy do get re-instanced 'cos the cost of re-instancing them is low (think: disk cache). The printer stays around to deal with the copy from all the reporters. So yes, the binary of the editor (think 'vi') is shared, but each reporter has his own address space - AND THAT IS A MAJOR DIFFERENCE FROM IBM - his own process and scheduling. (In today's terms, each !POOF! is the fork/exec/wait cycle; see the sketch in the P.S.)

That MAJOR DIFFERENCE? IBM OS/360 and similar basically had one address space. That was why a later model added "XA", extended addressing: more bits in the hardware and software, so that there was more space, allowing more users. The model was what we might now describe as multi-threading of a single process. (Yes, that's a simplification, but I make it so we can contrast it with what was so revolutionary about the way UNIX was different.)

Of course, the computer world wasn't one where the various parties never learnt from each other. The PDP-11 idea of 'virtual memory' was different from IBM's. IBM had VM the way we do now: paging the whole address space. UNIX didn't page (until Bill Joy made it work on the VAX) but rather swapped the process out completely. This was !FAST! because of the way memory and swap space were allocated, and because the autonomous disk controller would transfer on one bus while the CPU was doing work on the other. And it worked because a _process_ was the user's DATA. The binary was shared: someone else, someone swapped in, was using it. And before you ask: yes, 'shared' stuff like the print engine that went unused for a long time could be swapped out, data AND binary, until needed again.

Of course a lot has changed since the UNIX of the 1970s and early 1980s. Microprocessors, much cheaper semiconductors, and greater densities every 18 months drove many innovations.

-- 
Your eyes are weary from staring at the CRT. You feel sleepy. Notice how
restful it is to watch the cursor blink. Close your eyes. The opinions
stated above are yours. You cannot imagine why you ever felt otherwise.
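P.S. For anyone who never met it: the !POOF! above is just the fork/exec/wait cycle. A minimal sketch in C follows; the "troff story.tr" command is only a stand-in I picked for whatever agent gets conjured, not anything from the original presentation.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();             /* !POOF! a new process appears */
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: replace this image with the agent's (shared) binary. */
        execlp("troff", "troff", "story.tr", (char *)NULL);
        perror("execlp");           /* reached only if the exec failed */
        _exit(127);
    }
    /* Parent: wait for the agent to vanish again. */
    int status;
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        printf("agent %ld exited with status %d\n",
               (long)pid, WEXITSTATUS(status));
    return 0;
}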