On 05/16/2015 08:40 PM, Carlos E. R. wrote:
On 2015-05-16 02:53, Anton Aylward wrote:
On 05/15/2015 07:53 PM, Carlos E. R. wrote:
Mine has survived over a decade with no /var partition. None of my systems has it. :-)
... And /tmp? /usr? Have you divided up /home/carlos/?
Not /tmp. /usr yes, for two reasons: one, to grow system space, because I don't use LVM. Two, because not only is it a different partition, it is also a different hard disk, which makes the overall system faster, because Linux can parallelize disk operations. This has been so for many years: it was explained in the SuSE (paper) administration or reference book; I first saw it around version 5 or 6 something.
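A layout like the one Carlos describes might look something like this in /etc/fstab. This is a sketch only; the device names, sizes and filesystem types are illustrative assumptions, not his actual setup:

```
# Hypothetical /etc/fstab fragment: /usr (and friends) on a second disk
# (sdb), so the kernel can issue I/O to both spindles in parallel.
# Device names and fs types are assumptions for illustration.
/dev/sda2  /           ext4  defaults  0  1
/dev/sdb1  /usr        ext4  defaults  0  2
/dev/sdb2  /usr/local  ext4  defaults  0  2
/dev/sda3  /opt        ext4  defaults  0  2
```

With the trees split across two physical disks, a read from / and a read from /usr can be seeking and transferring at the same time, which is the parallelism the old SuSE manuals were pointing at.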
:-) Nothing new there, kernel thread scheduling. In fact, back in the PDP-11 days not only could the kernel threads parallelise operations, the hardware disk controller could carry out parallel operations. The CPU could set up a series of linked "disk control blocks" for the "autonomous data transfer" mechanism in the disk controller (itself a minicomputer of sorts) to read in via DMA and carry out. Of course the problem was that the controller only had one data channel into memory, so while it could perform simultaneous seeks on each connected disk, it could only transfer data from one at a time. The DEC operating systems had code that optimized all this! PDP-11 UNIX didn't bother; it just had simpler and faster algorithms.

Eventually this all made it to the VAX, and there was the famous performance war between Bill Joy coding BSD UNIX in C and Dave Cutler coding VAX VMS in assembler. Bill usually won. Any improvement Dave made, Bill promptly trumped. Eventually Dave swore that he'd never code an OS in assembler again, which was why, when he came to design and code Microsoft NT, it was done in C.

There were two reasons BSD UNIX kept trumping VAX VMS, and both had to do with choosing regularity over textbook-style 'optimization'. The first is that UNIX has only one file type: an array of bytes. VAX VMS offered optimized file types, but that led to two kinds of problems. One was that the kernel code needed to handle and recognise them, and the user libraries needed to manage them. BLOAT! COMPLEXITY! Programmers had to use the right kind of open, the right kind of read, the right kind of write. And worse! When I used VMS I found that the text editor produced the wrong type of files for the C compiler to read!

The second had to do with parallelism. Processes were lightweight under UNIX; they could be easily created and destroyed, whereas with VMS they had to be set up and created when the system was generated, and were heavyweight. Process switching, even for kernel threads, was much more involved with VAX VMS, and so, by comparison, less time was available for user computation. There were some trade-offs, but they added to the complexity of the VAX VMS kernel and so made optimization of the assembler code (aka 'spaghetti') more involved.
And yes, I do have several other partitions, for different uses, like /usr/local, /opt... But not /var, nor /tmp :-p
(Yes, I have set up /tmp on sites I prepared. No reason to do the same at home)
We've recently seen a 'maverick' snapper consume all of the root FS. You can search the history, going back well into the last century, of Linux, UNIX, AIX, HP/UX and many other platforms and UNIX variants, for maverick programs consuming the root FS via /tmp and paralysing the machine. Having /tmp on a separate FS is a simple security control for that, and against malicious scripts and downloads the 'noexec,nosuid,nodev' mount options are also a simple and logical preventative control.
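For anyone wanting to apply that control, a minimal sketch of the fstab entry. The use of tmpfs and the size cap are assumptions; a small dedicated disk partition with the same options works just as well:

```
# Hypothetical /etc/fstab entry: /tmp as its own filesystem, mounted so
# nothing in it can be executed, run setuid, or act as a device node.
# A runaway program can now fill /tmp without filling the root FS.
tmpfs  /tmp  tmpfs  noexec,nosuid,nodev,size=2G  0  0
```

On most Linux systems `mount -o remount /tmp` will pick up changed options from fstab without a reboot, though anything already holding files open in /tmp should be restarted.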
But most people are not professionals; they are simply people who happen to use computers.
*sigh* I had a paper published back in the early 1970s about that. There was the London amateur computer club, and people were saying that there too. I countered that even 'hobbyists' deserved professional-grade software. What is the point of having an editor that craps out on you, that doesn't actually save what you wrote, just because you aren't a professional? Amateurs and hobbyists deserve a system that is every bit as robust and resilient as the ones "professionals" get.
Many openSUSE users reformat their systems, reinstalling on each release. So it doesn't make much sense, then, to plan for decades of resilience.
This is a computer, not your car. Resilience isn't about wearing down over the decades. It's about surviving an event: a maverick program, a download gone wrong consuming space, malware trying to execute or create a back door. More to the point, it's a simple and easy change that is a very effective control.