Anton Aylward wrote:
Linda Walsh said the following on 05/06/2013 09:18 PM:
(which is a darn good reason not to move /tmp to being RAM-based, as it gets used for large files sometimes, and... oops, there goes your memory! ;-)
Or NOT as the case may be.
First, a tmpfs is mapped to memory in a way that is slightly more efficient than a disk-based FS. Yes, disk-based FSs are mapped to memory too -- buffers for reading and writing inodes and super-blocks, as well as for shuffling the B-trees, indexes and more. By comparison a tmpfs is incredibly lightweight.
Sorry -- it's not. A 1TB file still takes 1TB out of the backing store whether it is on disk or in memory.
Secondly, Linux uses demand-paged virtual memory, so you're never going to run out of memory, for whatever value of 'never' applies. And it does apply here. If that memory is needed by a process, it can be paged out to swap.
--- As someone else pointed out, it's ridiculous to have much swap these days -- it's a waste of disk space -- the kernel only uses it *significantly* in pathological situations just before the OOM killer is invoked.
Thirdly, when it can, and that certainly applies to executables, Linux and late-model UNIX try to "map" a file into memory so the file is actually demand-paged -- just like above. Yes, a programmer can open a file so it's not mapped, thinking he's smarter than the system designer and knows better about what is and is not efficient, but I'd be reluctant to hire such people, as that reasoning would only apply in special cases such as databases and the like.
---- You would get what you deserve. If you try to copy a memory-mapped file and someone deletes or truncates it while you are copying it, your copy process dies. Programmers usually are smarter than system designers about what their program does and needs to do; system designers will all admit that optimal behavior is decided by the application and its writer. Your example of 'databases' is exactly the case where you *wouldn't* use a plain open -- you'd use mapping: databases are usually fixed in size, and you don't want to rewrite them completely, just update fixed-size records. That's ideal for memory mapping. Are you sure you didn't get up in the wrong world this morning, and maybe your double from this world is on yours, where things are backwards from ours? ;-)

Any file that can be changed by other processes and rewritten is not one you want to open with memory mapping. Can you imagine the performance penalty if someone or a library tried to do a memory move or 'garbage collection' on something that was mapped to disk? Ouch!
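For illustration, here's a minimal sketch in C of exactly that failure mode (the file name "input.dat" is made up, and error handling is kept to a bare minimum). A copy or checksum loop over an mmap'd file gets killed with SIGBUS the moment another process truncates the file underneath it, where a plain read() loop would just see a short read and could cope:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("input.dat", O_RDONLY);      /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* If another process truncates the file here, touching a page
     * past the new EOF delivers SIGBUS and this process dies. */
    size_t sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += (unsigned char)map[i];

    printf("touched %lld bytes, checksum %zu\n", (long long)st.st_size, sum);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}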
I think you're worrying about something that isn't a concern.
--- It's a major concern. If some idiot steals /tmp for their own selfish purposes, it will kill many apps. /var/tmp is for long-lived tmp files -- /tmp is for short-lived tmp files... If I am copying files from one system to another and need a tmp dir, I use the one for short-lived files. Now can you tell me how a /tmp mapped to your swap will handle a 1-2 or 2-3 TB file? Do you really have that much swap allocated?
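If you want to see what a tmpfs /tmp actually has to offer before dropping something huge in it, a quick statvfs() check tells the story (a rough sketch, assuming /tmp is the mount point in question; by default a tmpfs tops out at half of RAM, so multi-TB files simply cannot fit):

#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;
    if (statvfs("/tmp", &vfs) != 0) { perror("statvfs"); return 1; }

    /* Total and available space of whatever backs /tmp -- on a
     * tmpfs this is the configured size limit, not the disk. */
    unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
    unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;

    printf("/tmp: %llu MiB total, %llu MiB available\n", total >> 20, avail >> 20);
    return 0;
}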
Now if we're talking about my memory-starved box from The Closet Of Anxieties... no, it's still a NOT, because I'm not doing anything on such a box that involves big files. It's only a small box...
I wouldn't call a 48G server memory-starved, exactly, but the case can always be made for more. Yet it tops out at 192G capacity. That doesn't come close to holding files in the TB range.