On 09/01/2020 14:50, Carlos E. R. wrote:
On 09/01/2020 15.32, Anton Aylward wrote:
| On 09/01/2020 08:55, Carlos E. R. wrote:
|>
|> This is not correct, but it is a common misconception. Your
|> system will have less free memory and less memory used for
|> buffers/cache. A machine with a certain amount of ram and the
|> same workload performs faster with swap used than without.
|
| ????
|
| Yes, it will have less free memory, but so what? What do you need
| the free memory FOR?
Buffers and cache. You need those two as big as possible. They make the filesystem faster. This is measurable.
Up to a point. Beyond that it is of no benefit. In fact this isn't something you have control over in any direct manner. Many of those 'buffers' are actually code or data pages that are memory mapped disk pages. The VM system manages those. You only have indirect control over that.
And free memory is needed because Linux is constantly starting and stopping processes; and when it is not, free memory can be used to enlarge the buffers/cache space on demand.
I've discussed that. The starting and stopping makes use of shared resources; it always has, going all the way back, in my personal experience, to UNIX V5 in the 1970s. A new user logs in and gets a shell .. whatever ... and the code and some static data are already there. Only one instance of the code for the shell ... whatever ... no matter how many users. All done by pointers.

Modern Linux with VM does this in a more fine-grained manner, with the library modules being shared across different applications. And that's where the buffer/cache becomes the VM, because when a DOT-SO file is opened for use the disk image is mapped, with pages loaded on demand as the execution progresses. AND ONLY THEN. So whether you call it a code page or a buffer is moot.

What makes that terminology even further confused is that the whole VM system consists of a series of pages on linked lists of some nature ... clean, as-yet-unused pages, dirty pages that are in use, dirty pages that have not been used for a while. As a page gets accessed it gets pulled to the tail of the queue. The head of the queue becomes the candidate for swapping -- MAYBE, according to an algorithm that uses many variables.

That a page has been 'swapped out' doesn't mean that it isn't still in memory, on a queue. In fact it might get accessed and pulled to the tail. But the shortcoming of the way this works means that its image is still out there on the swap. That doesn't get erased. There's a 'ratchet' mechanism, so to speak.

More to the point, the way this algorithm works, stuff gets swapped out when there is no shortage and no foreseeable need for recovered pages. This is what the 'swappiness' is about.
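For what it's worth, the 'swappiness' knob being argued about here is an ordinary sysctl, and persisting a different value is a one-line configuration fragment. A minimal sketch -- the file name and the value 10 are only examples, not a recommendation:

```
# /etc/sysctl.d/99-swappiness.conf  (file name is an example)
# vm.swappiness defaults to 60. Lower values make the kernel less
# eager to swap anonymous pages out in favour of growing the
# buffers/cache; it does NOT disable swapping entirely.
vm.swappiness = 10
```

The same value can be changed on the fly with `sysctl -w vm.swappiness=10` (as root) and inspected at /proc/sys/vm/swappiness.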
| I can see settings where you need it for very dynamic new process
| creation, but let's face it, Linux takes the old UNIX model of
| shared binaries to a fantastic degree. The reality is that for
| many of us there is very little new process creation going on.
|
| Yes, I can see that buffer space is needed. I'm not talking about
| absolute memory starvation/allocation. I still have around 2G
| available to be used for IO/network buffers/caching. I'm sure,
| even so, that there is already significant consumption for inode
| and dns caching.
You see 2G because your Thunderbird and Firefox are not as big as you claim they are. :-P
I have 4G free this minute.
|
| The reality is that if, like me, you never swap AT ALL through the
| day, then what's the point of even creating, enabling swap?
Bigger buffer/cache space.
Which is, the way you are justifying it, an illusion. In actuality, not all memory is the same. There are special properties in low memory for pointers and certain types of tables that HAVE to be in low memory and they get allocated differently. There are page clustering effects that are NECESSARY so sometimes HUGE PAGES are created. The buffer/cache you speak of is not realistic and as a conceptual model, unhelpful and misleading.
| No, the point I'm trying to make is to do PROFILING to see what is
| going on with your system. I'm saying that if you let
| swappiness=60 then you'll start swapping when only 40% of your
| memory is used. I think that is too low a threshold.
My system simply crashes and OOMs, killing processes.
You have control via VM settings of what happens in OOM conditions. Sadly the default is to scan ALL processes for candidates to kill, or default to a PANIC. You can, if you read through the docco I referred to,

   https://www.kernel.org/doc/Documentation/sysctl/vm.txt

alter that.

==============================================================
oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.
==============================================================

and

==============================================================
panic_on_oom

This enables or disables panic on out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process,
called oom_killer. Usually, oom_killer can kill rogue processes and
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits using nodes by mempolicy/cpusets,
and those nodes become memory exhaustion status, one process
may be killed by oom-killer. No panic occurs in this case.
Because other nodes' memory may be free. This means system total
status may be not fatal yet.

If this is set to 2, the kernel panics compulsorily even on the
above-mentioned. Even oom happens under memory cgroup, the whole
system panics.

The default value is 0.
1 and 2 are for failover of clustering. Please select either
according to your policy of failover.

panic_on_oom=2+kdump gives you very strong tool to investigate
why oom happens.
You can get snapshot.
==============================================================

More to the point here, you have settings that let you ANALYSE why the OOM occurred.
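The two vm.txt entries quoted above map directly onto sysctl settings. A hedged sketch of how one might set them persistently -- the file name and the chosen values are only an illustration of the mechanism, not advice for any particular machine:

```
# /etc/sysctl.d/99-oom.conf  (file name is an example)

# 1 = kill the task that triggered the OOM instead of scanning the
#     whole tasklist for a heuristic victim (avoids the scan cost):
vm.oom_kill_allocating_task = 1

# 0 = kill a process and survive (the default);
# 1 = panic on OOM; 2 = panic even under cgroup/cpuset OOM.
# Combined with kdump, panic_on_oom=2 captures a crash dump you
# can use to analyse why the OOM occurred:
vm.panic_on_oom = 0
```

The current values are visible under /proc/sys/vm/ and can be changed at runtime with `sysctl -w` as root.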
I know what to do, and that is purchasing another board that can take more RAM. Meanwhile, swap on SSD has delayed that purchase by two years at least.
--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org