On 06/11/2018 11.51, Liam Proven wrote:
On 05/11/2018 19:38, Carlos E. R. wrote:
I can't tell any difference.
Depends on the application.
True, I'm sure.
Yes, it will wear out faster, but the alternatives are slow and get on my
nerves... this way I delay purchasing a new computer.
:-) That sort of makes sense.
Although my policy for avoiding buying new computers is to only buy old
computers. It saves me a lot of money... ;-)
I buy new, but not the most recent design. I bought this motherboard with
a Core 2 Quad processor and what was a large amount of RAM at the time
(the max the board allowed), 8 GB of DDR3. The idea was not top speed, but
data movement. MSI P45 Diamond, MS-7516, with several SATA hard disk
ports. December 2009 and still working fine...
Every computer I have had needed replacing because of not enough memory.
Mind: since some kernel version, swap became very slow. I think it
happened when going from 13.1 to Leap 42.2. I suspect that swap is
"fragmented" (I/O was just a few megabytes per second, while the disk is
capable of much more).
I don't think swap _can_ get fragmented... but it's fairly easy to
remove it and re-create it, even on a running system.
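Indeed, it is only three commands. A sketch, assuming the swap partition
is /dev/sda2 (adjust to your layout); it runs in dry-run mode by default
so nothing is touched by accident:

```shell
#!/bin/sh
# Recreate a swap partition on a running system (sketch; /dev/sda2 is an
# assumed example device). With DRYRUN=1 (the default) it only prints
# what it would do; set DRYRUN=0 and run as root to really do it.
SWAPDEV="${SWAPDEV:-/dev/sda2}"
DRYRUN="${DRYRUN:-1}"
run() {
    if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}
run swapoff "$SWAPDEV"   # stop using the partition (pages move back to RAM)
run mkswap "$SWAPDEV"    # write a fresh swap signature
run swapon "$SWAPDEV"    # start using it again
```

Note that swapoff needs enough free RAM to take back whatever is
currently swapped out, so it can be slow on a loaded machine.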
Not that way. It is the memory itself which is fragmented. If a process
requests a number of chunks to store things, the chunks it gets may not
be contiguous. Consider Firefox. Even if it stores each tab in a single
chunk of memory, the next tab will not be contiguous; in between there
may be memory of some other process. And surely each tab needs many
chunks, and I use many tabs. It may be chunk 1 of tab 15, then chunk 20
of tab 16, etc. When one tab wakes up, it has to restore chunks of memory
that are not contiguous, and also not stored contiguously in swap, which
will use some type of memory map.
Not fragmented in the same sense as a filesystem, but in the sense that
to recover, for example, a tab from swap, it has to read back a large
number of chunks that in all probability are not contiguous.
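A toy illustration of why that hurts on a rotating disk (just
arithmetic, not a benchmark; `seek_count` is a made-up helper): count
the contiguous runs in a list of swap page offsets, since reading them
back costs roughly one seek per run.

```shell
#!/bin/sh
# Toy model of swap fragmentation: given the swap page offsets belonging
# to one application, count the contiguous runs. Each run can be read
# sequentially; each new run costs roughly one disk seek.
seek_count() {
    prev=''; runs=0
    for n in $(printf '%s\n' "$@" | sort -n); do
        if [ -z "$prev" ] || [ "$n" -ne "$((prev + 1))" ]; then
            runs=$((runs + 1))   # gap found: a new run starts here
        fi
        prev=$n
    done
    echo "$runs"
}

seek_count 0 1 2 3 4 5 6 7      # contiguous pages: prints 1
seek_count 0 2 4 6 8 10 12 14   # interleaved pages: prints 8, one seek each
```

With seeks costing several milliseconds each on rotating rust, a tab
scattered over thousands of runs easily explains throughput collapsing
to a megabyte per second.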
I could hear the disk head moving like mad when switching apps or tabs,
the app waiting for I/O (I can see that in an XFCE applet), and that
partition's I/O hovering around 1 MB/s, so the disk was not performing
anywhere near its capability.
My hypothesis is what I call swap fragmentation, or virtual memory
fragmentation. Maybe it has another, official name.
It is not an issue if swap is on an SSD, but on a rotating disk it
matters a lot.
Same as how dumping the systemd journal to text is a very slow operation,
measured in many minutes on rotating rust. We "blamed" the developers for
having good modern hardware, i.e. SSDs, and thus not testing on old iron ;-)
I considered it.
I know it sounds crazy, but I think the proof that the idea works is
that it was introduced as standard in Mac OS X as of version 10.9.
The OS X implementation is slightly superior, inasmuch as it first
compresses into a swap file in RAM, then writes the compressed image out
to disk swap. In Linux terms it's a combination of ZRAM + ZSwap + ZCache.
The tech is also in Win10:
And in ChromeOS, Android and IBM AIX.
So it's pretty mainstream stuff now. Multicore processors help a lot:
much code is still single-threaded, and with ZRAM, idle cores can be used
to do the compression while the other cores are busy.
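Trying it by hand is quick. A sketch, assuming util-linux's zramctl is
available and a 2 GB compressed device is wanted (size and priority are
illustrative; dry-run by default so it only prints the commands):

```shell
#!/bin/sh
# Set up a compressed swap device in RAM with zramctl (sketch). With
# DRYRUN=1 (the default) it only prints the commands; set DRYRUN=0 and
# run as root to really do it.
DRYRUN="${DRYRUN:-1}"
run() { if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }
run modprobe zram               # load the zram module
run zramctl --find --size 2G    # allocate a free /dev/zramN, prints its name
run mkswap /dev/zram0           # assumes zramctl handed out zram0
run swapon -p 100 /dev/zram0    # higher priority than the disk swap
```

With the higher priority, the kernel fills the compressed device first
and only spills to the rotating disk when that is full.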
I will think about it again :-)
I use clamav because I like finding out if I am sent some virus garbage,
mostly intended for Windows, of course.
I appreciate that, but it's a high price to pay on a memory-constrained
machine.
Yes... I want my cake and eat it O:-)
The trick to send it to swap kind of works :-)
top - 11:18:55 up 20 days, 1:31, 2 users, load average: 0,25, 0,29, 0,40
Tasks: 547 total, 1 running, 545 sleeping, 0 stopped, 1 zombie
%Cpu(s): 1,4 us, 1,0 sy, 0,0 ni, 97,5 id, 0,1 wa, 0,0 hi, 0,0 si, 0,0 st
KiB Mem:   8174460 total,  7695600 used,   478860 free,  1126180 buffers
KiB Swap: 25165820 total,  6592928 used, 18572892 free.  2606064 cached Mem

  PID USER   PR NI    VIRT   RES  SHR   SWAP S  %CPU  %MEM    TIME+ COMMAND
 2437 vscan  20  0   44472   456   32    964 S 0,000 0,006  1:26.22 freshclam
 4502 vscan  20  0  864260 86628 1316 476032 S 0,000 1,060 10:29.83 clamd
 4513 vscan  20  0  168220  3028 1804  57344 S 0,000 0,037  0:09.78
30348 vscan  20  0  169792 27068 4920  36624 S 0,000 0,331  0:00.41
31185 vscan  20  0  169880 19976   16  38888 S 0,000 0,244  0:00.72
The ideal would be to start clamd on demand and kill it automatically
after a timeout. Maybe systemd can do that, but I failed to get it
working.
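systemd can at least do the second half, putting an upper bound on how
long the service runs. A sketch of a drop-in, assuming the unit is named
clamd.service (package naming varies) and that whatever talks to clamd
can start the unit again when needed:

```ini
# /etc/systemd/system/clamd.service.d/timeout.conf  (illustrative path)
# Stop clamd automatically ten minutes after it starts; it must then be
# started again by hand, by a socket unit, or by a dependent service.
[Service]
RuntimeMaxSec=10min
```

Alternatively, `systemd-run -p RuntimeMaxSec=10min /usr/sbin/clamd` runs
it as a transient unit with the same limit, without touching unit files.
Starting it truly on demand would need socket activation, and whether
clamd cooperates with a socket passed in by systemd depends on the build.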
I may just move amavis+clamd to another computer.
Cheers / Saludos,
Carlos E. R.
(from 42.3 x86_64 "Malachite" at Telcontar)