On 06/25/2014 11:19 AM, Carlos E. R. wrote:
On 2014-06-25 16:33, Anton Aylward wrote:
On 06/25/2014 09:35 AM, Carlos E. R. wrote:
As they say, "YMMV". As I say, "Context is everything".
Absolutely :-)
I would consider a tmpfs only if I could limit the RAM usage to something small and force overflow to swap (or to a disk directory) early. That is, as a specialized disk cache for one special directory. A hybrid.
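That hybrid is roughly what a size-capped tmpfs already gives you: tmpfs pages live in the page cache and are swappable, so under memory pressure a capped tmpfs spills to swap instead of pinning RAM. A minimal sketch (the 64m cap is illustrative, and the mount needs root):

```shell
# Cap a tmpfs at 64 MiB; mode=1777 gives the usual sticky /tmp perms.
# Contents beyond what fits comfortably in RAM get swapped out by the
# kernel rather than held resident.
mount -t tmpfs -o size=64m,mode=1777 tmpfs /tmp

# Or persistently, as an /etc/fstab entry:
# tmpfs  /tmp  tmpfs  size=64m,mode=1777  0  0
```

This is a configuration fragment, not a recommendation of 64 MiB specifically; the right cap depends on the workload discussed below.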
Years ago, Mike Tilson wrote a disk driver for V7 UNIX that kept the inode table in memory. It's since been made obsolete by better inode caching, by pathname caching, and more. Perhaps what we need isn't so much a tmpfs as a better trade-off in the way memory is used. Perhaps a FUSE filesystem could do this, but it would need to be instrumented and measured. Many imaginative ideas don't hold up in the real world.
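For the measuring part: on Linux the kernel's existing inode/dentry caches can already be observed and biased without writing any filesystem. A sketch (the specific knob shown, vfs_cache_pressure, is a real sysctl; reading /proc/slabinfo generally needs root):

```shell
# Default is 100; values below 100 bias the kernel toward keeping
# dentry/inode caches at the expense of the page cache.
cat /proc/sys/vm/vfs_cache_pressure

# Peek at the dentry and inode slab caches (root only on most systems).
grep -E 'dentry|inode_cache' /proc/slabinfo 2>/dev/null || true
```

Whether lowering the pressure actually helps a given workload is exactly the kind of thing that has to be measured, not assumed.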
Years ago, in MS-DOS, I had a third-party disk cache that could be applied to a single partition, IIRC.
That's basically it: dedicate a cache with a fixed amount of RAM to a single directory, in this case "/tmp". Most accesses would happen in RAM, it would still be persistent, and size would not be a problem.
Yes, but is it worth it? We have inode caching. We have pathname lookup caching (the dentry cache). We have page caching, which amounts to file content caching. The issue is this:

* Is it smart enough?
* Would giving preference to /tmp in any of the above degrade performance in other areas?

Years ago I worked on a project where the PM didn't understand 'systems'. He didn't understand that C allocates automatic variables on the stack; he still thought of C as 'Fortran with semicolons' and required that all variables be global. That buggered up many nested loops! I had to instrument the compiler's output to show that the global segment was accessed via a pointer. This was on the VAX; global variables were no faster than locals.

Another of his idiocies was a lack of understanding of how virtual memory and page caching work. He required each program to be locked in memory. Think about that. I made sure that my code was small (and hence fast), so small that the memory allocators pretty much ignored it. Because I made heavy use of shared libraries, my program's own code amounted to only a couple of 4K pages.

So my question becomes this: unless you have lots of memory (so much that you never even swap), what is the advantage? The system is already caching the inodes and paths you use a lot. Your machine might want to swap out unused apps to speed up disk access for the running ones. I think this is another case where the machine knows more about what's going on than you do, in 95% of cases.

Yes, we have the fringe use-cases, like the phone/image rendering that uses big, persistent temp files, and the development situation where the compiler uses small, transient ones. The sysadmin should pay attention. And yes, there are ways to tune this. In the limiting case we should be able to create a tmpfs and set TMP=/var/specialtmp in the application environment. But will it be honoured? See 15.1.7
http://doc.opensuse.org/products/draft/SLES/SLES-tuning_sd_draft/cha.tuning....
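The "will it be honoured?" question can at least be spot-checked per tool. On Linux the conventional variable is TMPDIR, and mktemp(1) consults it; the sketch below uses /dev/shm as a stand-in tmpfs that already exists on most Linux systems (substitute your own mount point, e.g. the /var/specialtmp above):

```shell
# Does the tool you care about honour a redirected temp directory?
export TMPDIR=/dev/shm
f=$(mktemp)             # mktemp(1) consults $TMPDIR
case "$f" in
  /dev/shm/*) echo "honoured: $f" ;;
  *)          echo "ignored:  $f" ;;
esac
rm -f "$f"
```

mktemp and glibc's tempfile routines honour TMPDIR, but plenty of applications hard-code /tmp or use their own variable, so each one has to be checked individually.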
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org