On 02/14/2016 09:39 AM, Olav Reinert wrote:
> The interesting property to test is caching, meaning we expect repeated access of the same files to show increasing performance because the cache hit rate increases.
Yes, but is that reasonable? In the real world, I mean? Suppose we have a web site and the server is taking $LARGENUM hits per second. All manner of files will keep being accessed, not just the ones relating to the pages being hit (it gets more complicated if this is a dynamic site and pages come from databases): the .htaccess files, the CGI code (Perl, Python, Java, Ruby, whatever), the static graphics files for the eye candy. Over and over and over, the same files.

It strikes me that this is a better test situation than a rigged test. It's the kind of thing which, if the caching mechanism works, is likely to sell because it has demonstrable value. But it also has to compete with the system's own ability to cache open files via memory mapping and the page caching algorithms, which gets back to the issue of whether this is better served by more RAM. Somewhere, the capacity of the motherboard to support more RAM will run out. That's where running a test like this becomes valid.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
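PS: For what it's worth, a rough sketch of the kind of repeated-access test I mean could be as simple as the script below. The path, file set, and pass count are only placeholders; the point is that later passes should be faster than the first cold pass if some caching layer (the filesystem's own cache or the kernel page cache) is doing its job, and should stop improving once the working set no longer fits in RAM.

#!/usr/bin/env python3
# Rough sketch: time repeated reads over the same set of files.
# Later passes should be faster than the first cold pass while the
# working set fits in cache; the gain disappears once it does not.
import glob
import time

FILES = glob.glob("/srv/www/htdocs/**/*", recursive=True)  # placeholder path
PASSES = 5  # placeholder repeat count

for p in range(PASSES):
    start = time.monotonic()
    total = 0
    for name in FILES:
        try:
            with open(name, "rb") as f:
                total += len(f.read())
        except (IsADirectoryError, PermissionError):
            continue
    elapsed = time.monotonic() - start
    print(f"pass {p + 1}: read {total} bytes in {elapsed:.3f}s")

On a test box you can get a cold-cache baseline by writing 3 to /proc/sys/vm/drop_caches (as root) before the first pass, which is also a quick way to see how much of the "caching" is really just the kernel page cache.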