Mailinglist Archive: opensuse (908 mails)

Re: [opensuse] apache 2.4 performance issue / processwire.
On 06/10/2016 08:53 AM, Dave Howorth wrote:
On 2016-06-10 12:49, Anton Aylward wrote:
On 06/10/2016 05:36 AM, Dave Howorth wrote:

There's some difference between the production server and the test
server and development machine that is causing the problem (in the sense
of making bad behaviour unacceptable - lots of lstats taking a very long
time - apparently over 250 µs each). I'm just trying to think of things
that might affect the timing.

The normal difference between those and a development machine is the
multi-user load.

But we know that isn't the issue here!

Yes, that's my point, or part of it at least.

Since that isn't the point, the kernel *should* be caching inodes and
directory path fragments, and the kernel cache should retain entries for
longer than the time between the client calls to the Apache application,
even if, because HTTP is connectionless, there is a new instance of the
service every time and the PHP process and its own cache are thrown away
every time (I realize there are ways to avoid that).

Even if something else accesses the same user's files, the inodes will
still be cached.

So subsequent invocations should return from the directory and inode
cache very fast.

In fact, since the subsequent lstat() calls differ only in the last
segment, the file name, the cache should already contain the earlier
fragments.

Let me be a bit pedantic and explicit about what I mean when I say there
is something wrong with the way things are being cached.

Imagine this is C code (or even Perl code, come to that) rather than
PHP, and there is no caching in the application; we are only considering
the kernel cache.
What is happening is:

1. lstat("/var")
this is followed by
2. lstat("/var/www")

Well the DPATH and inode for "/var" should be in the kernel cache, so
that should not take long. We get one disk hit for #1, one disk hit for #2

3. lstat("/var/www/metacafe")

Same logic as above but now for the next pathname fragment; one disk hit
for #3, the rest come from the kernel cache.

4. lstat("/var/www/metacafe/file1")

Same logic. One more disk hit.

Now we access the same path, but for the next file:

5. lstat("/var")
served from kernel cache
6. lstat("/var/www")
served from kernel cache
7. lstat("/var/www/metacafe")
served from kernel cache
8. lstat("/var/www/metacafe/file2")
hit the disk for that

I've prefaced that by saying it is C code so as to eliminate the issue
of any caching in the application. I'm focusing on the kernel.
You can code that up and see the timing of 1..4 vs 5..8

Now let's put an interpreter above that (PHP, Perl, Ruby), write the
code and see it from the command line. I realize that PHP is more a web
scripting language than Perl or Ruby, but yes, a command-line version is
possible.
So what does it look like with the interpreter layer above?

IF the three interpreters show a similar overhead, then either all three
interpreters have innards that bugger around with their own idea of
'optimizing' system calls and caching outside of being explicitly told
to do so by the programmer, or the problem lies with the code being run
under Apache (or some other web server).

Hmm "some other web server"?
Have you tried it with nginx or thttpd or even lighttpd ???

A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?

To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
To contact the owner, e-mail: opensuse+owner@xxxxxxxxxxxx
