On 06/10/2016 05:36 AM, Dave Howorth wrote:
> There's some difference between the production server and the test server and development machine that is causing the problem (in the sense of making bad behaviour unacceptable - lots of lstats taking a very long time - apparently over 250 µs each). I'm just trying to think of things that might affect the timing.
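As a crude way to put a number on that: the per-call cost of lstat() is easy to measure from user space. A minimal sketch in Python (the path and iteration count are arbitrary, and the figure you get is machine- and cache-dependent):

```python
import os
import tempfile
import time

def avg_lstat_us(path, n=10000):
    """Average wall-clock cost of one os.lstat() call, in microseconds."""
    start = time.perf_counter()
    for _ in range(n):
        os.lstat(path)
    return (time.perf_counter() - start) / n * 1e6

# Time lstat() on a freshly created (and therefore cached) file.
with tempfile.NamedTemporaryFile() as f:
    print(f"avg lstat: {avg_lstat_us(f.name):.1f} us")
```

On a lightly loaded machine with a warm inode cache this typically comes out well under 250 µs; comparing warm against cold numbers is one way to see whether caching is the variable.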
The normal difference between those and a development machine is the multi-user load. And "multi" in a few other respects too! A production system *will* have many users and a variegated load. I've mentioned inode caching and pathname caching, which *NIX has always done well. In a heavily multi-user, multitasking environment such as a production web server at an ISP with many Apache 'virtual web hosts' there is going to be a great deal of churn in both of those.

<Sidebar>
I suspect that a major web site like Wikipedia and user-account ISPs like the one I use do load balancing in very different ways. For a start, Wikipedia-scale sites have different servers for the graphics & CSS & JavaScript from the mainline text. Or at least different "names". Those "names" might in themselves be balanced across a number of pieces of hardware with either round-robin DNS or a hardware load balancer in front. The storage for the text of the pages will probably be a database rather than files, which can more easily be shared and which can do a completely different type of caching from what we've discussed so far. How the IMG, CSS & JavaScript is stored & accessed also offers choices.

A multi-user ISP such as the one I use takes a different approach, and it's quite possible the one in question follows this model as well. My ISP loads up user accounts on a single machine, be they shared-vhost Apache services or actual virtual hosts, until either a space or load limit is reached; then they start on a new machine. Some services are networked, but all of a single user's files are on his 'home' machine. Any NFS sharing, or any multiple access of files other than the normal sharing of binaries with Linux, is for the system, not the user applications. As with normal Linux, each user owns his own files, and the login uses chroot or other means to make other users' files invisible to him. As a result, only the user (or sysadmins) has access to a user's files.
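The inode- and dentry-cache churn mentioned above can be provoked deliberately in a test environment by cycling many short-lived files through the caches. A minimal sketch in Python; the file count and names are arbitrary, and this is only a crude stand-in for real multi-user load:

```python
import os
import pathlib
import tempfile

def churn_caches(dirpath, count=5000):
    """Create, stat and unlink many files so their dentries and inodes
    cycle through the kernel caches, evicting older entries."""
    d = pathlib.Path(dirpath)
    for i in range(count):
        p = d / f"churn_{i:06d}"
        p.write_bytes(b"x")
        os.lstat(p)   # touch the inode cache
        p.unlink()

with tempfile.TemporaryDirectory() as d:
    churn_caches(d, count=1000)
```

Running something like this alongside a timing test may show whether cache eviction, rather than raw disk speed, is what pushes the lstat times up.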
Thus the whole issue of the inode in core being changed by some other user accessing the file makes no sense, because it isn't going to happen! The worst case is a hacker breaking in to the account and infecting the files; the marginal case is the files being 'accessed' by an automatic backup process. What *will* happen is that the in-core caches suffer churn simply because this is a multi-user system.
</sidebar>

Multi-user churn won't happen in a single-user development environment. It can be simulated in a test environment if and only if that is part of the defined test suite. While I've seen test suites that simulate 'load' by running processes, even ones that try to grab memory, I've not seen ones that deliberately attack the various SLABs and caches.

"SLABs"? Well yes: quite apart from the generic VFS caches, many file systems have their own inode and other SLAB caches. Run 'slabtop' or read the contents of /proc/slabinfo or 'man 5 slabinfo' and you'll see, for example, not only 'kernfs_node_cache', 'inode_cache' and 'proc_inode_cache' but also 'ext4_inode_cache' and, in my case, 'reiser_inode_cache' and 'btrfs_inode'. Nobody said this was simple! There's a LOT you can 'tune', or perhaps, 'upset'.

But testing is a strange art. I recall one case where I was testing a web application written by a 3rd party for a client. I just clicked on the blank areas of the screen repeatedly and the application went haywire! The developer's comment was, and I quote: "You're not supposed to do that". Well, if users only ever did what developers thought they should, and applications were installed and used in the kind of situations the developers developed them in, then we wouldn't be having this discussion!

-- 
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org