[opensuse-kernel] ext3 - ENOMEM on file write
Greetings,

Wondering if anyone here has some experience with file writes in the kernel (specifically ext2 and ext3). I have been trying to figure this out and have no idea how it is happening. We have a case where a customer of mine using OpenAFS has a system that is doing successive reads and writes to a file. Here is the specific code:

--snip--

First, we're given an inode 'ainode', which should be the correct inode for the file we're looking at. (If it were incorrect, we would have gotten an error much earlier.)

If we have iget, we call iget. The 2.6.16.60-* kernels lack iget, I believe, so instead we do:

    fid.i32.ino = ainode;
    fid.i32.gen = 0;
    dp = afs_cacheSBp->s_export_op->fh_to_dentry(afs_cacheSBp, &fid,
                                                 sizeof(fid), FILEID_INO32_GEN);
    filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);

(I'm not including error checking. afs_cacheSBp is the superblock for the cache filesystem, so ext2 or ext3. afs_cacheMnt is the vfsmount for the cache FS.)

Then, to write to the file we do basically the following (we're writing 'count' bytes from 'buf' to 'offset' in file 'filp'; I'm paraphrasing the code here, but I think I'm maintaining what it does):

    mm_segment_t _fs_space_decl;
    int code = 0;

    savelim = current->TASK_STRUCT_RLIM[RLIMIT_FSIZE].rlim_cur;
    current->TASK_STRUCT_RLIM[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
    _fs_space_decl = get_fs();
    set_fs(get_ds());

    if (filp->f_op->llseek) {
        if (filp->f_op->llseek(filp, offset, 0) != offset)
            return -1;
    } else {
        filp->f_pos = offset;
    }

    while (code == 0 && count > 0) {
        code = filp->f_op->write(filp, buf, count, &filp->f_pos);
        if (code < 0) {
            code = -code;
            break;
        } else if (code == 0) {
            code = EIO;
            break;
        }
        buf += code;
        count -= code;
        code = 0;
    }

    set_fs(_fs_space_decl);
    current->TASK_STRUCT_RLIM[RLIMIT_FSIZE].rlim_cur = savelim;
    return code;

It is here that we are getting -ENOMEM from filp->f_op->write.

Finally, to close a file, we normally do this:

    if (filp->f_dentry->d_inode) {
        filp_close(filp, NULL);
    }

--snip--

The file is not intended to be written to constantly, but in this case it probably is written to several times in succession (due to certain parameters being set a bit low, and the high load on these clients). I believe this function was called about 644203 times in the core I'm looking at, which means that file was written to at least around 644000 times... I'm assuming most of those came right after one another. However, there would almost always be several reads of the same file between successive writes (again, in an 'open(); read(); close();' fashion). But they are probably all happening very quickly; I assume the cache for the stuff in this file is thrashing.

How can we avoid this or make this much more streamlined? Any help would be appreciated.

Thanks,
Cameron
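[An aside on the "more streamlined" question above: one more conventional way to drive this write from kernel code on 2.6-era kernels is to call vfs_write() with a local loff_t instead of invoking filp->f_op->write() and poking filp->f_pos directly; vfs_write() also performs the permission and area checks that the direct f_op call skips. A minimal sketch, assuming the same 'filp', 'buf', 'offset', and 'count' as above; this is an illustrative rewrite, not the actual OpenAFS code, and the RLIMIT_FSIZE save/restore from the original would wrap it the same way:]

    /* Sketch: write 'count' bytes from the kernel buffer 'buf' at 'offset'. */
    mm_segment_t old_fs;
    loff_t pos = offset;
    ssize_t nbytes;
    int code = 0;

    old_fs = get_fs();
    set_fs(get_ds());   /* let a kernel buffer pass the user-copy checks */

    while (count > 0) {
        nbytes = vfs_write(filp, (const char __user *)buf, count, &pos);
        if (nbytes < 0) {
            code = -nbytes;      /* e.g. ENOMEM, as reported above */
            break;
        }
        if (nbytes == 0) {
            code = EIO;          /* no progress; give up */
            break;
        }
        buf += nbytes;
        count -= nbytes;
    }

    set_fs(old_fs);
    /* 'code' is 0 on success, a positive errno otherwise */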
Hello,

On Mon 03-05-10 13:12:40, Cameron Seader wrote:
> First, we're given an inode 'ainode', which should be the correct inode for the file we're looking at. (If it were incorrect, we would have gotten an error much earlier.)
> If we have iget, we call iget. The 2.6.16.60-* kernels lack iget, I believe, so instead we do:

So we are talking about SLE10-based kernels, right? In fact these kernels do have iget(), but I guess you do not want to do all the writing by hand and want to use the standard write path, and thus you need an open file descriptor, for which you need a dentry...
> fid.i32.ino = ainode;
> fid.i32.gen = 0;
> dp = afs_cacheSBp->s_export_op->fh_to_dentry(afs_cacheSBp, &fid, sizeof(fid), FILEID_INO32_GEN);
> filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);

Hmm, so which kernel are we speaking about? fh_to_dentry was introduced only in 2.6.24...
<snip>
> The file is not intended to be written to constantly, but in this case it probably is written to several times in succession (due to certain parameters being set a bit low, and the high load on these clients).
> I believe this function was called about 644203 times in the core I'm looking at, which means that file was written to at least around 644000 times... I'm assuming most of those came right after one another.

OK, so these 644000 writes succeed and then you start getting ENOMEM? Does the machine have enough free memory? If not, the output from /proc/meminfo and /proc/slabinfo could help tell you where the memory has gone.
> However, there would almost always be several reads of the same file between successive writes (again, in an 'open(); read(); close();' fashion). But they are probably all happening very quickly; I assume the cache for the stuff in this file is thrashing.

Well, the cache could be thrashing, but you'll still get ENOMEM only if the kernel cannot find enough memory to pull in a page you are writing to. And that should not happen unless the machine has real problems. My personal tip would be that your code leaks some memory (or a reference or so) and thus the kernel really does run out of memory after enough reading and writing...
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
On 5/3/2010 at 01:47 PM, in message <20100503194736.GH3470@quack.suse.cz>, Jan Kara <jack@suse.cz> wrote:
> Hello,
> On Mon 03-05-10 13:12:40, Cameron Seader wrote:
> > First, we're given an inode 'ainode', which should be the correct inode for the file we're looking at. (If it were incorrect, we would have gotten an error much earlier.)
> > If we have iget, we call iget. The 2.6.16.60-* kernels lack iget, I believe, so instead we do:
>
> So we are talking about SLE10-based kernels, right? In fact these kernels do have iget(), but I guess you do not want to do all the writing by hand and want to use the standard write path, and thus you need an open file descriptor, for which you need a dentry...
No, this was my mistake. I thought the lack of an 'iget' symbol in the core meant that it wasn't available, but iget itself is just a static inline function, so it wouldn't be in there. We use iget if it's available, so we are using iget here.
> > fid.i32.ino = ainode;
> > fid.i32.gen = 0;
> > dp = afs_cacheSBp->s_export_op->fh_to_dentry(afs_cacheSBp, &fid, sizeof(fid), FILEID_INO32_GEN);
> > filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);
>
> Hmm, so which kernel are we speaking about? fh_to_dentry was introduced only in 2.6.24...
Yes, sorry, that's my mistake. With iget, we actually call:

    tip = iget(afs_cacheSBp, (u_long) ainode);
    dp = d_alloc_anon(tip);
    tip->i_flags |= MS_NOATIME;
    filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);
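[Given Jan's suspicion below of a leaked reference, it may help to spell out which references each of these calls consumes. The following is a sketch of the same sequence with the error-path cleanup I believe the 2.6-era semantics require; it is my annotation, not the actual OpenAFS code, and the error values are illustrative:]

    struct inode *tip;
    struct dentry *dp;
    struct file *filp;

    tip = iget(afs_cacheSBp, (u_long) ainode);
    if (!tip)
        return -ENOENT;          /* illustrative error value */

    dp = d_alloc_anon(tip);
    if (!dp) {
        iput(tip);               /* on failure we still own the inode reference */
        return -ENOMEM;
    }
    /* on success, d_alloc_anon has taken over the inode reference */

    tip->i_flags |= MS_NOATIME;

    filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);
    if (IS_ERR(filp))
        return PTR_ERR(filp);    /* dentry_open drops the dentry and mnt refs itself */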
<snip>
> > The file is not intended to be written to constantly, but in this case it probably is written to several times in succession (due to certain parameters being set a bit low, and the high load on these clients).
> > I believe this function was called about 644203 times in the core I'm looking at, which means that file was written to at least around 644000 times... I'm assuming most of those came right after one another.
>
> OK, so these 644000 writes succeed and then you start getting ENOMEM? Does the machine have enough free memory? If not, the output from /proc/meminfo and /proc/slabinfo could help tell you where the memory has gone.
> > However, there would almost always be several reads of the same file between successive writes (again, in an 'open(); read(); close();' fashion). But they are probably all happening very quickly; I assume the cache for the stuff in this file is thrashing.
>
> Well, the cache could be thrashing, but you'll still get ENOMEM only if the kernel cannot find enough memory to pull in a page you are writing to. And that should not happen unless the machine has real problems. My personal tip would be that your code leaks some memory (or a reference or so) and thus the kernel really does run out of memory after enough reading and writing...
To be clear, I mean the OpenAFS cache is thrashing, not kernel memory caches et al... I just meant to say that this particular file is getting written to and read from a lot.

Here is the output from kmem -i:

crash> kmem -i
                 PAGES        TOTAL      PERCENTAGE
    TOTAL MEM  4089940      15.6 GB         ----
         FREE  1587155       6.1 GB    38% of TOTAL MEM
         USED  2502785       9.5 GB    61% of TOTAL MEM
       SHARED  1591925       6.1 GB    38% of TOTAL MEM
      BUFFERS    34087     133.2 MB     0% of TOTAL MEM
       CACHED  1779819       6.8 GB    43% of TOTAL MEM
         SLAB   166166     649.1 MB     4% of TOTAL MEM

   TOTAL HIGH        0            0     0% of TOTAL MEM
    FREE HIGH        0            0     0% of TOTAL HIGH
    TOTAL LOW  4089940      15.6 GB   100% of TOTAL MEM
     FREE LOW  1587155       6.1 GB    38% of TOTAL LOW

   TOTAL SWAP  8389936        32 GB         ----
    SWAP USED      382       1.5 MB     0% of TOTAL SWAP
    SWAP FREE  8389554        32 GB    99% of TOTAL SWAP

Seems like we have enough memory. Do you know why we could be getting an ENOMEM at all? Is there anything in an ext2/3 write that could require allocating a lot of memory? Do you know why an ENOMEM could be generated with this much free memory available? I don't know if there's some limit on FS/VFS-related memory that it's possible to hit, or perhaps a certain type of memory is needed that is not available... etc.

The way we get the dentry for the file in question is via s_export_op->fh_to_dentry(), we get the file via dentry_open(), and we write to it via filp->f_op->write(). Is there anything 'bad' or unsupported that we're doing with that sequence of calls that could contribute to this?

Thanks,
Cameron
On Mon 03-05-10 14:23:50, Cameron Seader wrote:
> On 5/3/2010 at 01:47 PM, in message <20100503194736.GH3470@quack.suse.cz>, Jan Kara <jack@suse.cz> wrote:
> > Hello,
> > On Mon 03-05-10 13:12:40, Cameron Seader wrote:
> > > First, we're given an inode 'ainode', which should be the correct inode for the file we're looking at. (If it were incorrect, we would have gotten an error much earlier.)
> > > If we have iget, we call iget. The 2.6.16.60-* kernels lack iget, I believe, so instead we do:
> >
> > So we are talking about SLE10-based kernels, right? In fact these kernels do have iget(), but I guess you do not want to do all the writing by hand and want to use the standard write path, and thus you need an open file descriptor, for which you need a dentry...
> No, this was my mistake. I thought the lack of an 'iget' symbol in the core meant that it wasn't available, but iget itself is just a static inline function, so it wouldn't be in there. We use iget if it's available, so we are using iget here.
> > > fid.i32.ino = ainode;
> > > fid.i32.gen = 0;
> > > dp = afs_cacheSBp->s_export_op->fh_to_dentry(afs_cacheSBp, &fid, sizeof(fid), FILEID_INO32_GEN);
> > > filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);
> >
> > Hmm, so which kernel are we speaking about? fh_to_dentry was introduced only in 2.6.24...
> Yes, sorry, that's my mistake. With iget, we actually call:
> tip = iget(afs_cacheSBp, (u_long) ainode);
> dp = d_alloc_anon(tip);
> tip->i_flags |= MS_NOATIME;
> filp = dentry_open(dp, mntget(afs_cacheMnt), O_RDWR);

OK.
> > > However, there would almost always be several reads of the same file between successive writes (again, in an 'open(); read(); close();' fashion). But they are probably all happening very quickly; I assume the cache for the stuff in this file is thrashing.
> >
> > Well, the cache could be thrashing, but you'll still get ENOMEM only if the kernel cannot find enough memory to pull in a page you are writing to. And that should not happen unless the machine has real problems. My personal tip would be that your code leaks some memory (or a reference or so) and thus the kernel really does run out of memory after enough reading and writing...
> To be clear, I mean the OpenAFS cache is thrashing, not kernel memory caches et al... I just meant to say that this particular file is getting written to and read from a lot.
> Here is the output from kmem -i:
> crash> kmem -i
>                  PAGES        TOTAL      PERCENTAGE
>     TOTAL MEM  4089940      15.6 GB         ----
>          FREE  1587155       6.1 GB    38% of TOTAL MEM

Indeed a lot of free memory...
> Seems like we have enough memory. Do you know why we could be getting an ENOMEM at all? Is there anything in an ext2/3 write that could require allocating a lot of memory?

On the standard write path there's not too much ext2/3-specific code, and I don't see a big potential for returning ENOMEM, especially with this much free memory. Looking at the generic code, generic_file_buffered_write has:

    if (unlikely(sigismember(&current->pending.signal, SIGKILL))) {
        /*
         * Must not hang almost forever in D state in
         * presence of sigkill and lots of ram/swap
         * (think during OOM).
         */
        status = -ENOMEM;
        break;
    }

So maybe this could be the path we are taking?
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
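[If that is indeed the path being taken, the caller can at least distinguish this case from a genuine allocation failure by checking for the pending SIGKILL itself, the same test generic_file_buffered_write makes. A hedged sketch of how the write loop from the first mail might do that; this is a suggestion, not code from the thread, and mapping the condition to EINTR is just one possible policy:]

    while (code == 0 && count > 0) {
        code = filp->f_op->write(filp, buf, count, &filp->f_pos);
        if (code < 0) {
            if (code == -ENOMEM &&
                sigismember(&current->pending.signal, SIGKILL))
                /* Not a real allocation failure: the generic write path
                 * bails out with -ENOMEM when SIGKILL is pending. */
                code = EINTR;
            else
                code = -code;
            break;
        } else if (code == 0) {
            code = EIO;
            break;
        }
        buf += code;
        count -= code;
        code = 0;
    }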
On 5/3/2010 at 03:05 PM, in message <20100503210507.GI3470@quack.suse.cz>, Jan Kara <jack@suse.cz> wrote:
> On Mon 03-05-10 14:23:50, Cameron Seader wrote:
<snip>
> > Seems like we have enough memory. Do you know why we could be getting an ENOMEM at all? Is there anything in an ext2/3 write that could require allocating a lot of memory?
>
> On the standard write path there's not too much ext2/3-specific code, and I don't see a big potential for returning ENOMEM, especially with this much free memory. Looking at the generic code, generic_file_buffered_write has:
>
>     if (unlikely(sigismember(&current->pending.signal, SIGKILL))) {
>         /*
>          * Must not hang almost forever in D state in
>          * presence of sigkill and lots of ram/swap
>          * (think during OOM).
>          */
>         status = -ENOMEM;
>         break;
>     }
>
> So maybe this could be the path we are taking?
A look at the core makes it look very much to me like that is what it is (hooray). Can you confirm the following?

crash> print ((struct task_struct*)0xffff8103432a2080)->pending.signal
$5 = {
  sig = {256}
}

SIGKILL is 9, 9-1 == 8, and (1 & (256 >> 8)) == 1. So, if I'm reading sigismember correctly, yes, we have a SIGKILL pending. A little C test program confirms it, but I'd like to get confirmation from someone who's actually used to the Linux kernel code :)

I don't suppose there's any way to tell whether this was caused by the OOM killer, is there? Any structures or something in the core I can analyze to see if it's been activated for some reason?

Thanks,
Cameron
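[For reference, a tiny userspace program along the lines of the test mentioned above might look like this; it is a reconstruction, assuming the test simply mirrors the kernel's sigismember() bit check on the sig word pulled from the core:]

    #include <stdio.h>

    #define SIGKILL 9

    int main(void)
    {
        unsigned long sig = 256;          /* pending.signal.sig[0] from the core */
        unsigned long bit = SIGKILL - 1;  /* signal numbers start at 1 */

        /* mirrors sigismember() for signals in the first word */
        printf("SIGKILL pending: %lu\n", 1UL & (sig >> bit));
        return 0;
    }

[For sig = 256 this prints "SIGKILL pending: 1", matching the hand calculation above.]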
On Mon 03-05-10 16:05:35, Cameron Seader wrote:
> > On the standard write path there's not too much ext2/3-specific code, and I don't see a big potential for returning ENOMEM, especially with this much free memory. Looking at the generic code, generic_file_buffered_write has:
> >
> >     if (unlikely(sigismember(&current->pending.signal, SIGKILL))) {
> >         /*
> >          * Must not hang almost forever in D state in
> >          * presence of sigkill and lots of ram/swap
> >          * (think during OOM).
> >          */
> >         status = -ENOMEM;
> >         break;
> >     }
> >
> > So maybe this could be the path we are taking?
> A look at the core makes it look very much to me like that is what it is (hooray). Can you confirm the following?
> crash> print ((struct task_struct*)0xffff8103432a2080)->pending.signal
> $5 = {
>   sig = {256}
> }
> SIGKILL is 9, 9-1 == 8, and (1 & (256 >> 8)) == 1. So, if I'm reading sigismember correctly, yes, we have a SIGKILL pending. A little C test program confirms it, but I'd like to get confirmation from someone who's actually used to the Linux kernel code :)

Yes, indeed it seems the process has SIGKILL pending.
> I don't suppose there's any way to tell whether this was caused by the OOM killer, is there? Any structures or something in the core I can analyze to see if it's been activated for some reason?

You would have messages about the OOM kill in the kernel log. So unless you see messages like "Out of Memory: Kill process ..." in the kernel log, it was not the OOM killer that sent the signal. Given the amount of free memory, I actually seriously doubt it was the OOM killer, but check the log to be sure.
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
More information from kmem below, attached as well to preserve columns.

-Cameron
On 5/3/2010 at 01:47 PM, in message <20100503194736.GH3470@quack.suse.cz>, Jan Kara <jack@suse.cz> wrote:
<snip>
crash> kmem -f
NODE
  0
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
 0 DMA 4096 3029 ffff810450b00000 0 0
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
 0 4k ffff81000001b448 3 3
 1 8k ffff81000001b460 1 2
 2 16k ffff81000001b478 2 8
 3 32k ffff81000001b490 5 40
 4 64k ffff81000001b4a8 2 32
 5 128k ffff81000001b4c0 2 64
 6 256k ffff81000001b4d8 1 64
 7 512k ffff81000001b4f0 0 0
 8 1024k ffff81000001b508 1 256
 9 2048k ffff81000001b520 1 512
 10 4096k ffff81000001b538 2 2048
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
 1 DMA32 1044480 586009 ffff810450b38000 1000000 4096
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
 0 4k ffff81000001bac8 8147 8147
 1 8k ffff81000001bae0 8005 16010
 2 16k ffff81000001baf8 6839 27356
 3 32k ffff81000001bb10 5464 43712
 4 64k ffff81000001bb28 4128 66048
 5 128k ffff81000001bb40 2979 95328
 6 256k ffff81000001bb58 1559 99776
 7 512k ffff81000001bb70 818 104704
 8 1024k ffff81000001bb88 286 73216
 9 2048k ffff81000001bba0 63 32256
 10 4096k ffff81000001bbb8 19 19456
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
 2 Normal 3538944 998117 ffff810454300000 100000000 1048576
AREA SIZE FREE_AREA_STRUCT BLOCKS PAGES
 0 4k ffff81000001c148 100359 100359
 1 8k ffff81000001c160 159675 319350
 2 16k ffff81000001c178 68930 275720
 3 32k ffff81000001c190 30774 246192
 4 64k ffff81000001c1a8 3179 50864
 5 128k ffff81000001c1c0 44 1408
 6 256k ffff81000001c1d8 2 128
 7 512k ffff81000001c1f0 0 0
 8 1024k ffff81000001c208 0 0
 9 2048k ffff81000001c220 0 0
 10 4096k ffff81000001c238 4 4096
ZONE NAME SIZE FREE MEM_MAP START_PADDR START_MAPNR
 3 HighMem 0 0 0 0 0

nr_free_pages: 1587155 (verified)

crash> kmem -s
CACHE NAME OBJSIZE ALLOCATED TOTAL SLABS SSIZE
ffff81042bcfa840 nfs_direct_cache 184 0 0 0 4k
ffff81042bd0b180 nfs_write_data 768 36 40 8 4k
ffff81042bd0b800 nfs_read_data 768 32 35 7 4k
ffff81042bd0c140 nfs_inode_cache 1024 160639 160644 40161 4k
ffff81042bd0c7c0 nfs_page 128 31 180 6 4k
ffff81042c3e1100 rpc_buffers 2048 8 8 4 4k
ffff81042c3e1780 rpc_tasks 384 8 10 1 4k
ffff81044ba040c0 rpc_inode_cache 832 52 52 13 4k
ffff81044ba04740 afs_inode_cache 896 218657 218740 54685 4k
ffff81044b887080 fib6_nodes 64 5 59 1 4k
ffff81044b887700 ip6_dst_cache 320 4 12 1 4k
ffff81044b894040 ndisc_cache 256 1 15 1 4k
ffff81044b8946c0 RAWv6 896 11 12 3 4k
ffff81044b8962c0 UDPv6 896 3 8 2 4k
ffff81044b896940 tw_sock_TCPv6 192 0 0 0 4k
ffff810450590280 request_sock_TCPv6 128 0 0 0 4k
ffff810450590900 TCPv6 1664 1 4 1 8k
ffff81044ced6240 ip_fib_alias 64 11 59 1 4k
ffff81044ced68c0 ip_fib_hash 64 11 59 1 4k
ffff81044c86a200 dm_events 40 16 92 1 4k
ffff81044c86a880 dm_tio 24 0 0 0 4k
ffff81044ed8e180 dm_io 40 0 0 0 4k
ffff81044fc121c0 uhci_urb_priv 80 0 0 0 4k
ffff81044fc12840 ext3_inode_cache 800 90438 90730 18146 4k
ffff81044e828100 ext3_xattr 88 0 0 0 4k
ffff81044ed8e800 journal_handle 24 0 144 1 4k
ffff81044ee25140 journal_head 96 32 200 5 4k
ffff81044ee257c0 revoke_table 16 10 202 1 4k
ffff810450381080 revoke_record 32 0 0 0 4k
ffff81044e828780 scsi_cmd_cache 384 6 20 2 4k
ffff81044edb1740 sgpool-256 8192 32 32 32 8k
ffff81044edc2080 sgpool-128 4096 32 32 32 4k
ffff81044edc2700 sgpool-64 2048 32 32 16 4k
ffff81044ee63040 sgpool-32 1024 32 32 8 4k
ffff81044ee636c0 sgpool-16 512 32 32 4 4k
ffff81044ee672c0 sgpool-8 256 32 45 3 4k
ffff81044ee67940 scsi_io_context 112 0 0 0 4k
ffff81044fe15280 UNIX 704 13 55 5 8k
ffff81044fe15900 ip_mrt_cache 128 0 0 0 4k
ffff81044f692240 tcp_bind_bucket 32 51 224 2 4k
ffff81044f6928c0 inet_peer_cache 128 12 30 1 4k
ffff81044f6fb200 secpath_cache 192 0 0 0 4k
ffff81044f6fb880 xfrm_dst_cache 384 0 0 0 4k
ffff81044f6b21c0 ip_dst_cache 384 93 110 11 4k
ffff81044f6b2840 arp_cache 256 2 15 1 4k
ffff81044f69b180 RAW 768 9 10 2 4k
ffff81044f69b800 UDP 768 14 25 5 4k
ffff81044f68b140 tw_sock_TCP 192 6 40 2 4k
ffff81044f68b7c0 request_sock_TCP 128 0 0 0 4k
ffff81044fe55100 TCP 1536 47 70 14 8k
ffff81044fe55780 flow_cache 128 0 0 0 4k
ffff81044fdd80c0 msi_cache 5760 11 11 11 8k
ffff81044fdd8740 cfq_ioc_pool 168 36 115 5 4k
ffff81044fdd3080 cfq_pool 160 34 96 4 4k
ffff81044fdd3700 crq_pool 88 4 44 1 4k
ffff81044fdcd040 deadline_drq 96 0 0 0 4k
ffff81044fdcd6c0 as_arq 112 0 0 0 4k
ffff8104503dc2c0 mqueue_inode_cache 896 1 4 1 4k
ffff8104503dc940 isofs_inode_cache 640 0 0 0 4k
ffff81044fd17280 minix_inode_cache 656 0 0 0 4k
ffff81044fd17900 hugetlbfs_inode_cache 608 1 6 1 4k
ffff81044fd14240 ext2_inode_cache 752 0 0 0 4k
ffff81044fd148c0 ext2_xattr 88 0 0 0 4k
ffff81044fd53200 dnotify_cache 40 0 0 0 4k
ffff81044fd53880 dquot 256 0 0 0 4k
ffff81044fd591c0 eventpoll_pwq 72 1 53 1 4k
ffff81044fd59840 eventpoll_epi 192 1 20 1 4k
ffff81044fd6b180 inotify_event_cache 40 0 0 0 4k
ffff81044fd6b800 inotify_watch_cache 72 1 53 1 4k
ffff81044fd72140 kioctx 384 0 0 0 4k
ffff81044fd727c0 kiocb 256 0 0 0 4k
ffff81044fd78100 fasync_cache 24 0 0 0 4k
ffff81044fd78780 shmem_inode_cache 816 475 495 99 4k
ffff8104502a70c0 posix_timers_cache 152 0 0 0 4k
ffff8104502a7740 uid_cache 128 7 30 1 4k
ffff810450381700 blkdev_ioc 56 36 134 2 4k
ffff81045037e040 blkdev_queue 1608 26 30 6 8k
ffff81045037e6c0 blkdev_requests 288 4 26 2 4k
ffff81044f40d2c0 biovec-(256) 4096 256 256 256 4k
ffff81044f40d940 biovec-128 2048 256 256 128 4k
ffff81044f409280 biovec-64 1024 256 256 64 4k
ffff81044f409900 biovec-16 256 256 270 18 4k
ffff8104502f6240 biovec-4 64 256 295 5 4k
ffff8104502f68c0 biovec-1 16 256 404 2 4k
ffff81044f4aa200 bio 128 256 270 9 4k
ffff81044f4aa880 sock_inode_cache 704 107 150 30 4k
ffff81045025e1c0 skbuff_fclone_cache 512 3 21 3 4k
ffff81045025e840 skbuff_head_cache 256 278 570 38 4k
ffff8104501dd180 file_lock_cache 176 4 44 2 4k
ffff8104501dd800 acpi_operand 72 1403 1484 28 4k
ffff8104501da140 acpi_parse_ext 64 0 0 0 4k
ffff8104501da7c0 acpi_parse 40 0 0 0 4k
ffff810450a5d100 acpi_state 88 0 0 0 4k
ffff810450a5d780 delayacct_cache 64 237 472 8 4k
ffff810450a530c0 taskstats_cache 296 5 39 3 4k
ffff810450a53740 proc_inode_cache 624 1137 1254 209 4k
ffff810450a50080 sigqueue 160 2 24 1 4k
ffff810450a50700 radix_tree_node 536 56468 65996 9428 4k
ffff810450a44040 bdev_cache 832 26 32 8 4k
ffff810450a446c0 sysfs_dir_cache 80 3892 3936 82 4k
ffff810450a412c0 mnt_cache 256 9551 9600 640 4k
ffff810450a41940 inode_cache 592 10489 10554 1759 4k
ffff810450a3d280 dentry_cache 208 518661 540417 28443 4k
ffff810450a3d900 filp 256 768 2355 157 4k
ffff810450a38240 names_cache 4096 5 11 11 4k
ffff810450a388c0 key_jar 192 14 40 2 4k
ffff810450a35200 idr_layer_cache 528 238 238 34 4k
ffff810450a35880 buffer_head 88 86231 96580 2195 4k
ffff810450a331c0 mm_struct 832 62 126 14 8k
ffff810450a33840 vm_area_struct 184 2248 3696 176 4k
ffff810450a2f180 fs_cache 64 63 236 4 4k
ffff810450a2f800 files_cache 896 64 96 24 4k
ffff810450a2e140 signal_cache 768 131 160 32 4k
ffff810450a2e7c0 sighand_cache 2112 130 165 55 8k
ffff810450a2b100 task_struct 1936 236 264 132 4k
ffff810450a2b780 anon_vma 24 800 1440 10 4k
ffff810450a250c0 shared_policy_node 56 0 0 0 4k
ffff810450a25740 numa_policy 24 63 144 1 4k
ffff810450a0e700 size-131072(DMA) 131072 0 0 0 128k
ffff810450a0e080 size-131072 131072 0 0 0 128k
ffff810450a0d6c0 size-65536(DMA) 65536 0 0 0 64k
ffff810450a0d040 size-65536 65536 0 0 0 64k
ffff810450a0c940 size-32768(DMA) 32768 0 0 0 32k
ffff810450a0c2c0 size-32768 32768 5 5 5 32k
ffff810450a0b900 size-16384(DMA) 16384 0 0 0 16k
ffff810450a0b280 size-16384 16384 7 7 7 16k
ffff810450a0a8c0 size-8192(DMA) 8192 0 0 0 8k
ffff810450a0a240 size-8192 8192 47 47 47 8k
ffff810450a09880 size-4096(DMA) 4096 0 0 0 4k
ffff810450a09200 size-4096 4096 170 172 172 4k
ffff810450a08840 size-2048(DMA) 2048 0 0 0 4k
ffff810450a081c0 size-2048 2048 501 528 264 4k
ffff810450a07800 size-1024(DMA) 1024 0 0 0 4k
ffff810450a07180 size-1024 1024 10163 10280 2570 4k
ffff810450a067c0 size-512(DMA) 512 0 0 0 4k
ffff810450a06140 size-512 512 3078 3168 396 4k
ffff810450a05780 size-256(DMA) 256 0 0 0 4k
ffff810450a05100 size-256 256 10835 10950 730 4k
ffff810450a04740 size-128(DMA) 128 0 0 0 4k
ffff810450a040c0 size-64(DMA) 64 0 0 0 4k
ffff810450a02700 size-64 64 133127 144314 2446 4k
ffff810450a02080 size-32(DMA) 32 0 0 0 4k
ffff810450a006c0 size-128 128 50244 53250 1775 4k
ffff810450a00040 size-32 32 10872 11536 103 4k
ffffffff80368ca0 kmem_cache 1664 144 146 73 4k