http://bugzilla.suse.com/show_bug.cgi?id=998850
http://bugzilla.suse.com/show_bug.cgi?id=998850#c18
--- Comment #18 from Michal Hocko ---
(In reply to Peter Sütterlin from comment #16)
> [...]
> It is however not doing any memory allocation by itself (at least grepping
> for alloc does not give any result).
Allocations done in userspace are not that interesting because those can be
reclaimed by the kernel. Slab allocations are done by the kernel and their
reclaimability is more complicated. Userspace might trigger those allocations
indirectly, though. Now the question is whether there is an unexpected usage
pattern which leads to a memory leak, or whether userspace just pins that
memory in some way.
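A minimal way to tell the two apart (my addition, not from the thread) is to sample the kernel's slab footprint from /proc/meminfo over time while the workload stays constant; SUnreclaim steadily growing points at kernel-side objects being leaked or pinned rather than at reclaimable caches:

```shell
# Sample the slab counters a few times. SReclaimable is cache the kernel
# can drop under pressure; SUnreclaim is slab memory it cannot reclaim.
for i in 1 2 3; do
    grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
    sleep 1
done
```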
(In reply to Peter Sütterlin from comment #17)
> Created attachment 692785 [details]
> stacktrace with gkrellm running
We can get a list of syscalls quite easily from the traces:
$ zgrep -i "=> sys" stacktrace.gz | sort | uniq -c
      => SyS_access
8 => syscall_return_slowpath
7 => SYSC_connect
2 => SyS_chdir
1 => SyS_chmod
1 => SyS_chown
2 => SYSC_newfstatat
13071 => SYSC_newlstat
23795 => SYSC_newstat
490 => SYSC_statfs
61 => SyS_execve
6 => SyS_fsync
34 => SyS_getcwd
65 => SyS_inotify_add_watch
1 => SyS_mkdir
40 => SyS_readlink
2097 => SyS_readlinkat
2 => SyS_rename
Nothing really surprising there, mostly VFS ones. Just curious: does echo 3 >
/proc/sys/vm/drop_caches make any difference?
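The suggested experiment can be sketched as follows (needs root; this is my illustration of the idea, not a command from the thread). If the 4k objects are merely reclaimable caches, SReclaimable shrinks after the drop; if they are pinned or leaked, SUnreclaim barely moves:

```shell
# Compare slab counters before and after dropping clean caches.
grep -E '^(SReclaimable|SUnreclaim):' /proc/meminfo
sync                                # write back dirty data first
echo 3 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries+inodes, 3=both
grep -E '^(SReclaimable|SUnreclaim):' /proc/meminfo
```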
We can also check who is calling the allocator; there we get basically the
same picture as previously:
$ zgrep -A 1 "=> kmem_cache_alloc" stacktrace.gz | grep -v
"kmem_cache_alloc\|--" | sort | uniq -c
53432 => getname_flags
42 => getname_kernel
164 => mempool_alloc
34 => SyS_getcwd
Most callers should go via the names_cachep cache. The mempool_alloc users are:
143 => bio_alloc_bioset
113 => btrfs_bio_alloc
30 => xfs_add_to_ioend
13 => __sg_alloc_table
      sg_alloc_table_chained
Nothing really that surprising either.
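One way to confirm where those 4k objects live (a sketch of mine, assuming root to read /proc/slabinfo): getname_flags() allocates from names_cachep, which appears as names_cache in slabinfo and, depending on config, may be merged into the generic kmalloc-4096 cache:

```shell
# Watch the caches the traces point at for unbounded growth.
grep -E '^(names_cache|kmalloc-4096) ' /proc/slabinfo
```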
That being said, I do not see anything that could massively leak the generic
4k cache.
--
You are receiving this mail because:
You are on the CC list for the bug.