Comment # 4 on bug 1204380 from
(In reply to Jürgen Groß from comment #3)
> Interpreting the allocation data is not really easy, as the resulting output
> is a 5.6GB sized file. And using page_owner_sort doesn't help me a lot,
> probably because I don't know the proper parameters to extract the needed
> data.

By default, page_owner_sort should group identical kinds of allocations, report how many occurrences there were as "X times", and sort by X, so the most prominent ones come first. Attaching, say, the first 10k lines of the sorted output should therefore be enough to find the culprit.
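Conceptually, the default grouping can be sketched like this (a toy Python illustration, not the real C tool; the sample dump is heavily simplified compared to a real page_owner file):

```python
from collections import Counter

def sort_by_times(dump: str) -> list:
    """Group blank-line-separated blocks and sort most-frequent first."""
    blocks = [b.strip() for b in dump.split("\n\n") if b.strip()]
    counts = Counter(blocks)
    # Counter.most_common() yields (block, count); flip to (count, block)
    # to mirror the "X times" prefix of the sorted output.
    return [(count, block) for block, count in counts.most_common()]

sample = """\
Page allocated via order 0
 alloc_pages
 do_anonymous_page

Page allocated via order 0
 alloc_pages
 do_anonymous_page

Page allocated via order 2
 __kmalloc
"""

for times, block in sort_by_times(sample):
    print(f"{times} times:\n{block}\n")
```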

> Am I right that entries showing a timestamp for "free", i.e. not containing
> "free_ts 0 ns", relate to memory having been freed again? I have my doubts,
> as summing up the memory sizes of those entries will result in only about
> 1GB of memory being used, which is clearly contradicting the shared memory
> size shown.

It's the timestamp of the last free, which means the page could have been allocated again afterwards, but the old timestamp and info stay. You'd basically need to compare whether the allocation timestamp is newer. The "-f" parameter does that and should count only pages that are currently allocated, not freed. Note that specifying just '-f' seems to drop the otherwise implied default '-t', so you have to pass it too; see below.
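The comparison that "-f" performs can be illustrated with a small sketch (assumptions: the "ts ... ns" / "free_ts ... ns" fields as they appear in the page_owner dump; the sample blocks are heavily simplified):

```python
import re

TS_RE = re.compile(r"\bts (\d+) ns")
FREE_TS_RE = re.compile(r"\bfree_ts (\d+) ns")

def currently_allocated(block: str) -> bool:
    """True if the page was allocated after it was last freed."""
    ts = TS_RE.search(block)
    free_ts = FREE_TS_RE.search(block)
    if not ts:
        return False
    if not free_ts:
        return True  # no free record at all
    # "free_ts 0 ns" (never freed) also ends up here and counts as live.
    return int(ts.group(1)) > int(free_ts.group(1))

sample_blocks = [
    "Page allocated via order 0, ts 500 ns, free_ts 0 ns\n alloc_pages",
    "Page allocated via order 0, ts 100 ns, free_ts 400 ns\n alloc_pages",
    "Page allocated via order 1, ts 900 ns, free_ts 300 ns\n __kmalloc",
]

live = [b for b in sample_blocks if currently_allocated(b)]
print(len(live))
```

Here the second block was freed after its recorded allocation, so only the other two count as currently allocated.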

> Please educate me how to use the gathered data, or where to put the file for
> your analysis.

I'd run:
./page_owner_sort -tf page_owner_full.txt page_owner_sorted.txt

and attach the first 10k lines of page_owner_sorted.txt.
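Trimming the sorted file down to an attachable excerpt is just a head invocation (file names taken from the command above; the seq line only fabricates a stand-in file so the snippet runs on its own):

```shell
# Stand-in for the real output of page_owner_sort (assumption for demo).
seq 1 20000 > page_owner_sorted.txt
# Keep the first 10k lines for attaching to the bug.
head -n 10000 page_owner_sorted.txt > page_owner_top10k.txt
wc -l < page_owner_top10k.txt
```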

Thanks.
