Bug ID 1227223
Summary podman stats doesn't show the MEM USAGE
Classification openSUSE
Product openSUSE Distribution
Version Leap 15.6
Hardware Other
OS Other
Status NEW
Severity Normal
Priority P5 - None
Component Containers
Assignee containers-bugowner@suse.de
Reporter 5p7u5x61q@mozmail.com
QA Contact qa-bugs@suse.de
Target Milestone ---
Found By ---
Blocker ---

When running rootless, podman stats warns that the containers' memory.stat files cannot be found and reports 0B for MEM USAGE:

~> podman stats --no-stream --no-reset
WARN[0000] Failed to retrieve cgroup stats: open /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-238c8283a3a3101d8b5928256060a4db4f8a8d6c94ef286d7c1d009f8da53d61.scope/memory.stat: no such file or directory
WARN[0000] Failed to retrieve cgroup stats: open /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-6ed6875d0edf20661c5e993b2729ac16040aaa41e24990ed672db3a4bfa600e0.scope/memory.stat: no such file or directory
ID            NAME        CPU %       MEM USAGE / LIMIT  MEM %       NET IO       BLOCK IO    PIDS        CPU TIME    AVG CPU %
238c8283a3a3  flaskapp    0.11%       0B / 1.942GB       0.00%       110B / 430B  0B / 0B     1           225.755ms   0.11%
6ed6875d0edf  tumbleweed  0.92%       0B / 1.942GB       0.00%       110B / 430B  0B / 0B     1           38.935ms    0.92%
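
A check that might be relevant here (this is my assumption, not something I have confirmed) is whether the memory controller is delegated to the user session at all, since the warnings show podman reading memory.stat from the user@1000.service subtree:

~> cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.controllers
~> podman info | grep -i cgroup

On cgroup v2 the memory.* interface files only appear in a cgroup when the memory controller is enabled for it, so a missing "memory" entry in cgroup.controllers there would match the warnings above.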

`flaskapp` is custom-built from the Leap 15.6 base image, and `tumbleweed` is pulled from registry.opensuse.org/opensuse/tumbleweed.

I tried running containers with the same images as root and have not found any problem so far:

~ # podman stats --no-stream --no-reset
ID            NAME        CPU %       MEM USAGE / LIMIT  MEM %       NET IO             BLOCK IO    PIDS        CPU TIME    AVG CPU %
313e5e7ed673  flaskapp    0.05%       21.7MB / 1.942GB   1.12%       2.092kB / 1.048kB  0B / 0B     1           337.402ms   0.05%

~ # cat /sys/fs/cgroup/machine.slice/libpod-313e5e7ed6738573a583cb6082746f99fefb21460a3412c97b980a795efd988b.scope/memory.stat
anon 21700608
file 0
kernel 1040384
kernel_stack 16384
pagetables 122880
sec_pagetables 0
percpu 208
sock 0
vmalloc 8192
shmem 0
zswap 0
zswapped 0
file_mapped 0
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 4194304
file_thp 0
shmem_thp 0
inactive_anon 21696512
active_anon 4096
inactive_file 0
active_file 0
unevictable 0
slab_reclaimable 742720
slab_unreclaimable 125448
slab 868168
workingset_refault_anon 0
workingset_refault_file 0
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
pgscan 0
pgsteal 0
pgscan_kswapd 0
pgscan_direct 0
pgscan_khugepaged 0
pgsteal_kswapd 0
pgsteal_direct 0
pgsteal_khugepaged 0
pgfault 6540
pgmajfault 0
pgrefill 0
pgactivate 1
pgdeactivate 0
pglazyfree 0
pglazyfreed 0
zswpin 0
zswpout 0
thp_fault_alloc 1
thp_collapse_alloc 2
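
I have not captured the corresponding data for the rootless case, but listing the scope directory from the first warning (path copied from above; the scope only exists while the container is running) should show whether memory.stat alone is missing or whether the memory interface files are absent altogether:

~> ls /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-238c8283a3a3101d8b5928256060a4db4f8a8d6c94ef286d7c1d009f8da53d61.scope/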

I tested this on public cloud VMs (AWS and Azure) and on a local libvirt/KVM guest, on both amd64 and aarch64; all show the same issue.
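
For reference, a minimal sketch of the rootless steps I use (the `sleep infinity` command is just a placeholder to keep the container running):

~> podman run -d --name tumbleweed registry.opensuse.org/opensuse/tumbleweed sleep infinity
~> podman stats --no-stream --no-reset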

