Comment #8 on bug 960118
Another data point: I used the following script to start the corosync binary
directly more than 60 times and saw no error, while starting it through
systemctl can reproduce the issue:

#!/usr/bin/bash

while true
do
    # Start corosync in foreground mode (-f), backgrounded by the shell.
    corosync -f &
    copid=$!
    sleep 5
    # Note: $? after "&" is always 0 (it only reflects launching the job),
    # so verify instead that the daemon is still alive before probing it.
    if ! kill -0 "$copid" 2>/dev/null; then
        echo "corosync exited prematurely"
        exit 1
    fi
    # Query the ring status to confirm corosync answers IPC requests.
    corosync-cfgtool -s
    sleep 5
    killall -SIGTERM corosync
    rtn=$?
    #rm -rf /var/run/corosync.pid
    #killall -9 corosync
    echo "killall -SIGTERM corosync returned $rtn"
    if [ $rtn -ne 0 ]; then
        exit $rtn
    fi
    sleep 10
done
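
For comparison, the loop through systemd that triggers the issue looks
roughly like the following (a minimal sketch only; the unit name
corosync.service and the sleep intervals are my assumptions, not a verbatim
copy of what was run):

#!/usr/bin/bash

while true
do
    systemctl start corosync.service
    rtn=$?
    echo "systemctl start corosync returned $rtn"
    if [ $rtn -ne 0 ]; then
        exit $rtn
    fi
    sleep 5
    corosync-cfgtool -s
    sleep 5
    systemctl stop corosync.service
    rtn=$?
    echo "systemctl stop corosync returned $rtn"
    if [ $rtn -ne 0 ]; then
        exit $rtn
    fi
    sleep 10
done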

From the log I found the following records:

Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [MAIN  ] main.c:242 Node was shut down by a signal
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29618]: Starting Corosync Cluster Engine (corosync): [FAILED]
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:373 Unloading all Corosync service engines.
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipc_setup.c:452 withdrawing server sockets
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipcs.c:229 qb_ipcs_unref() - destroying
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:240 Service engine unloaded: corosync vote quorum service v1.0
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipc_setup.c:452 withdrawing server sockets
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipcs.c:229 qb_ipcs_unref() - destroying
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:240 Service engine unloaded: corosync configuration map access
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipc_setup.c:452 withdrawing server sockets
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipcs.c:229 qb_ipcs_unref() - destroying
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:240 Service engine unloaded: corosync configuration service
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipc_setup.c:452 withdrawing server sockets
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipcs.c:229 qb_ipcs_unref() - destroying
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:240 Service engine unloaded: corosync cluster closed process group service v1.01
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipc_setup.c:452 withdrawing server sockets
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [QB    ] ipcs.c:229 qb_ipcs_unref() - destroying
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:240 Service engine unloaded: corosync cluster quorum service v0.1
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [SERV  ] service.c:240 Service engine unloaded: corosync profile loading service
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [TOTEM ] totemsrp.c:3325 sending join/leave message
Jan 13 13:52:16 pacemaker-cts-c3 corosync[29627]: [MAIN  ] util.c:131 Corosync Cluster Engine exiting normally
Jan 13 13:52:16 pacemaker-cts-c3 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager.
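
My reading of the above: the corosync instance (PID 29627) receives a signal
while starting up and exits cleanly ("Corosync Cluster Engine exiting
normally"), the startup wrapper (PID 29618) reports [FAILED], and systemd
consequently fails the dependent Pacemaker unit. To watch both units side by
side while the loop runs, something like this should work (assuming the stock
unit names):

journalctl -f -u corosync.service -u pacemaker.service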

