On 12/26/2011 11:04 AM, Cristian Rodríguez wrote:
On 26/12/11 12:59, Claudio Freire wrote:
Suppose something's wrong and ExecStop (which will try a graceful shut down of the VMs)
doesn't work. Then you have to forcibly stop the VMs.
How would systemd do it?
It will send SIGKILL to the process mentioned in ExecStart
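For context, the closest systemd gets to this kind of service is roughly the following unit (a hypothetical sketch; the unit name and script paths are assumptions, not anything from this thread). Type=oneshot with RemainAfterExit=yes tells systemd the service counts as "active" even though no process survives ExecStart, and KillMode=none keeps it from SIGKILLing whatever it finds in the cgroup when ExecStop fails. Note that there is no directive for querying status:

```ini
[Unit]
Description=LXC container VMs

[Service]
# No parent daemon survives ExecStart, so mark the unit active anyway.
Type=oneshot
RemainAfterExit=yes
# Assumed script paths, for illustration only.
ExecStart=/usr/local/sbin/lxc-vms start
ExecStop=/usr/local/sbin/lxc-vms stop
# Without this, a failed ExecStop means SIGKILL to the cgroup's processes.
KillMode=none

[Install]
WantedBy=multi-user.target
```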
What process? There is not always any such thing as a process that implements the "service". And yet there IS a "service", and it has to be "started", and it has to be "stopped" _gracefully_, and there IS a "status". All of these are accomplished by combinations of actions and checks that together perform the overall operation and test whether it has happened or not, gracefully or not, in-progress or not.
ExecStart in the LXC container vm example would be a script that would scan a directory of config files, and depending on their contents, possibly launch a few processes for each configured VM.
Those processes might be anything. They might be a screen session which runs lxc-start which runs /sbin/init inside the container, leaving behind only the screen process, the init process, and whatever the init process started. They might be a single sshd process, no screen, no init. They might be anything. But in all cases the parent script that launched all the VM processes is gone as soon as the last configured VM is started.
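The ExecStart script described above could look roughly like this. It is a hypothetical sketch: the config directory, file naming, and the screen/lxc-start invocation are assumptions for illustration, not the actual script. The DRY_RUN switch just prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of an ExecStart-style launcher: scan a config directory,
# start one container per config file, then exit. No parent remains.
CONF_DIR="${CONF_DIR:-/etc/lxc/auto}"   # assumed location

start_containers() {
    for conf in "$CONF_DIR"/*.conf; do
        [ -e "$conf" ] || continue      # no configs: nothing to do
        name=$(basename "$conf" .conf)
        # Detach each container into its own screen session so that
        # no managing parent process survives this script.
        cmd="screen -dmS lxc-$name lxc-start -n $name -f $conf"
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "$cmd"                 # show what would be run
        else
            $cmd
        fi
    done
}

start_containers
# The script exits here; the screen/init/sshd processes it spawned
# live on with no managing parent.
```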
In this example there are processes that you can search for at shutdown time, but they are unpredictable. You must run a script that searches for them and deals with any that are found. It's not as unreliable as it sounds: the processes are unpredictable only in the sense that they are user-configurable rather than fixed. The config files can be read, and there are tools to show the status of all LXC containers, even ones that were started manually outside the normal configs and init script.
So at query-time, there IS a meaningful and reliable answer to get, by running a script. And at shutdown time there IS a proper graceful way to shutdown and it too is by running a script.
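To make the "run a script" point concrete, a status/stop helper along these lines is the kind of thing an ExecStatus (and the existing ExecStop) would invoke. This is a hypothetical sketch: the directory layout, the "State: RUNNING" check against lxc-info output, and the ${LXC_INFO}/${LXC_STOP} overrides (which let a stub stand in for the real binaries on a box without LXC) are all assumptions:

```shell
#!/bin/sh
# Sketch of query-time and shutdown-time scripts for configured
# containers. Paths and commands are illustrative assumptions.
CONF_DIR="${CONF_DIR:-/etc/lxc/auto}"

status_containers() {
    rc=0
    for conf in "$CONF_DIR"/*.conf; do
        [ -e "$conf" ] || continue
        name=$(basename "$conf" .conf)
        # lxc-info prints a line like "State: RUNNING" for a live container.
        if ${LXC_INFO:-lxc-info} -n "$name" 2>/dev/null | grep -q RUNNING; then
            echo "$name: running"
        else
            echo "$name: stopped"
            rc=1                # any stopped container -> nonzero status
        fi
    done
    return $rc
}

stop_containers() {
    # Graceful shutdown of every configured container; any forcible
    # cleanup of leftovers belongs here, not in a blind SIGKILL.
    for conf in "$CONF_DIR"/*.conf; do
        [ -e "$conf" ] || continue
        ${LXC_STOP:-lxc-stop} -n "$(basename "$conf" .conf)"
    done
}
```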
There is NO parent daemon that runs the whole time and manages all VMs, other than the kernel itself.
As I said before, a watchdog daemon could be written that would do all the same stuff in a perpetual loop, and that would then end up being something systemd could watch the way it expects to. But why should we have to write such a thing in the first place, or have it running 24/7 on the system, just to satisfy systemd? This isn't one of those cases where systemd can claim "Well, even though it worked for decades, the service was always broken, so you should fix it anyway, and then it will work with systemd too as a natural consequence of fixing your broken service." The service is already fine. This would simply be systemd dictating that the service do something extra it doesn't need, for no purpose other than letting systemd avoid handling situations that exist but that it doesn't want to acknowledge.
Any such watchdog would just be a downgrade in overall system efficiency, reliability, and predictability.
You need ExecStatus