On 12/20/2011 2:21 PM, Christian Boltz wrote:
Hello,
On Tuesday, 20 December 2011, Frederic Crozat wrote:
On Tuesday, 20 December 2011 at 16:05 +0100, Christian Boltz wrote:
The initscript calls "aa-status", prints its output, and returns the status/error code based on $? of aa-status. How can I do the same with systemd (execute a command when checking the status), so that I at least get the error code?
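In other words, the relevant part of the initscript is essentially this (a sketch; the real script does more):

    status() {
        aa-status           # print the current AppArmor status
        return $?           # propagate aa-status's exit code as ours
    }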
You can't, through systemd. The service status will be based either on whether the process is still running (for a RemainAfterExit=false type) or on the exit code it returned when it was run (for a RemainAfterExit=true type).
In other words: when I ask for the status it says "well, I started it, so it must still be active" - right?
I'm sorry, but just _assuming_ the status instead of _checking_ it sounds like a bad idea[tm]. It's even worse because we are talking about security-relevant services (AppArmor, SuSEfirewall) here - I'd prefer to get the real status instead of a "well, I started it..." ;-)
Please consider implementing something like ExecStatus=/path/to/status-checking-binary in *.service files to fix this issue.
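A unit using the proposed directive might look something like this (ExecStatus is hypothetical and does not exist in systemd; the paths are only illustrative):

    [Unit]
    Description=AppArmor profile loading

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/sbin/rcapparmor start
    ExecStop=/sbin/rcapparmor stop
    # Hypothetical: run this on status queries and report its exit code
    ExecStatus=/usr/sbin/aa-status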
Indeed. I don't know how else I could make LXC VMs start up and shut down gracefully with the host. There is no pid file and there is no process; there is, however, a combination of commands you can run that tell you whether it is OK or NOT OK to proceed with host shutdown.
I *could* write a watchdog that essentially loops these commands the entire time from when a .service is started. KLUDGE! A backwards march of progress.
Instead of querying the system once when you want to know something, you'd be querying it continuously forever just so that systemd can find out via its too-limited API? No.
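For concreteness, the kludge being rejected would look roughly like this (the container name and polling interval are made up; lxc-info is a real LXC command):

    # Poll forever just so systemd has a long-running process to track.
    while true; do
        lxc-info -n myvm | grep -q RUNNING || exit 1
        sleep 10
    done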
The static one-shot kind of .service file, like the iptables one (http://en.gentoo-wiki.com/wiki/Systemd#iptables), doesn't seem to apply: as someone else discovered, those units do nothing when you try to run them again later.
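For reference, that kind of unit is roughly the following (reconstructed from memory of the wiki page, not quoted from it):

    [Unit]
    Description=Packet Filtering Framework

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/iptables-restore /etc/iptables.rules

    [Install]
    WantedBy=multi-user.target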
I could write ExecStart and ExecStop scripts easily enough. They could include all the logic necessary to avoid redoing anything that has already been done, and to do everything necessary without making unsafe assumptions about other things that merely appear to have been done.
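A sketch of what I mean for the start side (the container name is made up; lxc-info and lxc-start are real LXC commands):

    #!/bin/sh
    # ExecStart helper: do nothing if the container is already up,
    # otherwise start it and fail loudly if that doesn't work.
    vm=myvm
    if lxc-info -n "$vm" 2>/dev/null | grep -q RUNNING; then
        exit 0              # already running - nothing to do
    fi
    exec lxc-start -n "$vm" -d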
At shutdown time, as long as the overall host shutdown blocks until ExecStop returns _with an agreeable error level_, I'll be OK.
So far so good.
Status? Yeah, if only for sysadmin consistency, ExecStatus is really needed. There is no way an ExecStart could create a fake pid file that actually means anything unless that file were written and updated by a watchdog just for that purpose. And we do want the sysadmin to be able to query the current running/not-running status of a service without having to remember or look up random, inconsistent, service-specific commands.
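To illustrate the inconsistency (example commands only; the container name is made up):

    # Today: a different, service-specific query for each thing
    aa-status                          # AppArmor
    lxc-info -n myvm                   # an LXC container

    # Desired: one consistent query that runs a real check
    systemctl status apparmor.service  # would invoke ExecStatus, if it existed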
Another idea: it might be possible to install incron (inotify, not cron) jobs to monitor something from inside each container VM. That could, in theory, update a fake pid file any time a VM's status changed, without requiring a 24/7 watchdog other than the kernel itself via inotify and incrond. incrond does run 24/7, but it's generically useful like cron, so that's a tolerable process; the incron jobs themselves only fire when the conditions specified in the incrontab entry occur (file creation, name change, etc.). It would be a bit delicate, though, and I don't like that, since the proper startup/shutdown of entire VM servers is even more critical than any ordinary single service.
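A hypothetical incrontab entry for that (the path and the helper script are made up; $@ and $# are incron's placeholders for the watched path and the event's file name):

    # When anything in the container's state directory is created or
    # deleted, refresh a marker file that stands in for a pid file.
    /var/lib/lxc/myvm  IN_CREATE,IN_DELETE  /usr/local/sbin/update-vm-marker $@ $#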
You might be able to do some kind of incron-based trick like that for your service too. Until ExecStatus exists, it would at least avoid having to write a kludgy, forever-looping watchdog script.