On 11/8/2011 3:01 AM, Roger Oberholtzer wrote:
On Tue, 2011-11-08 at 08:31 +0100, Per Jessen wrote:
Curious. Why would this work? I will of course try it. But I would like to know what is different between running a script and the program direct.
It works because the name of executable changes when the script does 'exec'.
Perhaps I need something stronger than my morning coffee today. If the script name is the same, why would the name of the executable be different?
Will the 'status' and 'stop' commands to the rc script also work?
Depends on the script :-) If it uses killproc, it probably won't do what you expect, no. Basically, startproc/killproc were meant to manage single instances, not multiple.
Seems that is true. It seems a bit of an oversight, IMHO: the assumption is that any service started will always run as a single instance of the executable. In my case, I need the vblade server to make disk images available on more than one ethernet interface. The docs imply that the ethernet device given must be like 'eth0'; I do not see syntax for a list of interfaces, or even a wildcard like 'all' or 'any'. Multiple server instances are required for this. Of course, I do not have to use startproc, but as it handles tracking and killing the programs it starts, it seemed like a good choice.
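(A hedged sketch, not from the vblade docs: since vblade's command line takes exactly one interface, one workaround is to run one instance per NIC. The interface list, shelf/slot numbers, and image path below are placeholders.)

```shell
#!/bin/sh
# Hypothetical sketch: export the same image on several NICs by running
# one vblade instance per interface, since vblade takes a single netif.

IMAGE=/var/lib/vblade/disk.img   # assumed image path
SHELF=0                          # assumed AoE shelf number
SLOT=0                           # assumed AoE slot number
INTERFACES="eth0 eth1 eth2"      # assumed interface list

start_vblades() {
    for ifc in $INTERFACES; do
        # On a real system you would launch the daemon here, e.g.
        #   startproc /usr/sbin/vblade $SHELF $SLOT $ifc $IMAGE
        # but startproc tracks instances by executable name, so only the
        # first would be managed as expected. This sketch just prints the
        # commands that would run:
        echo "vblade $SHELF $SLOT $ifc $IMAGE"
    done
}

start_vblades
```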
You don't really need to use startproc/killproc to have a well-integrated start/stop script that tracks the running/not-running state. You can write shell functions that do the actual starting and stopping according to whatever logic you happen to need, and that merely return the exit values that are meaningful to rc_status.

In my start/stop script for lxc containers, to start up I have to walk a tree of directories looking for config files and run some commands based on what I find. The most immediate binary is actually just GNU screen, not even a binary that makes up the "service" itself; it just serves as a virtual console for each started container VM. The container starter binary itself doesn't even stay running: it just sets up the kernel space to run a VM in, launches that container's /sbin/init in it, and the lxc-start command itself goes away almost instantaneously.

To stop, I have to run a few different commands to determine which VMs are even running at that time, mostly along the lines of looking for instances of "init" that aren't pid #1 and which actually correlate with one of the config files above. And then the command to actually stop is not killing any binary but sending a powerfail signal to the VM's init process and waiting for it to take down the processes within the VM.

For status, there is a similar process of scanning for possibly running VMs by looking for a few different things and correlating with the config files. This is so that it ignores anything that was not started by the same start script, which is essentially the same reason simpler start scripts use pid files: the start script needs to be sure to kill only a process it itself started, and ignore the very same process that someone may have started manually.

In all cases the number of instances is unpredictable. There may be 0 or any number of valid and enabled VM config files. There may be 0 or any number of currently running VMs.
0 or any number of the currently running VMs may be associated with any of those config files. And all of these can change before, during, and after start-time and stop-time, so it all has to be determined dynamically, on the spot, right when you say start or stop or status.

But the script is integrated well enough because each of those complex and dynamic tasks is written into a shell function, and the function has at its end one or more return commands that set the exit value to the specific values that are meaningful to the "rc_status -v" command which immediately follows. In my case, the start and stop functions always return success, but the key one is status, where it returns a 3 instead of 0 if any configured VMs are found to be running. That, and the way the stop case is written, prevents the host from shutting down until the VMs have all gracefully shut themselves down. As with everything there is work to be done, of course: if a VM hangs, it prevents the host from ever finishing its own shutdown. The point was to show the method itself; your script can be better and more complete than mine.

Init script:
https://build.opensuse.org/package/view_file?file=lxc.init&package=rclxc&project=home%3Aaljex&srcmd5=4dc6f1d81df55fa2a50c7c9a50b5cb63
(Note: the web viewer hides the _ in rc_status, at least in some browser/font/resolution combos. Actually, on my screen every single line of the script now has its own tiny scroll bar that can be scrolled down to show the _; it didn't used to have that some time ago.)

Exit values documentation:
http://en.opensuse.org/openSUSE:Packaging_init_scripts#Exit_Status_Codes

-- 
bkw
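(A minimal sketch of the pattern described above, not the actual lxc script. On a real openSUSE system you would source /etc/rc.status to get rc_status; the one-line stub below only stands in for it so the skeleton runs on its own. count_running, do_start, and do_stop are hypothetical placeholders for your own logic.)

```shell
#!/bin/sh
# Skeleton of an rc script that uses plain shell functions instead of
# startproc/killproc, and feeds their exit values to rc_status.

# Stub standing in for `. /etc/rc.status`: print done/failed for -v and
# preserve the exit value of the command that ran just before it.
rc_status() {
    rc=$?
    if [ "$1" = "-v" ]; then
        [ $rc -eq 0 ] && echo "done" || echo "failed"
    fi
    return $rc
}

count_running() { echo 0; }   # placeholder: scan for live instances
do_start()      { true; }     # placeholder: start every configured instance
do_stop()       { true; }     # placeholder: signal instances, wait for exit

rc_cmd() {
    case "$1" in
        start)
            do_start          # must end with a meaningful exit value...
            rc_status -v      # ...which rc_status picks up from $?
            ;;
        stop)
            do_stop
            rc_status -v
            ;;
        status)
            # LSB status codes: 0 = running, 3 = not running.
            if [ "$(count_running)" -gt 0 ]; then
                ( exit 0 )
            else
                ( exit 3 )
            fi
            rc_status -v
            ;;
    esac
}
```

The subshell `( exit N )` trick simply sets $? to the wanted status code so the immediately following rc_status sees it, which is the same shape as ending a real status function with `return 3`.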