I have now enabled systemd more or less successfully in an M5 installation running in VirtualBox. Here are some things and problems I have run into.

This installation runs in Oracle VirtualBox and I'm used to running their guest additions as well, so after every kernel installation I normally run their script, 'VBoxLinuxAdditions.run', which has always worked. It did this time too, but not when booted with systemd (F5). So I booted the default way, the script did its job normally, and after a reboot with systemd I was in business :)

There have been some persistent errors in '/var/log/messages' though:

Sep 3 11:08:44 121 SuSEfirewall2: Setting up rules from /etc/sysconfig/SuSEfirewall2 ...
Sep 3 11:08:47 121 SuSEfirewall2: Error: iptables-batch failed, re-running using iptables
Sep 3 11:08:48 121 SuSEfirewall2: Error: ip6tables-batch failed, re-running using ip6tables
Sep 3 11:08:50 121 SuSEfirewall2: Firewall rules successfully set

The above and some others appear frequently:

Sep 2 12:03:48 a88-112-25-185 gnomesu-pam-backend: pam_systemd(gnomesu-pam:session): Failed to create session: Invalid argument
Sep 3 11:41:09 121 sudo: pam_systemd(sudo:session): Failed to parse message: Message has only 3 arguments, but more were expected
Sep 3 11:10:11 121 systemd[1]: Unit NetworkManager.service entered failed state.
Sep 3 11:10:11 121 systemd[1]: Unit dev-sda1.swap entered failed state.
Sep 3 11:10:11 121 systemd[1]: Job remote-fs.target/start failed with result 'dependency'.
Sep 3 11:10:11 121 systemd[1]: Unit data5.mount entered failed state.
Sep 3 11:10:14 121 systemd[1]: Unit home-mcman-waxborg_home.mount entered failed state.
Sep 3 11:10:14 121 systemd[1]: Unit data1.mount entered failed state.
Sep 3 11:10:14 121 systemd[1]: Unit data75.mount entered failed state.
Sep 3 11:11:40 121 systemd[1]: Unit NetworkManager.service entered failed state.
Sep 3 11:11:40 121 systemd[1]: Unit dev-sda1.swap entered failed state.
Sep 3 11:11:41 121 systemd[1]: Unit data1.mount entered failed state.
Sep 3 11:11:41 121 systemd[1]: Unit nfs.service entered failed state.

Then I thought I'd boot the default way to see how everything goes. The NFS client lost all of its shares:

# mount -a
Starting rpc.statd ... portmapper not running failed
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

The same block repeated for the other shares, plus tens of other messages.

I guess I'll revert to the snapshot taken before systemd and wait for a while. This is interesting and I'm interested in testing, but I think I would need to know what exactly systemd is meant to achieve. It must be good, why else would it be pushed, but my skills ran out here, sorry for that.
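For what it's worth, here is a rough sketch of what I would try next under the systemd boot, for anyone who wants to dig further. The nfs.service name and '-o nolock' come straight from the messages above; the rpcbind service name and the example export are guesses on my part, so treat this as a starting point rather than a verified fix:

  # check that systemd really is PID 1 (should print "systemd" after an F5 boot)
  ps -p 1 -o comm=

  # list units in the failed state and look at the interesting ones in detail
  systemctl --failed        # if this milestone's systemctl doesn't know --failed, plain 'systemctl' and grep for "failed"
  systemctl status NetworkManager.service
  systemctl status nfs.service

  # try to get the NFS prerequisites up before mounting again
  systemctl start rpcbind.service   # service name is a guess on this milestone
  systemctl start nfs.service
  mount -a

  # or, as mount.nfs itself suggests, keep locks local for a test mount
  mount -o nolock server:/export /data5   # hypothetical server and export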
Vahis
--
http://waxborg.servepics.com
openSUSE 11.2 (x86_64) 2.6.31.14-0.8-default "Evergreen" main host
openSUSE 12.1 Milestone 5 (x86_64) 3.0.0-4-desktop in VirtualBox
openSUSE 11.4 (i586) 3.0.4-43-desktop "Tumbleweed" in EeePC 900