13.1
Hi
I'm trying to trace the source of an automounter availability problem. If the file server was available when autofs started at boot, there is no problem. However, if it was not, and e.g. a user then requests her home directory, nothing is mounted until autofs is restarted. automount shows the mounts in its table, but the mount points do not appear until after a restart. Even though there are many variables here, such as differing versions of autofs, sssd and cifs-utils, I'm starting with openSUSE because this works OK with an Ubuntu client in the same domain. Any quick fixes?
Thanks
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs; the default is none.
Regards
Jan
--
Always remember that strength is obtained by meeting resistance.
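For reference, the setting in question looks like this (a sketch assuming the stock openSUSE file uses the upstream autofs LOGGING variable; valid levels are none, verbose and debug):

```shell
# /etc/sysconfig/autofs -- variable name assumed from upstream autofs;
# "debug" is the most verbose level.
LOGGING="debug"
```

After changing it, restart the service (e.g. `systemctl restart autofs`) so the new level takes effect.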
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Hi
We set it to debug.

1. The ws is booted without the fs
2. The fs is made available
3. A domain user with a ticket requests a folder:

2014-06-09T18:03:34.819140+02:00 catral automount[1772]: handle_packet: type = 3
2014-06-09T18:03:34.849840+02:00 catral automount[1772]: handle_packet_missing_indirect: token 1, name julie, request pid 1523
2014-06-09T18:03:34.853631+02:00 catral automount[1772]: attempting to mount entry /home/users/julie
2014-06-09T18:03:34.870186+02:00 catral automount[1772]: lookup_mount: lookup(sss): looking up julie
2014-06-09T18:03:34.880258+02:00 catral automount[1772]: lookup_mount: lookup(sss): julie -> -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&
2014-06-09T18:03:34.884806+02:00 catral automount[1772]: parse_mount: parse(sun): expanded entry: -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/julie
2014-06-09T18:03:34.893502+02:00 catral automount[1772]: parse_mount: parse(sun): gathered options: fstype=cifs,sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.909155+02:00 catral automount[1772]: parse_mount: parse(sun): dequote("://altea/users/julie") -> ://altea/users/julie
2014-06-09T18:03:34.920933+02:00 catral automount[1772]: parse_mount: parse(sun): core of entry: options=fstype=cifs,sec=krb5,username=cifsuser,multiuser, loc=://altea/users/julie
2014-06-09T18:03:34.930533+02:00 catral automount[1772]: sun_mount: parse(sun): mounting root /home/users, mountpoint julie, what //altea/users/julie, fstype cifs, options sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.946003+02:00 catral automount[1772]: do_mount: //altea/users/julie /home/users/julie type cifs options sec=krb5,username=cifsuser,multiuser using module generic
2014-06-09T18:03:34.956471+02:00 catral automount[1772]: mount_mount: mount(generic): calling mkdir_path /home/users/julie
2014-06-09T18:03:34.983125+02:00 catral automount[1772]: mount_mount: mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
2014-06-09T18:03:35.085167+02:00 catral automount[1772]: spawn_mount: mtab link detected, passing -n to mount
2014-06-09T18:03:35.514859+02:00 catral kernel: [ 491.419621] FS-Cache: Loaded
2014-06-09T18:03:35.697266+02:00 catral kernel: [ 491.614892] FS-Cache: Netfs 'cifs' registered for caching
2014-06-09T18:03:35.700700+02:00 catral kernel: [ 491.614978] Key type cifs.spnego registered
2014-06-09T18:03:35.707239+02:00 catral kernel: [ 491.615018] Key type cifs.idmap registered
2014-06-09T18:03:41.785160+02:00 catral kernel: [ 497.706906] CIFS VFS: Error connecting to socket. Aborting operation.
2014-06-09T18:03:41.798063+02:00 catral kernel: [ 497.718410] CIFS VFS: cifs_mount failed w/return code = -113
2014-06-09T18:03:41.801592+02:00 catral automount[1772]: >> Unable to find suitable address.
2014-06-09T18:03:41.828414+02:00 catral automount[1772]: mount(generic): failed to mount //altea/users/julie (type cifs) on /home/users/julie
2014-06-09T18:03:41.851057+02:00 catral automount[1772]: dev_ioctl_send_fail: token = 1
2014-06-09T18:03:41.861845+02:00 catral automount[1772]: handle_packet: type = 3
2014-06-09T18:03:41.883512+02:00 catral automount[1772]: handle_packet_missing_indirect: token 2, name julie, request pid 1523
2014-06-09T18:03:41.892842+02:00 catral automount[1772]: dev_ioctl_send_fail: token = 2
2014-06-09T18:03:41.909169+02:00 catral automount[1772]: handle_packet: type = 3
2014-06-09T18:03:41.916732+02:00 catral automount[1772]: handle_packet_missing_indirect: token 3, name julie, request pid 1523
2014-06-09T18:03:41.940651+02:00 catral automount[1772]: dev_ioctl_send_fail: token = 3
2014-06-09T18:03:41.950049+02:00 catral automount[1772]: failed to mount /home/users/julie

4. Fails
5. We now restart the automounter:

2014-06-09T18:12:32.274195+02:00 catral systemd[1]: Starting Automounts filesystems on demand...
2014-06-09T18:12:32.564846+02:00 catral automount[1829]: Starting automounter version 5.0.8, master map auto.master
2014-06-09T18:12:32.605918+02:00 catral automount[1829]: using kernel protocol version 5.02
2014-06-09T18:12:32.637025+02:00 catral automount[1829]: lookup_nss_read_master: reading master sss auto.master
2014-06-09T18:12:32.657965+02:00 catral automount[1829]: parse_init: parse(sun): init gathered global options: (null)
2014-06-09T18:12:32.694258+02:00 catral automount[1829]: spawn_mount: mtab link detected, passing -n to mount
2014-06-09T18:12:32.823526+02:00 catral automount[1829]: spawn_umount: mtab link detected, passing -n to mount
2014-06-09T18:12:33.241865+02:00 catral automount[1829]: master_do_mount: mounting /home/shared
2014-06-09T18:12:33.255495+02:00 catral automount[1829]: automount_path_to_fifo: fifo name /run/autofs.fifo-home-shared
2014-06-09T18:12:33.275211+02:00 catral automount[1829]: lookup_nss_read_map: reading map sss auto.shared
2014-06-09T18:12:33.283432+02:00 catral automount[1829]: parse_init: parse(sun): init gathered global options: (null)
2014-06-09T18:12:33.290128+02:00 catral automount[1829]: spawn_mount: mtab link detected, passing -n to mount
2014-06-09T18:12:33.374951+02:00 catral automount[1829]: spawn_umount: mtab link detected, passing -n to mount
2014-06-09T18:12:33.856529+02:00 catral automount[1829]: mounted indirect on /home/shared with timeout 600, freq 150 seconds
2014-06-09T18:12:33.881464+02:00 catral automount[1829]: st_ready: st_ready(): state = 0 path /home/shared
2014-06-09T18:12:33.913752+02:00 catral automount[1829]: ghosting enabled
2014-06-09T18:12:33.976869+02:00 catral automount[1829]: master_do_mount: mounting /home/users
2014-06-09T18:12:34.017636+02:00 catral automount[1829]: automount_path_to_fifo: fifo name /run/autofs.fifo-home-users
2014-06-09T18:12:34.042005+02:00 catral automount[1829]: lookup_nss_read_map: reading map sss auto.users
2014-06-09T18:12:34.070179+02:00 catral automount[1829]: parse_init: parse(sun): init gathered global options: (null)
2014-06-09T18:12:34.121263+02:00 catral automount[1829]: remount_active_mount: trying to re-connect to mount /home/users
2014-06-09T18:12:34.147114+02:00 catral automount[1829]: mounted indirect on /home/users with timeout 600, freq 150 seconds
2014-06-09T18:12:34.157983+02:00 catral automount[1829]: lookup_mount: lookup(sss): looking up julie
2014-06-09T18:12:34.167889+02:00 catral automount[1829]: lookup_mount: lookup(sss): julie -> -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&
2014-06-09T18:12:34.177015+02:00 catral automount[1829]: parse_mount: parse(sun): expanded entry: -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/julie
2014-06-09T18:12:34.190996+02:00 catral automount[1829]: parse_mount: parse(sun): gathered options: fstype=cifs,sec=krb5,username=cifsuser,multiuser
2014-06-09T18:12:34.212063+02:00 catral automount[1829]: parse_mount: parse(sun): dequote("://altea/users/julie") -> ://altea/users/julie
2014-06-09T18:12:34.256119+02:00 catral automount[1829]: parse_mount: parse(sun): core of entry: options=fstype=cifs,sec=krb5,username=cifsuser,multiuser, loc=://altea/users/julie
2014-06-09T18:12:34.261690+02:00 catral automount[1829]: sun_mount: parse(sun): mounting root /home/users, mountpoint julie, what //altea/users/julie, fstype cifs, options sec=krb5,username=cifsuser,multiuser
2014-06-09T18:12:34.287540+02:00 catral automount[1829]: do_mount: //altea/users/julie /home/users/julie type cifs options sec=krb5,username=cifsuser,multiuser using module generic
2014-06-09T18:12:34.387673+02:00 catral automount[1829]: re-connected to /home/users/julie
2014-06-09T18:12:34.443337+02:00 catral automount[1829]: remount_active_mount: re-connected to mount /home/users
2014-06-09T18:12:34.477104+02:00 catral automount[1829]: st_ready: st_ready(): state = 0 path /home/users
2014-06-09T18:12:34.504261+02:00 catral automount[1829]: ghosting enabled
2014-06-09T18:12:34.509566+02:00 catral automount[1829]: master_do_mount: mounting /home/profiles
2014-06-09T18:12:34.521711+02:00 catral automount[1829]: automount_path_to_fifo: fifo name /run/autofs.fifo-home-profiles
2014-06-09T18:12:34.532875+02:00 catral automount[1829]: lookup_nss_read_map: reading map sss auto.profiles
2014-06-09T18:12:34.541893+02:00 catral automount[1829]: parse_init: parse(sun): init gathered global options: (null)
2014-06-09T18:12:34.553950+02:00 catral automount[1829]: mounted indirect on /home/profiles with timeout 600, freq 150 seconds
2014-06-09T18:12:34.565262+02:00 catral automount[1829]: st_ready: st_ready(): state = 0 path /home/profiles
2014-06-09T18:12:34.573840+02:00 catral automount[1829]: ghosting enabled
2014-06-09T18:12:34.595966+02:00 catral sudo: pam_unix(sudo:session): session closed for user root
2014-06-09T18:12:34.605672+02:00 catral systemd[1]: Started Automounts filesystems on demand.

6. ...and try again:

2014-06-09T18:15:43.953805+02:00 catral automount[1829]: handle_packet: type = 3
2014-06-09T18:15:43.981924+02:00 catral automount[1829]: handle_packet_missing_indirect: token 5, name julie, request pid 1523
2014-06-09T18:15:43.985989+02:00 catral automount[1829]: attempting to mount entry /home/users/julie
2014-06-09T18:15:44.006945+02:00 catral automount[1829]: lookup_mount: lookup(sss): looking up julie
2014-06-09T18:15:44.026613+02:00 catral automount[1829]: lookup_mount: lookup(sss): julie -> -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&
2014-06-09T18:15:44.034122+02:00 catral automount[1829]: parse_mount: parse(sun): expanded entry: -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/julie
2014-06-09T18:15:44.038223+02:00 catral automount[1829]: parse_mount: parse(sun): gathered options: fstype=cifs,sec=krb5,username=cifsuser,multiuser
2014-06-09T18:15:44.061792+02:00 catral automount[1829]: parse_mount: parse(sun): dequote("://altea/users/julie") -> ://altea/users/julie
2014-06-09T18:15:44.075713+02:00 catral automount[1829]: parse_mount: parse(sun): core of entry: options=fstype=cifs,sec=krb5,username=cifsuser,multiuser, loc=://altea/users/julie
2014-06-09T18:15:44.084962+02:00 catral automount[1829]: sun_mount: parse(sun): mounting root /home/users, mountpoint julie, what //altea/users/julie, fstype cifs, options sec=krb5,username=cifsuser,multiuser
2014-06-09T18:15:44.097172+02:00 catral automount[1829]: do_mount: //altea/users/julie /home/users/julie type cifs options sec=krb5,username=cifsuser,multiuser using module generic
2014-06-09T18:15:44.119526+02:00 catral automount[1829]: mount_mount: mount(generic): calling mkdir_path /home/users/julie
2014-06-09T18:15:44.128935+02:00 catral automount[1829]: mount_mount: mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
2014-06-09T18:15:44.137023+02:00 catral automount[1829]: spawn_mount: mtab link detected, passing -n to mount
2014-06-09T18:15:44.292902+02:00 catral cifs.upcall: key description: cifs.spnego;0;0;39010000;ver=0x2;host=altea;ip4=192.168.1.100;sec=krb5;uid=0x0;creduid=0x0;user=cifsuser;pid=0x75d
2014-06-09T18:15:44.304104+02:00 catral cifs.upcall: ver=2
2014-06-09T18:15:44.321263+02:00 catral cifs.upcall: host=altea
2014-06-09T18:15:44.342803+02:00 catral cifs.upcall: ip=192.168.1.100
2014-06-09T18:15:44.350840+02:00 catral cifs.upcall: sec=1
2014-06-09T18:15:44.358866+02:00 catral cifs.upcall: uid=0
2014-06-09T18:15:44.389853+02:00 catral cifs.upcall: creduid=0
2014-06-09T18:15:44.397484+02:00 catral cifs.upcall: user=cifsuser
2014-06-09T18:15:44.407866+02:00 catral cifs.upcall: pid=1885
2014-06-09T18:15:44.415211+02:00 catral cifs.upcall: find_krb5_cc: considering /tmp/krb5cc_0
2014-06-09T18:15:44.417827+02:00 catral cifs.upcall: find_krb5_cc: FILE:/tmp/krb5cc_0 is valid ccache
2014-06-09T18:15:44.427131+02:00 catral cifs.upcall: find_krb5_cc: considering /tmp/krb5cc_3000070
2014-06-09T18:15:44.429753+02:00 catral cifs.upcall: find_krb5_cc: /tmp/krb5cc_3000070 is owned by 3000070, not 0
2014-06-09T18:15:44.432801+02:00 catral cifs.upcall: find_krb5_cc: considering /tmp/krb5cc_3000021
2014-06-09T18:15:44.441177+02:00 catral cifs.upcall: find_krb5_cc: /tmp/krb5cc_3000021 is owned by 3000021, not 0
2014-06-09T18:15:44.453870+02:00 catral cifs.upcall: handle_krb5_mech: getting service ticket for altea
2014-06-09T18:15:44.467189+02:00 catral cifs.upcall: handle_krb5_mech: obtained service ticket
2014-06-09T18:15:44.473921+02:00 catral cifs.upcall: Exit status 0
2014-06-09T18:15:44.652214+02:00 catral automount[1829]: mount_mount: mount(generic): mounted //altea/users/julie type cifs on /home/users/julie
2014-06-09T18:15:44.694166+02:00 catral automount[1829]: dev_ioctl_send_ready: token = 5
2014-06-09T18:15:44.712113+02:00 catral automount[1829]: mounted /home/users/julie
2014-06-09T18:15:53.161985+02:00 catral automount[1829]: st_expire: state 1 path /home/users
2014-06-09T18:15:53.184064+02:00 catral automount[1829]: expire_proc: exp_proc = 3041913664 path /home/users
2014-06-09T18:15:53.210996+02:00 catral automount[1829]: expire_proc_indirect: expire /home/users/julie
2014-06-09T18:15:53.224331+02:00 catral automount[1829]: 1 remaining in /home/users
2014-06-09T18:15:53.235450+02:00 catral automount[1829]: expire_cleanup: got thid 3041913664 path /home/users stat 3
2014-06-09T18:15:53.244439+02:00 catral automount[1829]: expire_cleanup: sigchld: exp 3041913664 finished, switching from 2 to 1
2014-06-09T18:15:53.257505+02:00 catral automount[1829]: st_ready: st_ready(): state = 2 path /home/users
2014-06-09T18:16:04.170598+02:00 catral automount[1829]: st_expire: state 1 path /home/profiles
2014-06-09T18:16:04.200921+02:00 catral automount[1829]: expire_proc: exp_proc = 3041913664 path /home/profiles
2014-06-09T18:16:04.204545+02:00 catral automount[1829]: expire_cleanup: got thid 3041913664 path /home/profiles stat 0
2014-06-09T18:16:04.217279+02:00 catral automount[1829]: expire_cleanup: sigchld: exp 3041913664 finished, switching from 2 to 1
2014-06-09T18:16:04.224971+02:00 catral automount[1829]: st_ready: st_ready(): state = 2 path /home/profiles
2014-06-09T18:16:06.173240+02:00 catral automount[1829]: st_expire: state 1 path /home/shared
2014-06-09T18:16:06.181051+02:00 catral automount[1829]: expire_proc: exp_proc = 3041913664 path /home/shared
2014-06-09T18:16:06.196384+02:00 catral automount[1829]: expire_cleanup: got thid 3041913664 path /home/shared stat 0
2014-06-09T18:16:06.200394+02:00 catral automount[1829]: expire_cleanup: sigchld: exp 3041913664 finished, switching from 2 to 1
2014-06-09T18:16:06.204799+02:00 catral automount[1829]: st_ready: st_ready(): state = 2 path /home/shared

7. Perfect.

- - -

This is a real pain, especially as a lot of the clients are in different rooms.
Thanks, L x

automount -m

lookup_nss_read_master: reading master sss auto.master
parse_init: parse(sun): init gathered global options: (null)
spawn_mount: mtab link detected, passing -n to mount
spawn_umount: mtab link detected, passing -n to mount

autofs dump map information
===========================

global options: none configured

Mount point: /home/shared

source(s):
lookup_nss_read_map: reading map sss auto.shared
parse_init: parse(sun): init gathered global options: (null)
spawn_mount: mtab link detected, passing -n to mount
spawn_umount: mtab link detected, passing -n to mount
instance type(s): sss
map: auto.shared

* | -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/shared/&

Mount point: /home/users

source(s):
lookup_nss_read_map: reading map sss auto.users
parse_init: parse(sun): init gathered global options: (null)
spawn_mount: mtab link detected, passing -n to mount
spawn_umount: mtab link detected, passing -n to mount
instance type(s): sss
map: auto.users

* | -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&

Mount point: /home/profiles
source(s):
lookup_nss_read_map: reading map sss auto.profiles
parse_init: parse(sun): init gathered global options: (null)
spawn_mount: mtab link detected, passing -n to mount
spawn_umount: mtab link detected, passing -n to mount
instance type(s): sss
map: auto.profiles

* | -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/profiles/&

sssd.conf:

[sssd]
services = nss, pam, autofs
config_file_version = 2
domains = hh3.site

[nss]
[pam]
[autofs]

[domain/hh3.site]
ldap_schema = ad
id_provider = ad
access_provider = ad
auth_provider = ad
chpass_provider = ad
ldap_id_mapping = false
ldap_sasl_mech = gssapi
ldap_sasl_authid = CATRAL$@HH3.SITE
krb5_keytab = /etc/krb5.keytab
ldap_krb5_init_creds = true
autofs_provider = ldap
ldap_autofs_search_base = OU=automount,DC=hh3,DC=site
ldap_autofs_map_object_class = automountMap
ldap_autofs_entry_object_class = automount
ldap_autofs_map_name = automountMapName
ldap_autofs_entry_key = automountKey
ldap_autofs_entry_value = automountInformation
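An aside on reading the failing trace above: the kernel's "cifs_mount failed w/return code = -113" is a negative errno value. A quick way to decode it (a sketch; any machine with python3 will do):

```shell
# Decode errno 113, the value behind "cifs_mount failed w/return code = -113".
python3 -c 'import errno, os; print(errno.errorcode[113], "-", os.strerror(113))'
```

On Linux this prints EHOSTUNREACH ("No route to host"), matching automount's ">> Unable to find suitable address.": the client never reached the server at all, which fits the server having been down when autofs started.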
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:

systemd-analyze plot > lynn.svg
xz -9 lynn.svg

and attach the resulting .xz file here.
--
Cristian
"I don't know the key to success, but the key to failure is trying to please everybody."
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:

systemd-analyse: command not found
--
Per Jessen, Zürich (31.4°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
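Incidentally, a mistyped command redirected into the output file, as happened here, is easy to catch before compressing: check that the file actually starts like an SVG. A small sketch using stand-in files (on a real run you would test lynn.svg itself before calling xz):

```shell
# Two stand-in files: a real-looking SVG and the kind of shell error that
# ended up in lynn.svg in this thread.
printf '<?xml version="1.0"?>\n<svg xmlns="http://www.w3.org/2000/svg"></svg>\n' > good.svg
printf 'systemd-analyse: command not found\n' > bad.svg

# An SVG should begin with an XML declaration or an <svg> element.
looks_like_svg() { head -c 200 "$1" | grep -q -e '<svg' -e '<?xml'; }

looks_like_svg good.svg && echo "good.svg: looks like an SVG"
looks_like_svg bad.svg || echo "bad.svg: not an SVG"
```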
On Tue, 2014-06-10 at 16:10 +0200, Per Jessen wrote:
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:
systemd-analyse: command not found
Oh dear. I'd been attending a course on Windows GPOs all day, so that may explain it. Sorry. Here's another go.
On Tue, 2014-06-10 at 16:58 +0200, lynn wrote:
On Tue, 2014-06-10 at 16:10 +0200, Per Jessen wrote:
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote: > (...). > Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:
systemd-analyse: command not found
Oh dear. I'd been attending a course on windows gpos all day so that may explain it. Sorry. Here's another go.
Any thoughts on the plot? Is there an equivalent for Ubuntu so we could compare and perhaps see what's going wrong?
L x
On 11/06/14 18:31, lynn wrote:
On Tue, 2014-06-10 at 16:58 +0200, lynn wrote:
On Tue, 2014-06-10 at 16:10 +0200, Per Jessen wrote:
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote: > On Sunday, 8 June 2014, 11:50:41, lynn wrote: >> (...). >> Any quick fixes? > > Any log messages? You can adjust the log level in > /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:
systemd-analyse: command not found
Oh dear. I'd been attending a course on windows gpos all day so that may explain it. Sorry. Here's another go.
I got no second plot from your emails.
--
Cristian
On Wed, 2014-06-11 at 18:55 -0400, Cristian Rodríguez wrote:
On 11/06/14 18:31, lynn wrote:
On Tue, 2014-06-10 at 16:58 +0200, lynn wrote:
On Tue, 2014-06-10 at 16:10 +0200, Per Jessen wrote:
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote: > On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote: >> On Sunday, 8 June 2014, 11:50:41, lynn wrote: >>> (...). >>> Any quick fixes? >> >> Any log messages? You can adjust the log level in >> /etc/sysconfig/autofs, default is none. >
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:
systemd-analyse: command not found
Oh dear. I'd been attending a course on windows gpos all day so that may explain it. Sorry. Here's another go.
I got no second plot from your emails..
Look for attachments: e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot. Thanks for getting back.
L x
On Thu, Jun 12, 2014 at 1:32 PM, lynn
Oh dear. I'd been attending a course on windows gpos all day so that may explain it. Sorry. Here's another go.
I got no second plot from your emails..
Look for attachments. e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot.
I do not see any second e-mail with an attachment either.
Andrey Borzenkov wrote:
On Thu, Jun 12, 2014 at 1:32 PM, lynn
wrote: Oh dear. I'd been attending a course on windows gpos all day so that may explain it. Sorry. Here's another go.
I got no second plot from your emails..
Look for attachments. e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot.
I do not see any second e-mail with attachment as well.
It was in this email, but our primitive archive doesn't seem to work with attachments:
http://lists.opensuse.org/opensuse/2014-06/msg00591.html
--
Per Jessen, Zürich (22.8°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 12/06/14 05:32, lynn wrote:
Look for attachments. e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot.
I know how to look for attachments; however, there is no second plot in your emails. If it was really included, it was stripped out by the mailing list software.
Cristian Rodríguez wrote:
On 12/06/14 05:32, lynn wrote:
Look for attachments. e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot.
I know how to look for attachments, however there is no second plot in your emails. if it was really included, it was stripped out by the mailing list software.
I got Lynn's attachment via the list. I saved it here:
http://files.jessen.ch/l.tar.gz
--
Per Jessen, Zürich (22.8°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
On 12/06/14 14:02, Per Jessen wrote:
Cristian Rodríguez wrote:
On 12/06/14 05:32, lynn wrote:
Look for attachments. e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot.
I know how to look for attachments, however there is no second plot in your emails. if it was really included, it was stripped out by the mailing list software.
I got Lynn's attachment via the list.
I saved it here:
Ok, you will have to file a bug report against the autofs package; it is not a problem with systemd.
--
Cristian
On Thu, 2014-06-12 at 10:58 -0400, Cristian Rodríguez wrote:
On 12/06/14 05:32, lynn wrote:
Look for attachments. e.g. in Evolution, posts with a paper clip icon have attachments. You will find an attachment called l.tar.gz. In it is a plot.
I know how to look for attachments, however there is no second plot in your emails. if it was really included, it was stripped out by the mailing list software.
Mmm. It arrived here OK. Let me try again. If that fails, I'll dropbox it or something.
lynn wrote:
Any thoughts on the plot? Is there an equivalent for Ubuntu so we could compare and perhaps see what's going wrong? L x
----
I am a bit perplexed as to how a plot of systemd's boot time might help debug a problem in autofs. I see no detail on the plot that would give insight into anything inside autofs that would be useful in debugging this problem. I.e. it looks like the desire for a systemd bootgraph resulted in approximately a four-day waste of time, as the posted graph contains none of the detail necessary to debug this problem.

Looking at the man page for autofs, it seems it caches things at its master-file level while handling changes below that level. It might be more instructive (besides looking at the manpage) to look at autofs's config files. If the directives to access home dirs are in the master file, there's a fair chance the info you need to change is cached and not reloaded until the daemon is restarted.

Are the autofs files the same between the two clients (suse+ubuntu)? How about the autofs versions? Another thing to look at might be build options -- i.e. if the config files are the same, then something in the suse config is caching the fact that the host you want is missing, in the suse version but not in the ubuntu version. Looking at the source and build options to see which ones might affect caching might be another step (all other things being the same, of which the systemd boot graph says nothing). Is ubuntu running systemd?

It's possible queries from autofs are hitting some cache on the suse system that doesn't exist on the ubuntu system?

Just some ideas... now that the pretty pictures are out of the way... ;-)
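Setting the boot graph aside, the debug trace earlier in the thread does carry one hard number: the kernel timestamps show roughly six seconds between the CIFS modules loading and the socket error. That looks like a real connect attempt that timed out, arguably not the instant rejection a purely local cached answer would produce. The arithmetic, for reference:

```shell
# Kernel timestamps from the failing trace: FS-Cache loaded at 491.419621 s,
# "Error connecting to socket" at 497.706906 s.
awk 'BEGIN { printf "%.1f s from CIFS module load to socket error\n", 497.706906 - 491.419621 }'
```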
On Thu, 2014-06-12 at 15:00 -0700, Linda Walsh wrote:
lynn wrote:
Any thoughts on the plot? Is there an equivalent for Ubuntu so we could compare and perhaps see what's going wrong? L x
---- I am a bit perplexed as to how a plot of systemd's boot time might help debug a problem in autofs? I see no detail on the plot that would give insight into anything inside 'autofs', that would be useful in debugging this problem.
I.e. it looks like the desire for a systemd bootgraph resulted in approximately a 4 day waste of time as the posted graph contains none of the detail necessary to debug this problem.
Looking at the man page for autofs, it seems it caches things at it's master file level while handling changes below that level.
It might be more instructive (besides looking at the manpage) to look at autofs's config files. If the directives to access home dirs are in the master file there's a fair chance the info that you need to change is cached and not reloaded until the daemon is restarted.
Are the autofs files the same between the two clients (suse+ubuntu)?
how about the autofs versions? Another thing to look at might be build options -- i.e. if the config files are the same, then something in the suse config is caching the fact that the host you want is missing in the suse version, but not in the ubuntu version.
Looking at the source and build options to see which ones might affect caching might be another step (all other things being the same, of which the systemd-boot graph says nothing). Is ubuntu running systemd?
It's possible queries from autofs are hitting some cache on the suse system that doesn't exist on the ubuntu system ??
Just some ideas... now that the pretty pictures are out of the way...;-)
Hi
Thanks for reopening this for us and giving some new ideas. Although, you gotta admit, those systemd pics _were_ pretty. I mean, full colour too. Fully scalable svg. No command line or log file nonsense for these guys;)

Could I offer a little more detail?

/etc/default/autofs and /etc/sysconfig/autofs are identical but not used. The autofs maps are stored in AD and are extracted by sssd.

autofs packages:
openSUSE 13.1: v5.0.8
Ubuntu 14.04: v5.0.7

The one difference we do see which is relevant in our case is in the Ubuntu changelog:

autofs (5.0.7-3ubuntu3) trusty; urgency=medium
  * fix-ssslib-search.patch: Don't skip sssd autofs search if presseded.
  * rules: set ssslibdir correctly when calling configure (LP: #1081489)

which points to problems they had against sssd but which, as we've seen, is now fixed. Did 13.1 get these fixes too?

Dare I post any other details? smb.conf:

1. DC

[global]
workgroup = HH3
realm = HH3.SITE
netbios name = HH16
server role = active directory domain controller
server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate
idmap_ldb:use rfc2307 = yes

[netlogon]
path = /usr/local/samba/var/locks/sysvol/hh3.site/scripts
read only = No

[sysvol]
path = /usr/local/samba/var/locks/sysvol
read only = No

2. fs

[global]
workgroup = HH3
realm = HH3.SITE
security = ADS
kerberos method = system keytab
username map = /home/xxx/smbmap
#log level = 10

[users]
path = /home/users
read only = No

[profiles]
path = /home/profiles
read only = No

[shared]
path = /home/shared
read only = No

Underlying ext4 with all acls set at file system level.

3.
ws (both 13.1 and 14.04) [global] workgroup = HH3 realm = HH3.SITE security = ADS kerberos method = secrets and keytab /etc/krb5.conf (on all boxes) [libdefaults] default_realm = HH3.SITE dns_lookup_realm = false dns_lookup_kdc = true default_ccache_name = /tmp/krb5cc_%{uid} [realms] HH3.SITE = { kdc = hh16.hh3.site:88 } /etc/sssd/sssd.conf (on ws) [sssd] services = nss, pam, autofs config_file_version = 2 domains = default [nss] [pam] [autofs] [domain/default] ad_hostname = xxx.hh3.site ad_server = hh16.hh3.site ad_domain = hh3.site ldap_schema = ad id_provider = ad access_provider = ad auth_provider = ad chpass_provider = ad ldap_id_mapping=false ldap_sasl_mech = gssapi ldap_sasl_authid = xxx$@HH3.SITE krb5_keytab = /etc/krb5.keytab ldap_krb5_init_creds = true autofs_provider=ldap autofs_search_base = OU=automount,DC=hh3,DC=site ldap_autofs_map_object_class = automountMap ldap_autofs_entry_object_class = automount ldap_autofs_map_name = automountMapName ldap_autofs_entry_key = automountKey ldap_autofs_entry_value = automountInformation The cifs calls are the same on both distros so I think we can rule out cifs-utils. DNS: host returns correctly everywhere for both the forward and reverse zones. K duerman bien, L x -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
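One way to answer "Did 13.1 get these fixes too?" is to search each distro's packaged changelog for the sssd-related entries. A minimal sketch, under assumptions: the grep terms are guesses at what the relevant entries contain, and the per-distro commands are only printed here (they need the respective system's package tools to actually run):

```shell
# Print the per-distro changelog commands rather than run them (assumption:
# standard package tools on each side).
echo 'openSUSE: rpm -q --changelog autofs | grep -i sss'
echo 'Ubuntu:   apt-get changelog autofs | grep -i sss'

# The filtering itself, demonstrated on the two Ubuntu changelog entries
# quoted above; grep -ic counts matching lines case-insensitively.
printf '%s\n' \
  "* fix-ssslib-search.patch: Don't skip sssd autofs search if presseded." \
  '* rules: set ssslibdir correctly when calling configure (LP: #1081489)' \
  | grep -ic sss
```

If the openSUSE changelog shows nothing sssd-related while Ubuntu's does, that would be a concrete lead for a bug report.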
lynn wrote:
On Thu, 2014-06-12 at 15:00 -0700, Linda Walsh wrote:
Are the autofs files the same between the two clients (suse+ubuntu)?
---- The clients are the same, you said? Have you looked at how they are built (i.e. suse src rpm and similar for ubuntu -- see if they used the same options)? Does autofs use a cache file anywhere?
how about the autofs versions? Another thing to look at might be build options -- i.e. if the config files are the same, then something in the suse config is caching the fact that the host you want is missing in the suse version, but not in the ubuntu version.
Looking at the source and build options to see which ones might affect caching might be another step (all other things being the same, of which the systemd-boot graph says nothing). Is ubuntu running systemd?
^^^^^^^^^^^^^^^^^^^^^^^^^ systemd? ubuntu?
It's possible queries from autofs are hitting some cache on the suse system that doesn't exist on the ubuntu system ??
Just some ideas... now that the pretty pictures are out of the way...;-)
---- I think I could get a similar graph to the one you posted, I agree, it was pretty, but I'd have to do a bit more work.

As it is, I could only easily find the kernel time (attached) -- which ends up being about 1/3 as long using sysV and direct boot vs. systemd. I also read that for speed, some kernel devs suggest chucking udev -- apparently it is pretty slow compared to some alternatives (uboot+mdev).

I seem to remember ubuntu using some of those alternate boot methods. Maybe between caching differences and a large variation in boot order and speed, ubuntu isn't hitting the same problem areas.

To compare the two -- you might turn autofs "off", then manually start it on both systems and see if you see the same behaviors. At least then we could rule out boot speed, and maybe order, as well.

FWIW, I am still at samba 3.x, so most of your config is lost on me...(sigh)... sorry.
On Thu, 2014-06-12 at 20:00 -0700, Linda Walsh wrote:
lynn wrote:
On Thu, 2014-06-12 at 15:00 -0700, Linda Walsh wrote:
Are the autofs files the same between the two clients (suse+ubuntu)?
---- The clients are the same you said? Have you looked at how they are built (i.e. suse src rpm and similar for ubuntu -- see if they used same options)
does autofs use a cache file anywhere?
Mmm. I wonder. Not autofs as such, but both sssd and kerberos have their own cache.

One problem in 13.1 is that systemd puts the root ticket under /var/run/user/0, whereas Kerberos needs it in /tmp. Of course, if root has not logged in, that directory will not exist. root will never have logged in on a client machine anyway, so this is one of our systemd annoyances. The workaround is the krb5.conf file we quoted.

The other caches are the ldbs maintained by sssd under /var/lib/sss/db. The machine ticket cache for the automounter is also stored there, but as sssd does its own kerberos implementation, it's not necessary for that ticket to be in /tmp. Is it? Those work like nscd for getent. Forget to clear them, change something, and nothing happens.

Something else that narrows this down is that with the maps and nss as files (such as /etc/auto.master), the shares get mounted even if the file server was down at boot. This is why the suggestion to bugzilla against autofs seems wrong. But that doesn't get us away from an Ubuntu ws working out of the box against the same DC and fs.

Sorry folks for rambling on. Sometimes thinking out loud will bring out the obvious I normally overlook. The m$ mob are pointing and clicking on the test domain ATM so everyone else is not welcome.
how about the autofs versions? Another thing to look at might be build options -- i.e. if the config files are the same, then something in the suse config is caching the fact that the host you want is missing in the suse version, but not in the ubuntu version.
Looking at the source and build options to see which ones might affect caching might be another step (all other things being the same, of which the systemd-boot graph says nothing). Is ubuntu running systemd?
^^^^^^^^^^^^^^^^^^^^^^^^^ systemd? ubuntu?
It's possible queries from autofs are hitting some cache on the suse system that doesn't exist on the ubuntu system ??
Just some ideas... now that the pretty pictures are out of the way...;-)
---- I think I could get a similar graph to the one you posted, I agree, it was pretty, but I'd have to do a bit more work.
As it is, I only could easily find the kernel time (attached)-- which ends up being about 1/3 as long using sysV and direct boot vs. systemd. I also read that for speed, some kernel devs suggest chucking udev -- apparently it is pretty slow compared to some alternatives (uboot+mdev).
I seem to remember ubuntu using some of those alternate boot methods. Maybe between caching differences and a large variation in boot order and speed, ubuntu isn't hitting the same problem areas.
To compare the two -- you might turn autofs "off", then, manually start it on both systems and see if you see the same behaviors.
At least then we could rule out boot speed and maybe order, as well.
FWIW, I am still at samba 3.x, so most of your config is lost on me...(sigh)... sorry.

The file server is exactly the same. The only (big) difference is active directory on the DC. Oh, and you don't have to put all that create mode 777 or whatever smb.conf nonsense any more.
Thanks for your input,
L x
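To rule the sssd ldb caches in or out, they can be flushed and everything restarted. A hedged sketch: sss_cache is sssd's own invalidation tool, and cache_default.ldb matches the "default" domain in the sssd.conf quoted earlier; the script is dry-run by default so nothing is touched by accident.

```shell
#!/bin/sh
# Dry-run by default: set DRY_RUN=0 to actually execute (assumption: run as
# root, sssd installed with its default cache location /var/lib/sss/db).
: "${DRY_RUN:=1}"
run() {
    # Print the command in dry-run mode; execute it otherwise.
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run sss_cache -E                              # invalidate all cached sssd entries
run systemctl stop sssd
run rm -f /var/lib/sss/db/cache_default.ldb   # the per-domain ldb cache
run systemctl start sssd
run systemctl restart autofs                  # re-read the maps with a cold cache
```

If the mounts then appear without a reboot, the ldb caches are implicated; if not, they can probably be crossed off the list.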
lynn wrote:
The file server is exactly the same. The only (big) difference is active directory on the DC. Oh, and you don't have to put all that create mode 777 or whatever smb.conf nonsense any more.
Thanks for your input, L x
A possible 'kludge' -- you get specific error messages in the log when autofs can't find a home dir to mount. Assuming this doesn't happen *that* often, could something monitor autofs's log for those messages, and restart it if they occurred?

It'd be a last-resort kludge, but might deal with the problem until the underlying problem was found... ??? Just an idea....

Might have to tune the sensitivity to only restart after getting some minimum number of messages -- dunno, but it would be a kludge.. (*sigh*)...
On Fri, 2014-06-13 at 10:18 -0700, Linda Walsh wrote:
lynn wrote:
The file server is exactly the same. The only (big) difference is active directory on the DC. Oh, and you don't have to put all that create mode 777 or whatever smb.conf nonsense any more.
Thanks for your input, L x
A possible 'kludge' -- you get specific error messages in the log when autofs can't find a home dir to mount. Assuming this doesn't happen *that* often, could something monitor autofs's log for those messages, and restart it if they occurred?
It'd be a last-resort kludge, but might deal with the problem until the underlying problem was found... ??? Just an idea....
Might have to tune the sensitivity to only restart after getting some minimum number of messages-- dunno, but it would be a kludge.. (*sigh*)...
Hi Linda

Yes, I've just tested restarting autofs in after.local but no joy. It seems that after.local no longer works the way it used to, running directly after the runlevel is reached. Indeed, the pretty graph is useful as it confirms that!

Your idea of the error messages is good. It logs to /var/log/messages, so I'm just thinking how to do that. Maybe grep it to find a user login and a failed mount. But how do we know when to grep?

Anyway. . .
L x
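For the "how do we know when to grep?" question, one hedged sketch: a predicate fed with fresh log lines (e.g. from `tail -Fn0 /var/log/messages`) that matches the cifs_mount failure quoted later in this thread. The actual restart command is left as a comment since it needs a live system; the pattern and log path are assumptions based on the messages shown here.

```shell
#!/bin/sh
# Decide, from a batch of new log lines, whether autofs needs a kick.
# Pattern taken from the kernel errors quoted in this thread.
needs_restart() {
    grep -q 'cifs_mount failed'
}

# Demo on the two kernel lines seen when the mount fails:
if printf '%s\n' \
    'catral kernel: CIFS VFS: Error connecting to socket. Aborting operation.' \
    'catral kernel: CIFS VFS: cifs_mount failed w/return code = -113' \
    | needs_restart
then
    echo "would restart autofs"
    # systemctl restart autofs.service   # on the real box
fi
```

In production this would sit in a loop behind `tail -Fn0`, possibly with a counter so one transient failure doesn't trigger a restart.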
lynn wrote:
On Fri, 2014-06-13 at 10:18 -0700, Linda Walsh wrote:
lynn wrote:
The file server is exactly the same. The only (big) difference is active directory on the DC. Oh, and you don't have to put all that create mode 777 or whatever smb.conf nonsense any more.
Thanks for your input, L x
A possible 'kludge' -- you get specific error messages in the log when autofs can't find a home dir to mount. Assuming this doesn't happen *that* often, could something monitor autofs's log for those messages, and restart it if they occurred?
It'd be a last-resort kludge, but might deal with the problem until the underlying problem was found... ??? Just an idea....
Might have to tune the sensitivity to only restart after getting some minimum number of messages-- dunno, but it would be a kludge.. (*sigh*)...
Hi Linda

Yes, I've just tested restarting autofs in after.local but no joy. It seems that after.local no longer works the way it used to, running directly after the runlevel is reached. Indeed, the pretty graph is useful as it confirms that!

Your idea of the error messages is good. It logs to /var/log/messages, so I'm just thinking how to do that. Maybe grep it to find a user login and a failed mount. But how do we know when to grep?

Anyway. . .
L x
---- Ideally, you'd have the log daemon filter messages from 'autofs' into its own separate log. That way, when that log file 'grows', you could have something take action.

If you have famd installed, it can notify you when something has changed with the file.

Installing the perl module to monitor things was simple enough for me to use for an automake on any file changing recently... it installs a file named "monitor". Here's a stripped-down version so you see how simple it is (if you install the package, monitor will be in /usr/bin).

FWIW, to install the package, if you have cpan installed, then this should work:
cpan -i SGI::FAM
#!/usr/bin/perl
use SGI::FAM;
use P;

my $fam = new SGI::FAM;
my @paths;
foreach (@ARGV) {
    unless ($_) { P "Need path(s) to monitor"; exit(0); }
    if (-e $_) { $fam->monitor($_); push @paths, $_; }
}
unless (@paths) { P "No paths to monitor. Exiting."; exit(1); }
# over half the prog was arg checking! ^^

while (1) {
    do {
        my $ev = $fam->next_event;
        my @event = ($ev->type, undef, $ev->filename);
        P "%15s: %s", $ev->type, $ev->filename;
    } while $fam->pending;
}

So this can monitor a directory or a file. Writes out what happens:

change: xosview.cc.orig
change: xosview.cc
change: .xosview.cc.swp
create: xosview.o
change: .xosview.cc.swp
change: xosview.o
delete: xosview.o
delete: xosview.o
create: xosview.o
change: xosview.o
delete: xosview.o
delete: xosview.o
create: xosview.o
change: xosview.o
delete: xosview.o

---
Each time you see a change in the log, check its size, and if larger, read the new stuff to find indications of an error. If so, then restart or maybe even 'reload' would work... (dunno if reload is supported).

You'd basically launch the monitor at system startup, which would sit around monitoring the autofs logfile...

Like I said, it's a kludge, BUT, with many pieces of software today, kludges get you what you want most quickly and with least fuss. Expecting to find the problem or for it to be fixed in any reasonable timeframe is a lost cause in my experience...
On Mon, 2014-06-16 at 00:23 -0700, Linda Walsh wrote:
lynn wrote:
On Fri, 2014-06-13 at 10:18 -0700, Linda Walsh wrote:
lynn wrote:
The file server is exactly the same. The only (big) difference is active directory on the DC. Oh, and you don't have to put all that create mode 777 or whatever smb.conf nonsense any more.
Thanks for your input, L x
A possible 'kludge' -- you get specific error messages in the log when autofs can't find a home dir to mount. Assuming this doesn't happen *that* often, could something monitor autofs's log for those messages, and restart it if they occurred?
It'd be a last-resort kludge, but might deal with the problem until the underlying problem was found... ??? Just an idea....
Might have to tune the sensitivity to only restart after getting some minimum number of messages-- dunno, but it would be a kludge.. (*sigh*)...
Hi Linda

Yes, I've just tested restarting autofs in after.local but no joy. It seems that after.local no longer works the way it used to, running directly after the runlevel is reached. Indeed, the pretty graph is useful as it confirms that!

Your idea of the error messages is good. It logs to /var/log/messages, so I'm just thinking how to do that. Maybe grep it to find a user login and a failed mount. But how do we know when to grep?

Anyway. . .
L x
---- Ideally, you'd have the log daemon filter messages from 'autofs' into its own separate log. That way, when that log file 'grows', you could have something take action.
If you have famd installed, it can notify you when something has changed with the file.
Installing the perl module to monitor things was simple enough for me to use for an automake on any file changing recently... it installs a file named "monitor". Here's a stripped-down version so you see how simple it is (if you install the package, monitor will be in /usr/bin).
FWIW, to install the package, if you have cpan installed,
then this should work:
cpan -i SGI::FAM
#!/usr/bin/perl
use SGI::FAM;
use P;

my $fam = new SGI::FAM;
my @paths;
foreach (@ARGV) {
    unless ($_) { P "Need path(s) to monitor"; exit(0); }
    if (-e $_) { $fam->monitor($_); push @paths, $_; }
}
unless (@paths) { P "No paths to monitor. Exiting."; exit(1); }
# over half the prog was arg checking! ^^

while (1) {
    do {
        my $ev = $fam->next_event;
        my @event = ($ev->type, undef, $ev->filename);
        P "%15s: %s", $ev->type, $ev->filename;
    } while $fam->pending;
}
So this can monitor a directory or a file. Writes out what happens:
change: xosview.cc.orig
change: xosview.cc
change: .xosview.cc.swp
create: xosview.o
change: .xosview.cc.swp
change: xosview.o
delete: xosview.o
delete: xosview.o
create: xosview.o
change: xosview.o
delete: xosview.o
delete: xosview.o
create: xosview.o
change: xosview.o
delete: xosview.o

---
Each time you see a change in the log, check its size, and if larger, read the new stuff to find indications of an error.
If so, then restart or maybe even 'reload' would work...(dunno if reload is supported).
You'd basically launch the monitor at system startup, which would sit around monitoring the autofs logfile...
Like I said, it's a kludge, BUT, with many pieces of software today, kludges get you what you want most quickly and with least fuss.
Expecting to find the problem or for it to be fixed in any reasonable timeframe is a lost cause in my experience...
You are right. It is unlikely we will get this solved, and a bugzilla against autofs with only a systemd graph to go on will, I think, get us nowhere. We currently have a cron job to restart autofs. Your monitor solution looks a thousand times more elegant.

Thanks for your time.
L x
On 10/06/14 10:10, Per Jessen wrote:
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:
systemd-analyse: command not found
It is with a "z" ..
--
Cristian
"I don't know the key to success, but the key to failure is trying to please everybody."
On Tue, 2014-06-10 at 13:51 -0400, Cristian Rodríguez wrote:
On 10/06/14 10:10, Per Jessen wrote:
lynn wrote:
On Mon, 2014-06-09 at 12:49 -0400, Cristian Rodríguez wrote:
On 09/06/14 12:30, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote: > (...). > Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Ok, please post the result of:
systemd-analyze plot > lynn.svg
xz -9 lynn.svg
and attach the resulting .xz file here.
FYI, that file seems to mostly contain:
systemd-analyse: command not found
It is with a "z" ..
'z' used on second attempt, with apologies.

L x
On Mon, 09 Jun 2014 18:30:15 +0200, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Hi

We set it to debug.

1. The ws is booted without the fs
2. The fs is made available
3. A domain user with a ticket requests a folder:
2014-06-09T18:03:34.819140+02:00 catral automount[1772]: handle_packet: type = 3
2014-06-09T18:03:34.849840+02:00 catral automount[1772]: handle_packet_missing_indirect: token 1, name julie, request pid 1523
2014-06-09T18:03:34.853631+02:00 catral automount[1772]: attempting to mount entry /home/users/julie
2014-06-09T18:03:34.870186+02:00 catral automount[1772]: lookup_mount: lookup(sss): looking up julie
2014-06-09T18:03:34.880258+02:00 catral automount[1772]: lookup_mount: lookup(sss): julie -> -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&
2014-06-09T18:03:34.884806+02:00 catral automount[1772]: parse_mount: parse(sun): expanded entry: -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/julie
2014-06-09T18:03:34.893502+02:00 catral automount[1772]: parse_mount: parse(sun): gathered options: fstype=cifs,sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.909155+02:00 catral automount[1772]: parse_mount: parse(sun): dequote("://altea/users/julie") -> ://altea/users/julie
2014-06-09T18:03:34.920933+02:00 catral automount[1772]: parse_mount: parse(sun): core of entry: options=fstype=cifs,sec=krb5,username=cifsuser,multiuser, loc=://altea/users/julie
2014-06-09T18:03:34.930533+02:00 catral automount[1772]: sun_mount: parse(sun): mounting root /home/users, mountpoint julie, what //altea/users/julie, fstype cifs, options sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.946003+02:00 catral automount[1772]: do_mount: //altea/users/julie /home/users/julie type cifs options sec=krb5,username=cifsuser,multiuser using module generic
2014-06-09T18:03:34.956471+02:00 catral automount[1772]: mount_mount: mount(generic): calling mkdir_path /home/users/julie
2014-06-09T18:03:34.983125+02:00 catral automount[1772]: mount_mount: mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
I wonder - on behalf of which user does it connect? I.e. it needs a Kerberos ticket, right? When is this ticket acquired?
On Mon, 2014-06-16 at 07:02 +0400, Andrey Borzenkov wrote:
On Mon, 09 Jun 2014 18:30:15 +0200, lynn wrote:

mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
                                            ^^^^^^^^^
I wonder - on behalf of which user does it connect? I.e. it needs a Kerberos ticket, right? When is this ticket acquired?
The user is already a ticket holder at the time she makes the request for the cifs/ service. cifsuser is a domain user with just enough privileges to make the mount on behalf of whoever requests it. The upcall obtains the service ticket on behalf of the user at the time of the mount request. This is invisible to the user, as the cifsuser key is made available via the keytab.

Maybe this is not the best way to do it?

Thanks,
L x
On Mon, 16 Jun 2014 13:18:34 +0200, lynn wrote:
On Mon, 2014-06-16 at 07:02 +0400, Andrey Borzenkov wrote:
On Mon, 09 Jun 2014 18:30:15 +0200, lynn wrote:

mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
                                            ^^^^^^^^^
I wonder - on behalf of which user does it connect? I.e. it needs a Kerberos ticket, right? When is this ticket acquired?
The user is already a ticket holder at the time she makes the request for the cifs/ service. cifsuser is a domain user with just enough privileges to make the mount on behalf of whoever requests it. The upcall obtains the service ticket on behalf of the user at the time of the mount request. This is invisible to the user as the cifsuser key is made available via the keytab.
Maybe this is not the best way to do it?
Oh, I really do not know. I just try to more or less randomly poke around at possible issues.
On Mon, 09 Jun 2014 18:30:15 +0200, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Hi

We set it to debug.

1. The ws is booted without the fs
2. The fs is made available
3. A domain user with a ticket requests a folder:
2014-06-09T18:03:34.819140+02:00 catral automount[1772]: handle_packet: type = 3
2014-06-09T18:03:34.849840+02:00 catral automount[1772]: handle_packet_missing_indirect: token 1, name julie, request pid 1523
2014-06-09T18:03:34.853631+02:00 catral automount[1772]: attempting to mount entry /home/users/julie
2014-06-09T18:03:34.870186+02:00 catral automount[1772]: lookup_mount: lookup(sss): looking up julie
2014-06-09T18:03:34.880258+02:00 catral automount[1772]: lookup_mount: lookup(sss): julie -> -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&
2014-06-09T18:03:34.884806+02:00 catral automount[1772]: parse_mount: parse(sun): expanded entry: -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/julie
2014-06-09T18:03:34.893502+02:00 catral automount[1772]: parse_mount: parse(sun): gathered options: fstype=cifs,sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.909155+02:00 catral automount[1772]: parse_mount: parse(sun): dequote("://altea/users/julie") -> ://altea/users/julie
2014-06-09T18:03:34.920933+02:00 catral automount[1772]: parse_mount: parse(sun): core of entry: options=fstype=cifs,sec=krb5,username=cifsuser,multiuser, loc=://altea/users/julie
2014-06-09T18:03:34.930533+02:00 catral automount[1772]: sun_mount: parse(sun): mounting root /home/users, mountpoint julie, what //altea/users/julie, fstype cifs, options sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.946003+02:00 catral automount[1772]: do_mount: //altea/users/julie /home/users/julie type cifs options sec=krb5,username=cifsuser,multiuser using module generic
2014-06-09T18:03:34.956471+02:00 catral automount[1772]: mount_mount: mount(generic): calling mkdir_path /home/users/julie
2014-06-09T18:03:34.983125+02:00 catral automount[1772]: mount_mount: mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
2014-06-09T18:03:35.085167+02:00 catral automount[1772]: spawn_mount: mtab link detected, passing -n to mount
2014-06-09T18:03:35.514859+02:00 catral kernel: [  491.419621] FS-Cache: Loaded
2014-06-09T18:03:35.697266+02:00 catral kernel: [  491.614892] FS-Cache: Netfs 'cifs' registered for caching
2014-06-09T18:03:35.700700+02:00 catral kernel: [  491.614978] Key type cifs.spnego registered
2014-06-09T18:03:35.707239+02:00 catral kernel: [  491.615018] Key type cifs.idmap registered
2014-06-09T18:03:41.785160+02:00 catral kernel: [  497.706906] CIFS VFS: Error connecting to socket. Aborting operation.
2014-06-09T18:03:41.798063+02:00 catral kernel: [  497.718410] CIFS VFS: cifs_mount failed w/return code = -113
This message comes from the cifs driver when it fails to connect to the CIFS server. The error number is EHOSTUNREACH - Host unreachable. The IP address used during connection is resolved by mount.cifs from the UNC path passed as argument, in this case "altea".

The obvious test at this point would be to attempt the mount manually after boot using the same options, i.e.

mount -t cifs -n -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /mnt

Additionally, you can try to add the same command to be executed after autofs is launched, i.e. add to the [Service] section in /usr/lib/systemd/system/autofs.service (one line of course):

ExecStartPost=-/usr/bin/mount -t cifs -n -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /mnt

This may show whether the problem is in the network configuration at the time mount is executed, in the service environment, or related to autofs.
On Mon, 09 Jun 2014 18:30:15 +0200, lynn wrote:
On Sun, 2014-06-08 at 17:20 +0200, Jan Ritzerfeld wrote:
On Sunday, 8 June 2014, 11:50:41, lynn wrote:
(...). Any quick fixes?
Any log messages? You can adjust the log level in /etc/sysconfig/autofs, default is none.
Hi

We set it to debug.

1. The ws is booted without the fs
2. The fs is made available
3. A domain user with a ticket requests a folder:
2014-06-09T18:03:34.819140+02:00 catral automount[1772]: handle_packet: type = 3
2014-06-09T18:03:34.849840+02:00 catral automount[1772]: handle_packet_missing_indirect: token 1, name julie, request pid 1523
2014-06-09T18:03:34.853631+02:00 catral automount[1772]: attempting to mount entry /home/users/julie
2014-06-09T18:03:34.870186+02:00 catral automount[1772]: lookup_mount: lookup(sss): looking up julie
2014-06-09T18:03:34.880258+02:00 catral automount[1772]: lookup_mount: lookup(sss): julie -> -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/&
2014-06-09T18:03:34.884806+02:00 catral automount[1772]: parse_mount: parse(sun): expanded entry: -fstype=cifs,sec=krb5,username=cifsuser,multiuser ://altea/users/julie
2014-06-09T18:03:34.893502+02:00 catral automount[1772]: parse_mount: parse(sun): gathered options: fstype=cifs,sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.909155+02:00 catral automount[1772]: parse_mount: parse(sun): dequote("://altea/users/julie") -> ://altea/users/julie
2014-06-09T18:03:34.920933+02:00 catral automount[1772]: parse_mount: parse(sun): core of entry: options=fstype=cifs,sec=krb5,username=cifsuser,multiuser, loc=://altea/users/julie
2014-06-09T18:03:34.930533+02:00 catral automount[1772]: sun_mount: parse(sun): mounting root /home/users, mountpoint julie, what //altea/users/julie, fstype cifs, options sec=krb5,username=cifsuser,multiuser
2014-06-09T18:03:34.946003+02:00 catral automount[1772]: do_mount: //altea/users/julie /home/users/julie type cifs options sec=krb5,username=cifsuser,multiuser using module generic
2014-06-09T18:03:34.956471+02:00 catral automount[1772]: mount_mount: mount(generic): calling mkdir_path /home/users/julie
2014-06-09T18:03:34.983125+02:00 catral automount[1772]: mount_mount: mount(generic): calling mount -t cifs -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /home/users/julie
2014-06-09T18:03:35.085167+02:00 catral automount[1772]: spawn_mount: mtab link detected, passing -n to mount
2014-06-09T18:03:35.514859+02:00 catral kernel: [  491.419621] FS-Cache: Loaded
2014-06-09T18:03:35.697266+02:00 catral kernel: [  491.614892] FS-Cache: Netfs 'cifs' registered for caching
2014-06-09T18:03:35.700700+02:00 catral kernel: [  491.614978] Key type cifs.spnego registered
2014-06-09T18:03:35.707239+02:00 catral kernel: [  491.615018] Key type cifs.idmap registered
2014-06-09T18:03:41.785160+02:00 catral kernel: [  497.706906] CIFS VFS: Error connecting to socket. Aborting operation.
2014-06-09T18:03:41.798063+02:00 catral kernel: [  497.718410] CIFS VFS: cifs_mount failed w/return code = -113
This message comes from the cifs driver when it fails to connect to the CIFS server. The error number is EHOSTUNREACH - Host unreachable. The IP address used during connection is resolved by mount.cifs from the UNC path passed as argument, in this case "altea".
The obvious test at this point would be to attempt the mount manually after boot using the same options, i.e.
mount -t cifs -n -s -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /mnt

(replying to Andrey Borzenkov's message of Mon, 2014-06-16 at 22:59 +0400)

Hi
Yes, this falls into one of two outcomes and depends only upon whether the file server is up.
Additionally, you can try to add the same command to be executed after autofs is launched, i.e. add to the [Service] section in /usr/lib/systemd/system/autofs.service (one line of course):
ExecStartPost=-/usr/bin/mount -t cifs -n -o sec=krb5,username=cifsuser,multiuser //altea/users/julie /mnt
This may show whether the problem is in the network configuration at the time mount is executed, in the service environment, or related to autofs.
[Unit] Description=Automounts filesystems on demand After=network.target remote-fs.target ypbind.service [Service] Type=forking PIDFile=/var/run/automount.pid EnvironmentFile=-/etc/sysconfig/autofs ExecStart=/usr/sbin/automount ${AUTOFS_OPTIONS} -p /var/run/automount.pid ExecReload=/usr/bin/kill -HUP $MAINPID TimeoutSec=180 ExecStartPost=-/usr/bin/mount -t cifs -n -o sec=krb5,username=cifsuser,multiuser [Install] WantedBy=multi-user.target (The line has wrapped in the -e-mail) boot: no fs available 2014-06-17T13:27:30.301046+02:00 catral systemd[1]: Stopping Automounts filesyst ems on demand... 2014-06-17T13:27:30.317974+02:00 catral systemd[1]: Starting Automounts filesyst ems on demand... 2014-06-17T13:27:30.345979+02:00 catral automount[619]: autofs stopped 2014-06-17T13:27:30.647989+02:00 catral automount[777]: Starting automounter ver sion 5.0.8, master map auto.master 2014-06-17T13:27:30.660586+02:00 catral automount[777]: using kernel protocol ve rsion 5.02 2014-06-17T13:27:30.763753+02:00 catral automount[777]: lookup_nss_read_master: reading master sss auto.master 2014-06-17T13:27:30.769950+02:00 catral automount[777]: parse_init: parse(sun): init gathered global options: (null) 2014-06-17T13:27:30.832108+02:00 catral automount[777]: spawn_mount: mtab link d etected, passing -n to mount 2014-06-17T13:27:30.971278+02:00 catral automount[777]: spawn_umount: mtab link detected, passing -n to mount 2014-06-17T13:27:31.086610+02:00 catral automount[777]: setautomntent: lookup(sss): setautomntent: Connection refused 2014-06-17T13:27:31.109725+02:00 catral automount[777]: no mounts in table Nothing is mounted at /mnt (as expected) but that Connection refused looks interesting. - - - start fs and systemctl restart autofs 2014-06-17T13:22:51.316111+02:00 catral systemd[1]: Stopping Automounts filesystems on demand... 2014-06-17T13:22:51.448930+02:00 catral systemd[1]: Starting Automounts filesystems on demand... 
2014-06-17T13:22:51.455056+02:00 catral automount[777]: autofs stopped
2014-06-17T13:22:51.661719+02:00 catral automount[1538]: Starting automounter version 5.0.8, master map auto.master
2014-06-17T13:22:51.699872+02:00 catral automount[1538]: using kernel protocol version 5.02
2014-06-17T13:22:51.753339+02:00 catral automount[1538]: lookup_nss_read_master: reading master sss auto.master
2014-06-17T13:22:51.797822+02:00 catral automount[1538]: parse_init: parse(sun): init gathered global options: (null)
2014-06-17T13:22:51.843814+02:00 catral automount[1538]: spawn_mount: mtab link detected, passing -n to mount
2014-06-17T13:22:51.998636+02:00 catral automount[1538]: spawn_umount: mtab link detected, passing -n to mount
2014-06-17T13:22:52.657455+02:00 catral automount[1538]: master_do_mount: mounting /home/shared
2014-06-17T13:22:52.665725+02:00 catral automount[1538]: automount_path_to_fifo: fifo name /run/autofs.fifo-home-shared
2014-06-17T13:22:52.678760+02:00 catral automount[1538]: lookup_nss_read_map: reading map sss auto.shared
2014-06-17T13:22:52.690410+02:00 catral automount[1538]: parse_init: parse(sun): init gathered global options: (null)
2014-06-17T13:22:52.706122+02:00 catral automount[1538]: spawn_mount: mtab link detected, passing -n to mount
2014-06-17T13:22:52.807515+02:00 catral automount[1538]: spawn_umount: mtab link detected, passing -n to mount
2014-06-17T13:22:53.148359+02:00 catral automount[1538]: mounted indirect on /home/shared with timeout 600, freq 150 seconds
2014-06-17T13:22:53.181298+02:00 catral automount[1538]: st_ready: st_ready(): state = 0 path /home/shared
2014-06-17T13:22:53.207749+02:00 catral automount[1538]: ghosting enabled
2014-06-17T13:22:53.261914+02:00 catral automount[1538]: master_do_mount: mounting /home/users
2014-06-17T13:22:53.270856+02:00 catral automount[1538]: automount_path_to_fifo: fifo name /run/autofs.fifo-home-users
2014-06-17T13:22:53.318516+02:00 catral automount[1538]: lookup_nss_read_map: reading map sss auto.users
2014-06-17T13:22:53.334778+02:00 catral automount[1538]: parse_init: parse(sun): init gathered global options: (null)
2014-06-17T13:22:53.410896+02:00 catral automount[1538]: mounted indirect on /home/users with timeout 600, freq 150 seconds
2014-06-17T13:22:53.421899+02:00 catral automount[1538]: st_ready: st_ready(): state = 0 path /home/users
2014-06-17T13:22:53.490559+02:00 catral automount[1538]: ghosting enabled
2014-06-17T13:22:53.509380+02:00 catral automount[1538]: master_do_mount: mounting /home/profiles
2014-06-17T13:22:53.536415+02:00 catral automount[1538]: automount_path_to_fifo: fifo name /run/autofs.fifo-home-profiles
2014-06-17T13:22:53.607288+02:00 catral automount[1538]: lookup_nss_read_map: reading map sss auto.profiles
2014-06-17T13:22:53.672954+02:00 catral automount[1538]: parse_init: parse(sun): init gathered global options: (null)
2014-06-17T13:22:53.678043+02:00 catral automount[1538]: mounted indirect on /home/profiles with timeout 600, freq 150 seconds
2014-06-17T13:22:53.721476+02:00 catral automount[1538]: st_ready: st_ready(): state = 0 path /home/profiles
2014-06-17T13:22:53.728079+02:00 catral automount[1538]: ghosting enabled
2014-06-17T13:22:53.746324+02:00 catral systemd[1]: Started Automounts filesystems on demand.

Result: still nothing is mounted at /mnt, but autofs is now working as expected.

Thanks for your time. It really doesn't matter: we have a workaround (thanks, Linda). It's just annoying that it works with other distros. I'll go character by character through the Ubuntu config and do a diff.

Thanks again,
L x
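The "setautomntent: lookup(sss): Connection refused" line in the boot log suggests automount asked sssd for the maps before sssd was accepting connections, so the sss lookup fails once and the maps never appear until a restart. One possible sketch of a fix (an assumption on my part, not something tested in this thread: it presumes the units are named autofs.service, sssd.service and network-online.target on this system) is a systemd drop-in that delays autofs until sssd and the network are fully up:

```ini
# /etc/systemd/system/autofs.service.d/wait-for-sssd.conf
# Sketch only: unit names are assumptions; verify with
# "systemctl list-units" before relying on them.
[Unit]
Wants=network-online.target
After=network-online.target sssd.service
```

After a `systemctl daemon-reload`, autofs should not start until sssd is running, so the sss map read at boot would no longer be refused.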
On Tue, 2014-06-17 at 13:52 +0200, lynn wrote:
But 10 or so minutes later, it works:

systemctl restart autofs

and:

mount
...
//altea/users/julie on /mnt type cifs (rw,relatime,vers=1.0,sec=krb5,cache=strict,multiuser,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.100,unix,posixpaths,serverino,acl,noperm,rsize=1048576,wsize=65536,actimeo=1)

Does it matter whether it's connected with a cable (or not)?
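The "works 10 minutes later" behaviour and the cable question both point at the server or link simply not being reachable when automount starts. A hedged workaround sketch for that case (assumptions: the unit is named autofs.service, and altea is the file server from this thread; the 60-second retry budget is arbitrary) is a drop-in that waits for the server before starting automount:

```ini
# /etc/systemd/system/autofs.service.d/wait-for-server.conf
# Sketch only: retries ping to the server "altea" for up to 60 s,
# then gives up and lets autofs start anyway so boot is not blocked.
[Service]
ExecStartPre=/bin/sh -c 'i=0; while [ $i -lt 60 ]; do ping -c1 -W1 altea >/dev/null 2>&1 && exit 0; i=$((i+1)); sleep 1; done; exit 0'
```

This does not fix the underlying ordering issue, but it makes the "file server not yet available at boot" window much less likely to be hit.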
participants (6)
- Andrey Borzenkov
- Cristian Rodríguez
- Jan Ritzerfeld
- Linda Walsh
- lynn
- Per Jessen