On 03/09/2016 04:09 PM, Greg Freemyer wrote:
[Big Snip]
> You seem to be overcoming a different problem than the one I have.
True.
> My NFS mount should be reliable. The NFS server and the client are both
> in a data center (in the cloud).
The point I was making wasn't so much about reliability (that was my problem
in the coffee shop) but about avoiding bind mounts, which is what I was doing
with the SUN workstation setup I started off with when describing the technique.
> This issue apparently came up because one of my container files somehow got
> deleted. (I haven't looked into that at all. It held backups from a third
> cloud VM, so I do want it to be reliable.)
Accidentally deleted files are quite another matter.
>> The "_netdev" is important.
> I'm testing now. How long should the _netdev mounts take to mount?
The _netdev is about only mounting when the network is up. If you don't get a
mount when there is a '_netdev' in there then you have a network problem.
However, the bind mount through the loop you describe
> # NFS dependent mounts here
> /srv_new/sftp-container-large      /srv/sftp                           ext4  nofail,loop  0 0
> /srv_new/portal_backup_container   /home/portal_backup/portal_backup   ext4  nofail,loop  0 0
is another matter. What's going to stop that from happening if the NFS mount
doesn't happen?

Gee, this is so much easier with systemd mount units and explicit and clear
dependencies. I don't know if the mount unit generator is dealing with this.
Perhaps you can find the relevant units in /var/run/systemd/generator/ ;
the NFS ones might be under /var/run/systemd/generator/remote-fs.target.d/
but I'm not sure about the 'loop' ones.
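For illustration, a sketch of a native mount unit for the first of those loop
mounts (paths copied from the fstab above; that the container file lives on
the NFS mount is my assumption, and the sketch is untested):

# /etc/systemd/system/srv-sftp.mount   (name must match the Where= path)
[Unit]
Description=Loop mount of the sftp container file
# pull in, and order after, whatever mount provides the container file
RequiresMountsFor=/srv_new/sftp-container-large

[Mount]
What=/srv_new/sftp-container-large
Where=/srv/sftp
Type=ext4
Options=loop

[Install]
WantedBy=multi-user.target

After a systemctl daemon-reload it can be enabled and started like any other
unit, and the dependency on the NFS mount is explicit rather than implied by
fstab ordering.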
> After reboot I immediately logged in: the NFS mount is in place but the
> _netdev mount of /srv/sftp is not.
Are you saying that you applied the "_netdev" to the entries that had "loop"?
No, that doesn't sound right. Moment .... yes, that's definitely not right.

What do the NFS monitoring tools say? nfsstat, nfsiostat
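E.g. 'nfsstat -m' lists each NFS mount with the mount options actually in
effect, and nfsiostat with an interval reports per-mount NFS I/O statistics:

# nfsstat -m
# nfsiostat 5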
>> The symlink avoids the bind mount that seems to be giving you problems.
> I'm not doing bind mounts. They are loopback mounts. If you don't know what
> those are, an example is mounting an ISO image to allow files to be accessed
> inside the ISO.
I know what they are! I use them BUT I don't put them in my fstab!
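(The ISO case mentioned above is simply something like

# mount -o loop,ro /path/to/image.iso /mnt/iso

with both paths made up here for illustration.)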
> The biggest thing I have on the NFS partition is an SFTP folder structure.
> One folder for each of my clients.
> # mkdir -p /srv/sftp/<USER>/incoming
> # mkdir -p /srv/sftp/<USER>/outgoing
> For each client I create a Linux account with a /sbin/nologin shell.
> # useradd -g sftpusers -d / -s /sbin/nologin <USER>
> I make each client the owner of the folder their files are in:
> # chown -R <USER>:sftpusers /srv/sftp/<USER>/*
> # chmod 555 /srv/sftp/<USER>/outgoing
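(A side note: the sshd configuration that goes with such SFTP-only accounts
isn't shown in the mail; one common arrangement, assuming the sftpusers group
above and a per-user chroot, is an internal-sftp match block in
/etc/ssh/sshd_config:

Subsystem sftp internal-sftp

Match Group sftpusers
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no

The chroot directory and its parents must be root-owned and not group/world
writable for sshd to accept this.)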
> Thus I need to have numerous UIDs as owners of files on the NFS mount.
> I don't think idmapd can let me do that.
In the limiting case where you 'own'/admin both ends you can even have the
same name/UID on both ends. That makes life very easy; in effect the idmapd
mapping is 1:1 and so is redundant. I note Andrei's caveat, but then I've had
control of both ends, in that I can also set up the name mapper on the 'far'
end.

The issue, which you don't make clear, is that the "far" end needs to have
the distinct set of UIDs. Let me put it this way: if on machine A you have a
huge /etc/passwd file (perhaps implemented by NIS or LDAP) with all of those
user IDs ..

Ah right, just like in the 1980s, where the SUN workstation had only a small
ROOTFS and no /home or /usr or the rest; those came via NFS. When the user
logged in it triggered the mount of the relevant "home" from the server,
mounted it on /mnt/nfs/home, and there was a symlink from /home to
/mnt/nfs/home .... but only the user's home files appeared under the mount.

We don't quite do it that way today, not since SUN developed PAM.
See pam_mkhomedir(8) and pam_mount(8):

<quote>
Name
    pam_mount - A PAM module that can mount volumes for a user session

Overview
    This module is aimed at environments with central file servers that a
    user wishes to mount on login and unmount on logout, such as
    (semi-)diskless stations where many users can logon and where statically
    mounting the entire /home from a server is a security risk, or listing
    all possible volumes in /etc/fstab is not feasible.
    ...
    The module also supports mounting local filesystems of any kind the
    normal mount utility supports, with extra code to make sure certain
    volumes are set up properly because often they need more than just a
    mount call, such as encrypted volumes. This includes SMB/CIFS, FUSE,
    dm-crypt and LUKS.
</quote>

That last might include your "loop" situation, and

<quote>
NAME
    pam_mkhomedir - PAM module to create users home directory

SYNOPSIS
    pam_mkhomedir.so [silent] [umask=mode] [skel=skeldir]

DESCRIPTION
    The pam_mkhomedir PAM module will create a users home directory if it
    does not exist when the session begins. This allows users to be present
    in central database (such as NIS, kerberos or LDAP) without using a
    distributed file system or pre-creating a large number of directories.

    The skeleton directory (usually /etc/skel/) is used to copy default
    files and also sets a umask for the creation.

    The new users home directory will not be removed after logout of the
    user.
</quote>
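To make that concrete, a pam_mount volume entry for a loop-mounted container
file could look roughly like this in /etc/security/pam_mount.conf.xml (the
user name is invented; the paths are copied from the fstab earlier in the
thread):

<volume user="someclient" fstype="ext4"
        path="/srv_new/sftp-container-large"
        mountpoint="/srv/sftp" options="loop" />

together with the usual 'auth optional pam_mount.so' and
'session optional pam_mount.so' lines in the PAM stack. Whether that is a
better fit than a static fstab or mount-unit entry for a shared, multi-user
container is another question.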
> FYI: My provider specifically doesn't allow that. They force all files
> created to be owned by one specific UID. I think they do that so they can
> sell more functional disk space at a higher per-GB price. I pay the
> "backup space" rate.
> I overcome the single-UID limitation by creating a large container file.
> I think I did
> # dd if=/dev/zero of=/srv_new/sftp-container-large count=1 seek=300GB
> # mkfs.ext4 /srv_new/sftp-container-large
> Then I do the loopback mount I showed in my fstab. With the new mount point
> I have full ability to create my SFTP folder structure.
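(Note on the container creation: with dd's default 512-byte block size,
seek=300GB would place the single written block far beyond the 300 GB mark.
A more usual way to get a sparse 300 GB container, keeping the same file
name, is

# truncate -s 300G /srv_new/sftp-container-large

or the equivalent dd if=/dev/zero of=/srv_new/sftp-container-large bs=1
count=0 seek=300G, followed by the mkfs.ext4 and the loopback mount from the
fstab earlier in the thread.)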