[opensuse-factory] Re: [opensuse-buildservice] register static UID/GID on openSUSE?
On 04/14/2015 03:04 PM, Tim Serong wrote:
Hi All,
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html). AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
Does anyone know if there are still any spare UIDs in the range 0-99? Someone told me there weren't but I thought I'd better raise this here, just in case there are. And if there are any available, how do I reserve one?
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
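As an illustration of that approach, a %pre scriptlet along these lines would do it; the UID/GID value 64045, the paths and the account name are placeholders here, not an actual allocation:

```shell
# Hypothetical rpm %pre fragment. At install time this runs as root;
# the id check merely keeps the sketch harmless when run unprivileged.
# 64045 and the paths below are placeholder values, not a real allocation.
if [ "$(id -u)" -eq 0 ]; then
    getent group ceph >/dev/null || groupadd -r -g 64045 ceph
    getent passwd ceph >/dev/null || useradd -r -u 64045 -g 64045 \
        -d /var/lib/ceph -s /sbin/nologin -c "Ceph storage daemon" ceph
fi
```

The getent guards make the scriptlet idempotent across package upgrades; they do not, however, protect against the fixed ID already being taken by some other account.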
(Pardon the cross-post, I picked what I hope are the two most suitable lists - happy to drop one or the other, or ask elsewhere as appropriate)
I've just been advised that opensuse-factory might be a better list than opensuse-project for the above question. Regards, Tim -- Tim Serong Senior Clustering Engineer SUSE tserong@suse.com -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Tim Serong schrieb:
On 04/14/2015 03:04 PM, Tim Serong wrote:
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html). AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
Does anyone know if there are still any spare UIDs in the range 0-99? Someone told me there weren't but I thought I'd better raise this here, just in case there are. And if there are any available, how do I reserve one?
There is no defined process to register uids there and I don't think we should start doing that again.
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
Currently the best way to do that is to have the admin pre-create the user before installing the package. There is still no cross-distro uid registry. The idea was taken to the LSB but saw no progress: https://github.com/LinuxStandardBase/lsb/blob/master/documents/wip/userNamin... cu Ludwig -- (o_ Ludwig Nussel //\ V_/_ http://www.suse.de/ SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg) Maxfeldstraße 5; 90409 Nürnberg; Germany
On Thursday 2015-04-16 10:40, Ludwig Nussel wrote:
Tim Serong schrieb:
On 04/14/2015 03:04 PM, Tim Serong wrote:
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html). AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
There is no defined process to register uids there and I don't think we should start doing that again.
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
As long as process credentials are not 64/128-bit UIDs similar to NTSIDs, services ought to cope with different numerical UIDs. NFS made the jump with idmapd, so I am sure ceph can come up with something too. Perhaps even reuse idmapd.
Jan Engelhardt schrieb:
On Thursday 2015-04-16 10:40, Ludwig Nussel wrote:
Tim Serong schrieb:
On 04/14/2015 03:04 PM, Tim Serong wrote:
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html). AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
There is no defined process to register uids there and I don't think we should start doing that again.
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
As long as process credentials are not 64/128-bit UIDs similar to NTSIDs, services ought to cope with different numerical UIDs. NFS made the jump with idmapd, so I am sure ceph can come up with something too. Perhaps even reuse idmapd.
I agree. There are still some use cases where the same uids are desirable nevertheless, for example to be able to reuse a file system with data on it on different installations. cu Ludwig
On 04/16/2015 08:05 PM, Ludwig Nussel wrote:
Jan Engelhardt schrieb:
On Thursday 2015-04-16 10:40, Ludwig Nussel wrote:
Tim Serong schrieb:
On 04/14/2015 03:04 PM, Tim Serong wrote:
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html).
AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
There is no defined process to register uids there and I don't think we should start doing that again.
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
As long as process credentials are not 64/128-bit UIDs similar to NTSIDs, services ought to cope with different numerical UIDs. NFS made the jump with idmapd, so I am sure ceph can come up with something too. Perhaps even reuse idmapd.
I agree. There are still some use cases where the same uids are desirable nevertheless, for example to be able to reuse a file system with data on it on different installations.
That's exactly why the ceph project wants a static UID - so you can easily hot swap a disk from one node in the cluster to another node. There'll be a shim to `chown -R` when necessary (e.g. when moving a disk between distros), but this is naturally slow and expensive, and they're hoping to avoid it when swapping disks between hosts running the same distro. There's a bit more discussion on this at https://fedorahosted.org/fpc/ticket/524 (request to reserve a static UID for Ceph on Fedora). Regards, Tim
On Monday 2015-04-20 04:45, Tim Serong wrote:
As long as process credentials are not 64/128-bit UIDs similar to NTSIDs, services ought to cope with different numerical UIDs. NFS made the jump with idmapd, so I am sure ceph can come up with something too. Perhaps even reuse idmapd.
I agree. There are still some use cases where the same uids are desirable nevertheless, for example to be able to reuse a file system with data on it on different installations.
That's exactly why the ceph project wants a static UID - so you can easily hot swap a disk from one node in the cluster to another node.
That seems bogus. The data that people generally share in networked fashion is user data (rather than, say, operating system data). These user files (here by definition) already have consistent UID allocation, e.g. because the participating import nodes all used a centrally-administered database. The OS files are naturally a jumble, but that is ok, because they are generally not shared. If, for whatever reason, they are to be shared, then one would have to make sure that *all* system user UIDs are also consistently allocated — which is not something that holds in most installations, however. So, giving _just ceph_ a fixed static UID is nothing but a drop in the bucket.
On 04/20/2015 04:47 PM, Jan Engelhardt wrote:
On Monday 2015-04-20 04:45, Tim Serong wrote:
As long as process credentials are not 64/128-bit UIDs similar to NTSIDs, services ought to cope with different numerical UIDs. NFS made the jump with idmapd, so I am sure ceph can come up with something too. Perhaps even reuse idmapd.
I agree. There are still some use cases where the same uids are desirable nevertheless, for example to be able to reuse a file system with data on it on different installations.
That's exactly why the ceph project wants a static UID - so you can easily hot swap a disk from one node in the cluster to another node.
That seems bogus.
The data that people generally share in networked fashion is user data (rather than, say, operating system data).
These user files (here by definition) already have consistent UID allocation, e.g. because the participating import nodes all used a centrally-administered database.
Not in this case. I'm not talking about directly user-accessible files, I'm talking about ceph OSDs. A ceph OSD is (approximately) a single disk with an xfs/btrfs filesystem on top. The ceph-osd daemon manages everything on this filesystem; objects stored by clients in a ceph cluster map back (somehow, by magic) to files on an OSD, but the client doesn't see them at this level in the stack. Rather, all files on an OSD are created and owned by the user ceph-osd runs as (presently root, but in future, the unprivileged user "ceph"). In order to be able to hotswap an OSD from one node to another, the UID for ceph thus needs to be the same across all nodes in the cluster.
The OS files are naturally a jumble, but that is ok, because they are generally not shared. If, for whatever reason, they are to be shared, then one would have to make sure that *all* system user UIDs are also consistently allocated — which is not something that holds in most installations, however. So, giving _just ceph_ a fixed static UID is nothing but a drop in the bucket.
We're not worried about OS files, just the files owned by the ceph-osd process on the OSD disks, which is to say, all the files backing all the objects stored in the ceph cluster :) Regards, Tim
On Monday 2015-04-20 09:12, Tim Serong wrote:
Not in this case. I'm not talking about directly user-accessible files, I'm talking about ceph OSDs. A ceph OSD is (approximately) a single disk with an xfs/btrfs filesystem on top. The ceph-osd daemon manages everything on this filesystem; objects stored by clients in a ceph cluster map back (somehow, by magic) to files on an OSD, but the client doesn't see them at this level in the stack. Rather, all files on an OSD are created and owned by the user ceph-osd runs as (presently root, but in future, the unprivileged user "ceph"). In order to be able to hotswap an OSD from one node to another, the UID for ceph thus needs to be the same across all nodes in the cluster.
Ah well. In case the disk is wholly owned by ceph, mount -o uid=xxx,gid=xxx will be interesting. Some filesystems support it already, and I remember there were calls to make it an fs-independent VFS feature, though I do not know the current development status.
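For a filesystem that already implements these options (vfat is the long-standing example; xfs and btrfs are not among them), the invocation would look roughly like this; the device node, mountpoint and ID values are placeholders:

```shell
# Illustration only: map every file on a wholly-owned disk to one
# uid/gid at mount time, on a filesystem that supports the uid=/gid=
# mount options. /dev/sdb1, the mountpoint and 64045 are placeholders;
# the guards skip the mount when the device or mountpoint is absent.
if [ -b /dev/sdb1 ] && [ -d /var/lib/ceph/osd-0 ]; then
    mount -o uid=64045,gid=64045 /dev/sdb1 /var/lib/ceph/osd-0
fi
```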
On 04/20/2015 05:55 PM, Jan Engelhardt wrote:
On Monday 2015-04-20 09:12, Tim Serong wrote:
Not in this case. I'm not talking about directly user-accessible files, I'm talking about ceph OSDs. A ceph OSD is (approximately) a single disk with an xfs/btrfs filesystem on top. The ceph-osd daemon manages everything on this filesystem; objects stored by clients in a ceph cluster map back (somehow, by magic) to files on an OSD, but the client doesn't see them at this level in the stack. Rather, all files on an OSD are created and owned by the user ceph-osd runs as (presently root, but in future, the unprivileged user "ceph"). In order to be able to hotswap an OSD from one node to another, the UID for ceph thus needs to be the same across all nodes in the cluster.
Ah well. In case the disk is wholly owned by ceph,
mount -o uid=xxx,gid=xxx
will be interesting. Some filesystems support it already, and I remember there were calls to make it an fs-independent VFS feature, though I do not know the current development status.
That looks like it would be ideal, except AFAICT neither xfs nor btrfs support the uid/gid mount options, and these are the only two filesystems practical for use with Ceph :-/ Regards, Tim
Jan Engelhardt schrieb:
On Monday 2015-04-20 09:12, Tim Serong wrote:
Not in this case. I'm not talking about directly user-accessible files, I'm talking about ceph OSDs. A ceph OSD is (approximately) a single disk with an xfs/btrfs filesystem on top. The ceph-osd daemon manages everything on this filesystem; objects stored by clients in a ceph cluster map back (somehow, by magic) to files on an OSD, but the client doesn't see them at this level in the stack. Rather, all files on an OSD are created and owned by the user ceph-osd runs as (presently root, but in future, the unprivileged user "ceph"). In order to be able to hotswap an OSD from one node to another, the UID for ceph thus needs to be the same across all nodes in the cluster.
Ah well. In case the disk is wholly owned by ceph,
mount -o uid=xxx,gid=xxx
will be interesting. Some filesystems support it already, and I remember there were calls to make it an fs-independent VFS feature, though I do not know the current development status.
For years I tried to get such a patch into ext[234]. There's always discussion but in the end nobody goes ahead and merges the patches. Last time Ted at least agreed that it's easier to do this in the individual filesystems: http://thread.gmane.org/gmane.linux.documentation/4945 Latest version of the patch is three years old: http://thread.gmane.org/gmane.comp.file-systems.ext4/32284 It's quite some effort to rebase and test this all the time so I can't keep up. cu Ludwig
On 04/20/2015 10:08 PM, Ludwig Nussel wrote:
Jan Engelhardt schrieb:
On Monday 2015-04-20 09:12, Tim Serong wrote:
Not in this case. I'm not talking about directly user-accessible files, I'm talking about ceph OSDs. A ceph OSD is (approximately) a single disk with an xfs/btrfs filesystem on top. The ceph-osd daemon manages everything on this filesystem; objects stored by clients in a ceph cluster map back (somehow, by magic) to files on an OSD, but the client doesn't see them at this level in the stack. Rather, all files on an OSD are created and owned by the user ceph-osd runs as (presently root, but in future, the unprivileged user "ceph"). In order to be able to hotswap an OSD from one node to another, the UID for ceph thus needs to be the same across all nodes in the cluster.
Ah well. In case the disk is wholly owned by ceph,
mount -o uid=xxx,gid=xxx
will be interesting. Some filesystems support it already, and I remember there were calls to make it an fs-independent VFS feature, though I do not know the current development status.
For years I tried to get such a patch into ext[234]. There's always discussion but in the end nobody goes ahead and merges the patches. Last time Ted at least agreed that it's easier to do this in the individual filesystems: http://thread.gmane.org/gmane.linux.documentation/4945
Latest version of the patch is three years old: http://thread.gmane.org/gmane.comp.file-systems.ext4/32284
It's quite some effort to rebase and test this all the time so I can't keep up.
Hrm. So, given that's a bust, I'm very, very tempted to just take whatever UID/GID the Debian project end up using, and use that on openSUSE and SLES too. It's apparently likely to be 64045 which is well outside any of our defined ranges. Regards, Tim
On Wednesday 2015-04-22 11:15, Tim Serong wrote:
For years I tried to get such a patch into ext[234]. There's always discussion but in the end nobody goes ahead and merges the patches. Last time Ted at least agreed that it's easier to do this in the individual filesystems: http://thread.gmane.org/gmane.linux.documentation/4945
Yeah the problem is a larger one, because the option string, after mount(8) has extracted the MS_* flags, is passed verbatim to the filesystem. (and MS_* does not have room to specify a UID)
Latest version of the patch is three years old: http://thread.gmane.org/gmane.comp.file-systems.ext4/32284
It's quite some effort to rebase and test this all the time so I can't keep up.
Hrm. So, given that's a bust, I'm very, very tempted to just take whatever UID/GID the Debian project end up using, and use that on openSUSE and SLES too. It's apparently likely to be 64045 which is well outside any of our defined ranges.
classic system range: 0--99, 65534
modern system range: 0--999, 65534, 4294967294
user range: 1000--{at least 2 million}

The exact extent of the user range has not been defined anywhere; it is usually established by practice (and more specifically, realistic practices). 64045 is _well_ within the user range. Sourceforge was one of the early large installations, for example, where one could observe UIDs above the 100000 mark simply because they have that many registered users. Similarly with the local university here, where the student range starts at UID 1 million because everything lower was potentially used in some institute already thanks to the LDAP forest. So, the safest option appears to be something like 4294000000. But that does not solve the problem, because such a number is just as good as one in the system range. In both cases you have to coordinate with distros eventually, because everyone WILL just pick a random number at some point.
Jan Engelhardt schrieb:
On Wednesday 2015-04-22 11:15, Tim Serong wrote:
For years I tried to get such a patch into ext[234]. There's always discussion but in the end nobody goes ahead and merges the patches. Last time Ted at least agreed that it's easier to do this in the individual filesystems: http://thread.gmane.org/gmane.linux.documentation/4945
Yeah the problem is a larger one, because the option string, after mount(8) has extracted the MS_* flags, is passed verbatim to the filesystem. (and MS_* does not have room to specify a UID)
Latest version of the patch is three years old: http://thread.gmane.org/gmane.comp.file-systems.ext4/32284
It's quite some effort to rebase and test this all the time so I can't keep up.
Hrm. So, given that's a bust, I'm very, very tempted to just take whatever UID/GID the Debian project end up using, and use that on openSUSE and SLES too. It's apparently likely to be 64045 which is well outside any of our defined ranges.
classic system range: 0--99, 65534
modern system range: 0--999, 65534, 4294967294
user range: 1000--{at least 2 million}
The exact extent of the user range has not been defined anywhere, it is usually established by practice (and more specifically, realistic practices). 64045 is _well_ within the user range.
Yes and no. The default range for dynamically allocated uids is defined in /etc/login.defs. What happens if you exceed that range is not written down. Debian is the only distro that I know of that reserved the window 60000-65533 for their own use: https://www.debian.org/doc/debian-policy/ch-opersys.html#s9.2.1
Sourceforge was one of the early large installations for example where one could observe UIDs above the 100000 mark simply because they have that many registered users. Similarly with the local university here where the student range starts at UID 1 million because everything lower was potentially used in some institute already thanks to the LDAP forest.
So, the safest option appears to be something like 4294000000. But that does not solve the problem, because such a number is just as good as one in the system range. In both cases you have to coordinate with distros eventually because everyone WILL just pick a random number at some point.
There's an attempt here but it needs a driving force: https://github.com/LinuxStandardBase/lsb/blob/master/documents/wip/userNamin... cu Ludwig
On 04/22/2015 08:04 PM, Ludwig Nussel wrote:
Jan Engelhardt schrieb:
On Wednesday 2015-04-22 11:15, Tim Serong wrote:
For years I tried to get such a patch into ext[234]. There's always discussion but in the end nobody goes ahead and merges the patches. Last time Ted at least agreed that it's easier to do this in the individual filesystems: http://thread.gmane.org/gmane.linux.documentation/4945
Yeah the problem is a larger one, because the option string, after mount(8) has extracted the MS_* flags, is passed verbatim to the filesystem. (and MS_* does not have room to specify a UID)
Latest version of the patch is three years old: http://thread.gmane.org/gmane.comp.file-systems.ext4/32284
It's quite some effort to rebase and test this all the time so I can't keep up.
Hrm. So, given that's a bust, I'm very, very tempted to just take whatever UID/GID the Debian project end up using, and use that on openSUSE and SLES too. It's apparently likely to be 64045 which is well outside any of our defined ranges.
classic system range: 0--99, 65534
modern system range: 0--999, 65534, 4294967294
user range: 1000--{at least 2 million}
The exact extent of the user range has not been defined anywhere, it is usually established by practice (and more specifically, realistic practices). 64045 is _well_ within the user range.
Yes and no. The default range for dynamically allocated uids is defined in /etc/login.defs. What happens if you exceed that range is not written down. Debian is the only distro that I know of that reserved the window 60000-65533 for their own use: https://www.debian.org/doc/debian-policy/ch-opersys.html#s9.2.1
Yeah, that's what I was getting at -- our UID_MAX in /etc/login.defs is 60000. I'm aware that unsigned integers are bigger now than they once were ;) but if there's a hole we can sensibly make use of between 60K-65K, I'm all for it, especially if it works cross-distro...
Sourceforge was one of the early large installations for example where one could observe UIDs above the 100000 mark simply because they have that many registered users. Similarly with the local university here where the student range starts at UID 1 million because everything lower was potentially used in some institute already thanks to the LDAP forest.
So, the safest option appears to be something like 4294000000. But that does not solve the problem, because such a number is just as good as one in the system range. In both cases you have to coordinate with distros eventually because everyone WILL just pick a random number at some point.
There's an attempt here but it needs a driving force: https://github.com/LinuxStandardBase/lsb/blob/master/documents/wip/userNamin...
cu Ludwig
On Wednesday 2015-04-22 12:54, Tim Serong wrote:
classic system range: 0--99, 65534
modern system range: 0--999, 65534, 4294967294
user range: 1000--{at least 2 million}
Yes and no. The default range for dynamically allocated uids is defined in /etc/login.defs.
Which won't help you, because the LDAP tree(*) may be administered from a place where there is a different login.defs. Or the tool ignores login.defs outright. Or there is no login.defs to start with. LDAPAdmin.exe, web-based IDMs, you name it.

(*) Or any kind of user database that is made available to multiple systems.
On 04/22/2015 09:07 PM, Jan Engelhardt wrote:
On Wednesday 2015-04-22 12:54, Tim Serong wrote:
classic system range: 0--99, 65534
modern system range: 0--999, 65534, 4294967294
user range: 1000--{at least 2 million}
Yes and no. The default range for dynamically allocated uids is defined in /etc/login.defs.
Which won't help you, because the LDAP tree(*) may be administered from a place where there is a different login.defs. Or the tool ignores login.defs outright. Or there is no login.defs to start with. LDAPAdmin.exe, web-based IDMs, you name it.
(*) Or any kind of user database that is made available to multiple systems.
OK, so what are my options here, given that the ceph project still needs a fixed UID/GID for the ceph user and group? Some ideas:

1) We (openSUSE) can follow Debian's reserved 60-64K range (which is also the proposed LSB solution to this problem, as Ludwig mentioned before in https://github.com/LinuxStandardBase/lsb/blob/master/documents/wip/userNamin...), but because we've never reserved that range before, making it an official thing carries the risks you mention above with different login.defs, etc.

2) I can ignore the above risk and just copy Debian anyway for our ceph packages, without making this an official process. The rpm %pre script would need to include a guard invocation of `/usr/bin/id $WHATEVER_ID` to make sure it wasn't already in use, then spit an error message at the user telling them to manually allocate some other UID/GID in this case.

3) I can copy what Fedora does, assuming they eventually allocate a static UID/GID for Ceph, but that will presumably be somewhere between 100-200, which conflicts with our dynamically allocated system account range of 100-499 (i.e. it may or may not conflict on any given host with existing system users). This can be somewhat mitigated with the guard mentioned in 2) above, but TBH my gut feeling is that on random server systems we're more likely to hit a conflict in this range than in the 60-64K range (although my gut is not infallible...)

4) I can hope that there is still one UID/GID free in the range 0-99, then unilaterally decide to use it ;) assuming I can find some canonical source for what's already assigned in this range on SLES and openSUSE :(

5) I can pick a random UID between 500-999, which again is outside our usual defined ranges.

Any other ideas? :) Regards, Tim
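The guard mentioned in option 2) could be sketched roughly like this; the UID value follows Debian's likely 64045 but is still only a placeholder, as are the account name and messages:

```shell
# Hypothetical %pre guard: bail out if the fixed UID is already taken
# by some other account. 64045 and "ceph" are placeholder values.
CEPH_UID=64045
owner=$(getent passwd "$CEPH_UID" | cut -d: -f1)
if [ -n "$owner" ] && [ "$owner" != "ceph" ]; then
    echo "UID $CEPH_UID is already allocated to '$owner';" >&2
    echo "create the ceph user/group manually with a free UID first." >&2
    exit 1
fi
```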
On Tuesday 2015-04-28 08:16, Tim Serong wrote:
OK, so what are my options here, given that the ceph project still needs a fixed UID/GID for the ceph user and group?
1) We (openSUSE) can follow Debian's reserved 60-64K range (which is also the proposed LSB solution to this problem, as Ludwig mentioned before in https://github.com/LinuxStandardBase/lsb/blob/master/documents/wip/userNamin...),
Let LSB say that it will allocate numbers from 4294967293 backwards. [4294967294 is already given to "nfsnobody"] Define no explicit range size. (Like how heap and stack grow towards each other for x86 processes.) Drop the 60-64K range from the spec.
Jan Engelhardt schrieb:
On Tuesday 2015-04-28 08:16, Tim Serong wrote:
OK, so what are my options here, given that the ceph project still needs a fixed UID/GID for the ceph user and group?
1) We (openSUSE) can follow Debian's reserved 60-64K range (which is also the proposed LSB solution to this problem, as Ludwig mentioned before in https://github.com/LinuxStandardBase/lsb/blob/master/documents/wip/userNamin...),
Let LSB say that it will allocate numbers from 4294967293 backwards. [4294967294 is already given to "nfsnobody"] Define no explicit range size. (Like how heap and stack grow towards each other for x86 processes.) Drop the 60-64K range from the spec.
Might be a workable solution indeed. cu Ludwig
Tim Serong schrieb:
[...] OK, so what are my options here, given that the ceph project still needs a fixed UID/GID for the ceph user and group?
Some ideas:
[...] Any other ideas? :)
6) Don't define a fixed uid at the distro level but leave that up to the deployment. That way it would be possible to recommend using the Debian-allocated uid without actually enforcing it. The admin just has to make sure to create the user before installing the ceph package. cu Ludwig
On 04/28/2015 09:52 PM, Ludwig Nussel wrote:
Tim Serong schrieb:
[...] OK, so what are my options here, given that the ceph project still needs a fixed UID/GID for the ceph user and group?
Some ideas:
[...] Any other ideas? :)
6) Don't define a fixed uid at the distro level but leave that up to the deployment. That way it would be possible to recommend using the Debian-allocated uid without actually enforcing it. The admin just has to make sure to create the user before installing the ceph package.
I'd rather avoid forcing the admin to create the user manually on every host prior to package installation. Regards, Tim
On 14.04.2015 12:31, Tim Serong wrote:
On 04/14/2015 03:04 PM, Tim Serong wrote:
Hi All,
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html). AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
Does anyone know if there are still any spare UIDs in the range 0-99? Someone told me there weren't but I thought I'd better raise this here, just in case there are. And if there are any available, how do I reserve one?
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
I'm not sure if that is supported by openSUSE, but there is also systemd-sysusers (see man sysusers.d or http://0pointer.net/blog/projects/stateless.html ) to create system users/groups. Regards, Tom
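For reference, such a sysusers.d declaration is a one-line file; the file name and description below are illustrative, and the "-" in the ID column leaves the uid to be allocated dynamically (a fixed, cluster-wide allocation is exactly what this does not give you, as noted in the replies):

```
# /usr/lib/sysusers.d/ceph.conf (hypothetical): declare a nologin
# system user "ceph"; "-" means the uid is allocated dynamically.
# Type  Name  ID  GECOS                  Home
u       ceph  -   "Ceph storage daemon"  /var/lib/ceph
```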
On Thursday 2015-04-16 13:14, Thomas Bechtold wrote:
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
I'm not sure if that is supported by openSUSE, but there is also systemd-sysusers (see man sysusers.d or http://0pointer.net/blog/projects/stateless.html ) to create system users/groups.
systemd-sysusers is just an improved version of (%pre+useradd). It still cannot guarantee equal UIDs across all (or even just a handful of) systems.
On Thu, Apr 16, 2015 at 8:14 AM, Thomas Bechtold <tbechtold@suse.com> wrote:
On 14.04.2015 12:31, Tim Serong wrote:
On 04/14/2015 03:04 PM, Tim Serong wrote:
Hi All,
A long time ago, there was a thread regarding UID/GID registration (see http://lists.opensuse.org/opensuse-buildservice/2009-05/msg00221.html). AFAICT the question "how can one register a static UID/GID in the 0-99 range" wasn't answered.
Does anyone know if there are still any spare UIDs in the range 0-99? Someone told me there weren't but I thought I'd better raise this here, just in case there are. And if there are any available, how do I reserve one?
Context: the ceph project wants to reserve a UID/GID - being a distributed storage cluster, it's best if the UID/GIDs remain static across the whole cluster, and the most straightforward way to do this is to hardcode a UID/GID in a useradd/groupadd invocation in the rpm %pre script.
I'm not sure if that is supported by openSUSE, but there is also systemd-sysusers (see man sysusers.d or http://0pointer.net/blog/projects/stateless.html ) to create system users/groups.
No, systemd-sysusers replaces ugly rpm scriptlets that create system users with something sane, to make packaging and distribution assembly easier. It cannot assign a fixed UID or GID, and it is also not suitable for creating any type of user or group other than nologin system users.
participants (5)
- Cristian Rodríguez
- Jan Engelhardt
- Ludwig Nussel
- Thomas Bechtold
- Tim Serong