Hello,

On Thursday, 29 June 2017, 12:15:35 CEST, Theo Chatzimichos wrote:
On Tue, Jun 20, 2017 at 08:56:38PM +0200, Christian Boltz wrote:
On Tuesday, 20 June 2017, 15:39:43 CEST, Lars Vogdt wrote:
On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
- optional: a "master" mysql user which can be used to create all the
wiki-* users and databases
From my point of view as a former DB admin: no.
The reason for this idea was that Darix would need to create only one user, and I could then create users for each wiki myself (or maybe even let salt create them). This obviously has pros and cons ;-)
The MySQL cluster is currently managed manually. As soon as we have it saltified, we'll do that trick with the user so that the DBs can be created by salt.
My idea was that this could be done with salt from the wiki VM, and it would only have permissions for managing wiki-* users and databases. (Obviously this needs support for handling encrypted passwords in salt, unless we want to have cleartext passwords visible to all admins.)
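For illustration, such a restricted account could be expressed with plain MySQL grants (a sketch only; the account name, host, and the `wiki-%` naming pattern are assumptions, not the actual setup):

```sql
-- Hypothetical delegated admin, limited to the wiki-* databases.
CREATE USER 'wikiadmin'@'wiki-vm.example' IDENTIFIED BY '...';

-- Full control over every database matching wiki-%, and permission to
-- pass those (and only those) privileges on to per-wiki users.
GRANT ALL PRIVILEGES ON `wiki-%`.* TO 'wikiadmin'@'wiki-vm.example'
    WITH GRANT OPTION;

-- Note: CREATE USER is a global privilege in MySQL; it cannot be
-- limited to a name pattern, which is one argument against the idea.
GRANT CREATE USER ON *.* TO 'wikiadmin'@'wiki-vm.example';
```

Salt's mysql_user / mysql_database states could then use such an account to create the per-wiki users and databases.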
Speaking about this - it probably makes sense to set up a temporary wikimove.o.o domain which I can use for testing whatever wiki I'm moving on that day (and before switching over the official domain)
I can do it, where do you want it to point to?
See my other mail.
- a backup concept, even if we hopefully never need it ;-) I'd propose
- daily database dumps
That should already be done on the cluster. The question might be whether you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki database user.
Can someone (Darix?) confirm the "should be done already", please? ;-) I'll check and report back. IMHO we need more frequent dumps than daily ones...
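If more frequent dumps are wanted, a simple cron job on the wiki VM could complement the cluster-side backups. A sketch in /etc/cron.d style - the defaults file, database names, and paths are all assumptions:

```crontab
# Hypothetical cron entry: dump the wiki databases every six hours.
# Credentials would live in /etc/wiki-backup.cnf, readable by root only.
# (% must be escaped in crontab lines, hence the backslashes.)
0 */6 * * *  root  mysqldump --defaults-extra-file=/etc/wiki-backup.cnf --databases wiki_en wiki_de | gzip > /var/backup/wiki-$(date +\%Y\%m\%d-\%H00).sql.gz
```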
The damage of losing one day of wiki edits is quite small IMHO. Being close to a release is an exception, but even then there are maybe 20 or 30 edits per day. In "normal" times, it's more like 5 edits per day in the English wiki, or even no edits for a day or two. But sure, doing backups more often never hurts ;-)
- daily backups of /srv/www/ which basically contains the uploaded
files (+ some small config files etc.). Maybe rsnapshot would be
a good solution since 99% of the uploaded files don't change.
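The space savings come from rsnapshot hard-linking files that didn't change between snapshots. The effect can be demonstrated with plain coreutils (an illustration of the technique only, not the actual backup job; all paths are throwaway temp dirs):

```shell
#!/bin/sh
# Demonstrate the hard-link rotation rsnapshot uses: files that did not
# change between snapshots share one inode, so they use no extra space.
set -e
dir=$(mktemp -d)
mkdir "$dir/daily.0"
echo "uploaded image data" > "$dir/daily.0/logo.png"

# "Rotate" the snapshot: cp -al copies the tree as hard links, not data.
cp -al "$dir/daily.0" "$dir/daily.1"

same=no
if [ "$(stat -c %i "$dir/daily.0/logo.png")" = "$(stat -c %i "$dir/daily.1/logo.png")" ]; then
    same=yes
    echo "snapshots share the same inode - no extra disk space used"
fi
rm -rf "$dir"
```

With 99% of the uploaded files unchanged from day to day, each extra snapshot costs little more than the directory entries.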
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)?
The backups (both database and files) are only needed for disaster recovery, so knowing that they exist sounds good enough to me ;-) and I'll just assume that you are all professional enough to know (and/or test ;-) that these backups actually work ;-)
Daily snapshots of all the disks are enabled already on the filer.
Disk snapshots sound like wasting space to me - but since it isn't my disk space... ;-) (yeah, I understand that disk snapshots are an easy way to do backups)
The disk snapshots are a NetApp feature. But I agree with Lars here, the extra disk sounds like a good solution. Just make sure that your backup script mounts the disk before the backup and unmounts it afterwards, to avoid accidentally deleting the backups.
I could easily do backups with rsnapshot, and because the uploaded files rarely change, the disk space usage would be quite small. The question is more whether we really need rsnapshot backups or whether the disk snapshots are good enough.

But: if we decide to do rsnapshot backups, I'd prefer to set up a separate VM [1] that only runs rsnapshot and fetches the data from the wiki VM (rsync/rsnapshot over SSH). The big advantage of this solution is that nothing needs write access on the backup VM, so even if terrible things happen on the other VMs, they can't destroy the backup. It would also make it extremely unlikely that the backup gets accidentally deleted, because normally you don't log in on the backup VM. rsnapshot will obviously need full read access (= root), but I have a nice solution to restrict this to read-only so that the backup VM can't do any damage to the VMs it has to back up :-)
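One common way to give a backup host root-level but strictly read-only rsync access - possibly what is meant here, though the mail doesn't spell out the actual solution - is an SSH forced command in root's authorized_keys, e.g. using the rrsync helper script that ships with rsync. A sketch (the rrsync path and the key are placeholders):

```
# Sketch of /root/.ssh/authorized_keys on the wiki VM.
# "rrsync -ro /" only permits read-only rsync transfers below /,
# and the extra options disable everything else this key could do.
command="/usr/bin/rrsync -ro /",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA...placeholder... backup@backup-vm
```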
Oh, another question I forgot to ask yesterday: What about the SSH login? Is everything done via FreeIPA already, or do I need to include my SSH key somewhere?
It is not done via FreeIPA; you'll need to add your key manually.
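For what it's worth, "manually" could itself be salt-managed via salt's ssh_auth state instead of writing the file by hand. A sketch - the state name, key, and comment are placeholders:

```yaml
# Hypothetical salt state: ensure an admin's public key is authorized
# for root on the wiki VM.
cboltz-ssh-key:
  ssh_auth.present:
    - user: root
    - name: AAAAC3NzaC1lZDI1NTE5AAAAI...placeholder...
    - enc: ssh-ed25519
    - comment: cboltz@wiki
```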
Define "manually", please ;-) Salt sets SSH to key-only logins, so how should I log in to put my key on it? ;-)

The solution I use for my local test VMs is to create ~/.ssh/ and to put my SSH key there with salt. This works perfectly (and I'll happily make it more flexible [2] and push it to gitlab if you want), but somehow it defeats having FreeIPA as a central instance to handle permissions ;-)

Regards,
Christian Boltz

[1] I already have a nice idea for the VM name ;-)
[2] What I have now can only handle my own SSH key ;-)

--
My cat also told the mouse: "Don't worry, I won't do anything to you!" And the cat didn't say a word about eating it. [Rolf-Hubert Pobloth in suse-linux]

--
To unsubscribe, e-mail: heroes+unsubscribe@opensuse.org
To contact the owner, e-mail: heroes+owner@opensuse.org