On Tue, Jun 20, 2017 at 08:56:38PM +0200, Christian Boltz wrote:
Am Dienstag, 20. Juni 2017, 15:39:43 CEST schrieb Lars Vogdt:
On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
- a mysql database for each wiki on the galera cluster (ideally with
names like wiki-en and usernames == database name)
That brings me to the question: who is currently maintaining the DB cluster as master admin?
AFAIK Darix.
I got access to it as well recently
- optional: a "master" mysql user which can be used to create all the
wiki-* users and databases
From my point of view as former DB-Admin: no.
The reason for this idea was that Darix would need to create only one user, and I could then create users for each wiki myself (or maybe even let salt create them). This obviously has pros and cons ;-)
The mysql cluster is currently managed manually. As soon as we have it saltified, we'll do that trick with the master user so that the DBs can be created by salt.
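The "one master user creates all wiki-* users and databases" idea could look roughly like the sketch below. The host, charset, and password handling are assumptions, not our actual setup:

```shell
#!/bin/sh
# Hypothetical helper generating the SQL for one wiki database plus a matching
# user (database name == username, as proposed above). Host, charset and
# password handling are assumptions, not our actual setup.
# NB: "IF NOT EXISTS" on CREATE USER needs MariaDB >= 10.1 / MySQL >= 5.7.

gen_wiki_db_sql() {
    name="$1"   # e.g. wiki-en
    pass="$2"
    cat <<EOF
CREATE DATABASE IF NOT EXISTS \`$name\` CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS '$name'@'localhost' IDENTIFIED BY '$pass';
GRANT ALL PRIVILEGES ON \`$name\`.* TO '$name'@'localhost';
FLUSH PRIVILEGES;
EOF
}

# A single "master" user with CREATE and GRANT rights could then run:
#   gen_wiki_db_sql wiki-en "$WIKI_EN_PASSWORD" | mysql
```

The same function could later be fed from salt pillar data, one call per wiki.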
- the latest wiki database and files from Provo. For the real (non-test) move, we also need to set $wgReadOnly in Provo to make the current wiki read-only while we move it.
Something for mmaher or rbrown?
Right.
Asking Micah is also an option if they are too busy, but for setting $wgReadOnly this probably means a round through the approval meeting.
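For reference, the read-only step is just an append to the wiki's config; the LocalSettings.php path below is an assumption about the Provo layout. $wgReadOnly takes the message MediaWiki shows on edit attempts:

```shell
#!/bin/sh
# Sketch of setting $wgReadOnly before the move. The LocalSettings.php
# path is an assumption, not the actual Provo layout.

make_wiki_readonly() {
    settings="$1"   # e.g. /srv/www/wiki-en/LocalSettings.php (assumed path)
    cat >> "$settings" <<'EOF'

# Temporarily read-only during the server move
$wgReadOnly = 'This wiki is read-only while it is being moved to a new server.';
EOF
}

# Usage (on the Provo host):
#   make_wiki_readonly /srv/www/wiki-en/LocalSettings.php
```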
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
It's not only login.o.o - haproxy needs adjustments, too.
OK, good to know.
Speaking about this - it probably makes sense to set up a temporary wikimove.o.o domain which I can use for testing whatever wiki I'm moving on a given day (and before switching over the official domain).
I can do it, where do you want it to point to?
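On the haproxy side, a temporary domain like that would boil down to a Host-header match in the frontend; the backend name below is purely an assumption:

```
# Hypothetical haproxy frontend rule routing the temporary test domain to
# whichever backend currently holds the wiki being moved.
acl is_wikimove  hdr(host) -i wikimove.opensuse.org
use_backend wiki_move_target  if is_wikimove
```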
- a backup concept, even if we hopefully never need it ;-) I'd propose
- daily database dumps
should be done already on the cluster. The question might be whether you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki-database user.
Can someone (Darix?) confirm the "should be done already", please? ;-)
I'll check and report back. IMHO we need more frequent dumps than daily ones...
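A per-machine dump with the wiki-database user's credentials could be as small as this; DB name, backup directory, and credential handling (a ~/.my.cnf, so no password shows up in `ps`) are assumptions:

```shell
#!/bin/sh
# Hypothetical daily dump helper for one wiki database. DB name, backup
# directory and credential handling are assumptions, not our actual setup;
# credentials are expected in ~/.my.cnf of the wiki-en DB user.

dump_wiki_db() {
    db="$1"
    backupdir="$2"
    dumpfile="$backupdir/$db-$(date +%F).sql.gz"
    # --single-transaction gives a consistent InnoDB snapshot without
    # locking the wiki's tables.
    mysqldump --single-transaction "$db" | gzip > "$dumpfile" && echo "$dumpfile"
}

# Example cron entry (for daily dumps; run more often if we decide we need to):
#   15 3 * * *  /usr/local/bin/dump_wiki_db wiki-en /backup/mysql
```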
- daily backups of /srv/www/ which basically contains the uploaded
files (+ some small config files etc.). Maybe rsnapshot would be
a good solution since 99% of the uploaded files don't change.
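A minimal rsnapshot configuration for that would be along these lines; snapshot root and retention counts are assumptions:

```
# Sketch of an rsnapshot.conf for /srv/www/ -- paths and retention counts
# are assumptions. NB: rsnapshot requires tabs, not spaces, between fields.
snapshot_root	/backup/rsnapshot/
retain	daily	7
retain	weekly	4
backup	/srv/www/	localhost/
```

Since rsnapshot hardlinks unchanged files between snapshots, the mostly-static uploads cost almost no extra space per day.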
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)?
The backups (both database and files) are only needed for disaster recovery, so knowing that they exist sounds good enough for me ;-) and I'll just assume that you are all professional enough to know (and/or test ;-) that these backups actually work ;-)
Daily snapshots of all the disks are enabled already on the filer.
Disk snapshots sound like a waste of space to me - but since it isn't my disk space... ;-) (yeah, I understand that disk snapshots are an easy way to do backups)
The disk snapshots are a netapp feature. But I agree with Lars here, the extra disk sounds like a good solution. Just make sure that your backup script mounts/unmounts the disk before/after the backup, to avoid deleting the backups accidentally.
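The mount / back up / unmount wrapper suggested here could be sketched like this, so the backup disk is only attached while the backup runs; device, mountpoint, and the rsnapshot call are assumptions:

```shell
#!/bin/sh
# Sketch of a backup wrapper that keeps the backup disk unmounted except
# while the backup runs; device, mountpoint and the rsnapshot call are
# assumptions, not our actual setup.

run_backup() {
    dev="$1"   # e.g. /dev/vdb1 (the extra virtual disk on the VM)
    mnt="$2"   # e.g. /backup
    mount "$dev" "$mnt" || return 1
    rsnapshot daily
    status=$?
    # Unmount even when the backup failed, so the disk never stays attached.
    umount "$mnt"
    return $status
}

# Example cron entry:
#   30 2 * * *  /usr/local/bin/wiki-backup /dev/vdb1 /backup
```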
Oh, another question I forgot to ask yesterday: What about the SSH login? Is everything done via FreeIPA already, or do I need to include my SSH key somewhere?
It is not done via FreeIPA; you'll need to add your key manually.

--
Theo Chatzimichos <tampakrap@opensuse.org> <tchatzimichos@suse.com>
System Administrator
SUSE Operations and Services Team