[heroes] wiki update/move status - and an optimistic plan
Hello,

I'd like to give a status update on the wiki update and move to Nuremberg.

TL;DR: Everything is ready from my side - when can we do it? ;-))

And now the long version: All RPMs are ready in openSUSE:infrastructure:wiki (see my mail from Saturday for details). The salt code to set up the wiki (webserver) and the elasticsearch VM has been ready since yesterday, see my merge requests (and review them ;-) https://gitlab.opensuse.org/infra/salt/merge_requests #16, #17 and #18

I played quite a bit with some VMs to test everything, and enjoyed destroying the VMs and letting salt re-create them in no time :-) (fun fact: re-importing the database dump needs more time than the salt highstate run to set up the VM)

So, what do we need to really do the wiki update and migration?
- review of the salt code
- review of the mediawiki and elasticsearch packages
- 42.3 JeOS images [1]
- a new VM for the wiki (apache etc.). Note that this VM needs to access the internet to fetch RSS feeds and files from github
- a new VM for elasticsearch, ideally only reachable from the wiki VM
- some crazy ideas how to name these VMs [2]
- a mysql database for each wiki on the galera cluster (ideally with names like wiki-en and usernames == database name)
- optional: a "master" mysql user which can be used to create all the wiki-* users and databases
- the latest wiki database and files from Provo. For the real (non-test) move, we also need to set $wgReadOnly in Provo to make the current wiki read-only while we move it.
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
- a backup concept, even if we hopefully never need it ;-) I'd propose:
  - daily database dumps
  - daily backups of /srv/www/, which basically contains the uploaded files (+ some small config files etc.). Maybe rsnapshot would be a good solution, since 99% of the uploaded files don't change.
  - no backup of the elasticsearch VM needed - if it really breaks, setting it up from scratch with salt is easy, and re-creating the search index can be done within a few hours
- and finally some DNS changes ;-)

There were some changes since setting up the current en-test.o.o, therefore I'd like to do a "final" test setup with the latest (english) wiki database and files on fresh VMs to ensure everything works. If nothing breaks for a week or so, we can start the real move with the english wiki as the first step.

So far for the status and the requirements. Now let me talk about the optimistic plan ;-) I'd like to do all this really soon[tm]. "Really soon" means: We have 36 days left until the 42.3 release, and I'd like to do the wiki update before that - ideally two weeks before the release (= in ~3 weeks), because doing such a major change on release day would be insane. I know there are quite some dependencies, but I hope everybody involved with one of the requirements will have some time to help with his/her part.

Oh, BTW: if this plan works as I hope, it would give the marketing team a good opportunity in the 42.3 release announcement: "look, our wiki is already running on 42.3" - and this will also give the Heroes very good visibility :-)

Regards,
Christian Boltz

PS: even if it might look hand-picked - this mail has a _random_ signature ;-)

[1] As I already wrote in my "42.3 JeOS" mail two weeks ago, I'd like to use Leap 42.3 from the beginning, even if it's still under development. I'll happily help to create the JeOS image if someone creates the openSUSE:infrastructure:Images:openSUSE_Leap_42.3 project in OBS and gives me permissions.
[2] do we still require Disney character names? Otherwise I'd like to run the wiki on Rieslingschorle, which would mean "Riesling" (wiki VM) and "water" [3] (elasticsearch VM) as VM names ;-)
[3] actually carbonated mineral water, but that's too long for a VM name

--
Development is still fast.
First I said it went from Ferrari to Lada, but that is not true. It went from Space Shuttle to Ferrari... slower... but still extremely fast. [Azerion in opensuse-factory]

--
To unsubscribe, e-mail: heroes+unsubscribe@opensuse.org
To contact the owner, e-mail: heroes+owner@opensuse.org
Christian Boltz wrote:
Hello,
I'd like to give a status update of the wiki update and move to Nuremberg.
TL;DR Everything is ready from my side, when can we do it? ;-))
Best summary I've read in a long time!

--
Per Jessen, Zürich (23.8°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
Hi,

back from vacation for the first day - and reading this gives me the impression that I should stay away for longer... ;-)

On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
TL;DR Everything is ready from my side, when can we do it? ;-))
Congratulations!!!
So, what do we need to really do the wiki update and migration?
- review of the salt code
I guess this is mainly something for Theo?
- review of the mediawiki and elasticsearch packages
Looks ok to me. Should those packages be merged/moved into openSUSE:infrastructure? IMHO they are not conflicting and can be handled completely independently there (without the need for another repo to maintain/watch).
- 42.3 JeOS images [1]
Fine with me (as I said: the one who does, decides)... ;-)
- a new VM for the wiki (apache etc.). Note that this VM needs to access the internet to fetch RSS feeds and files from github
Do you already know which external pages need to be reached? Should we open just ports 80 and 443? => I would love to keep the wiki machine as protected as possible, so while it is possible to NAT everything, I would rather NAT just what is needed.
- a new VM for elasticsearch, ideally only reachable from the wiki VM
In theory this is easily possible by connecting the two machines directly together. But I'm asking myself if we might need another VM if the search takes too long? Or we play with just another VLAN for this... Suggestions welcome.
- a mysql database for each wiki on the galera cluster (ideally with names like wiki-en and usernames == database name)
Which brings me to the question: who is currently maintaining the DB cluster as master admin?
- optional: a "master" mysql user which can be used to create all the wiki-* users and databases
From my point of view as former DB-Admin: no.
- the latest wiki database and files from Provo. For the real (non-test) move, we also need to set $wgReadOnly in Provo to make the current wiki read-only while we move it.
Something for mmaher or rbrown?
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
It's not only login.o.o - haproxy needs adaptations, too.
- a backup concept, even if we hopefully never need it ;-) I'd propose:
  - daily database dumps
should be done already on the cluster. The question might be if you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki-database user.
- daily backups of /srv/www/ which basically contains the uploaded files (+ some small config files etc.). Maybe rsnapshot would be a good solution since 99% of the uploaded files don't change.
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)? Daily snapshots of all the disks are enabled already on the filer.
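A daily on-VM dump using the wiki database user's own credentials, as Lars suggests, could look like the sketch below. The credentials file path, the list of wiki names, and the retention period are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical daily dump script, run from cron, e.g.:
#   0 3 * * * root /usr/local/bin/wiki-db-dump.sh
# /etc/wiki-backup.cnf is an assumed file holding the wiki DB
# user's credentials ([client] user=... password=...).
set -eu

DEST=/var/backup/db
mkdir -p "$DEST"

for DB in wiki-en wiki-de; do   # extend per wiki; names are assumed
    mysqldump --defaults-extra-file=/etc/wiki-backup.cnf \
        --single-transaction "$DB" \
        | gzip > "$DEST/$DB-$(date +%F).sql.gz"
done

# keep two weeks of dumps (retention is an arbitrary choice here)
find "$DEST" -name '*.sql.gz' -mtime +14 -delete
```

The `--single-transaction` flag gives a consistent dump of InnoDB tables without locking the wiki for writers.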
"Really soon" means: We have 36 days left until the 42.3 release, and I'd like to do the wiki update before that - ideally two weeks before the release (= in ~3 weeks) because doing such a major change on release day would be insane.
I know there are quite some dependencies, but I hope everybody involved with one of the requirements will have some time to help with his/her part.
BTW: Does the wiki also send emails? Maybe you can set up a coordination meeting to get the main people behind this project together and start on the organizational stuff?
[1] As I already wrote in my "42.3 JeOS" mail two weeks ago, I'd like to use Leap 42.3 from the beginning, even if it's still under development. I'll happily help to create the JeOS image if someone creates the openSUSE:infrastructure:Images:openSUSE_Leap_42.3 project in OBS and gives me permissions.
done. Just play along on openSUSE:infrastructure:Images:openSUSE_Leap_42.3

With kind regards,
Lars
Hello,

on Tuesday, 20 June 2017, 15:39:43 CEST, Lars Vogdt wrote:
back from vacation for the first day - and reading this gives me the impression that I should stay away for longer... ;-)
;-)
On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
TL;DR Everything is ready from my side, when can we do it? ;-))
Congratulations!!!
So, what do we need to really do the wiki update and migration?
- review of the salt code
I guess this is mainly something for Theo?
I'd say everybody can/should review it, even if Theo is probably the one who will finally merge it when he's back from vacation.
- review of the mediawiki and elasticsearch packages
Looks ok to me. Should those packages be merged/moved into openSUSE:infrastructure? IMHO they are not conflicting and can be handled completely independent even there (without the need to have another repo to maintain/watch).
There are two reasons why I asked for a separate repo:
- these ~20 packages are only required for the wiki
- MediaWiki 1.27 is an LTS release, but it requires an old (unfortunately EOL) elasticsearch version [1]. The separate repo helps to ensure that nobody uses that version accidentally.
- 42.3 JeOS images [1]
Fine with me (as I said: the one who does, decides)... ;-)
The images just finished rebuilding (see below). I didn't test them yet, but I'm quite sure they'll work ;-)
- a new VM for the wiki (apache etc.). Note that this VM needs to access the internet to fetch RSS feeds and files from github
Do you already know which external pages need to be reached? Should we open just ports 80 and 443? => I would love to keep the wiki machine as protected as possible, so while it is possible to NAT everything, I would rather NAT just what is needed.
RSS feeds can be included from everywhere, so opening ports 80 and 443 is probably the best option. In theory we could maintain a whitelist and apply stricter firewall rules, but a) I'm not sure if it's worth the effort, and b) it's annoying for someone who wants to add a not-yet-whitelisted feed. Including files from github will obviously only fetch files from github.
- a new VM for elasticsearch, ideally only reachable from the wiki VM
In theory this is easily possible by connecting the two machines directly together. But I'm asking myself if we might need another VM if the search takes too long?
If we hit performance issues, I'd expect them in the wiki VM, not in the elasticsearch VM (hint: most wiki requests don't need the search), so I seriously doubt we'll need another elasticsearch VM.
Or we play with just another VLAN for this... Suggestions welcome.
I know you love all your fancy toys ;-) I'm more a fan of keeping things simple and boring, and would use SuSEfirewall on the elasticsearch VM (as soon as I know the IP of the wiki VM). We'll need to allow incoming connections to elasticsearch and ssh. (Actually everything is salted and I never had to log in to my elasticsearch test VM ;-) The salt-minion only makes outgoing connections AFAIK, so it shouldn't be necessary to have a firewall rule for it.
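The "simple and boring" rule set on the elasticsearch VM could be sketched with plain iptables (SuSEfirewall2 would generate equivalent rules from its config). The wiki VM's IP is a placeholder, and the rule set is deliberately simplified:

```shell
#!/bin/sh
# Simplified sketch: allow only the wiki VM to reach elasticsearch,
# plus ssh for admins; drop everything else incoming.
WIKI_IP=192.0.2.10    # placeholder documentation address, not the real VM

iptables -A INPUT -i lo -j ACCEPT                                  # loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies
iptables -A INPUT -p tcp -s "$WIKI_IP" --dport 9200 -j ACCEPT      # elasticsearch (default HTTP port)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # ssh
iptables -P INPUT DROP                                             # default deny
```

Since the salt-minion only opens outgoing connections to the master, no incoming rule is needed for it, as noted above.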
- a mysql database for each wiki on the galera cluster (ideally with
names like wiki-en and usernames == database name)
Which brings me to the question: who is currently maintaining the DB cluster as master admin?
AFAIK Darix.
- optional: a "master" mysql user which can be used to create all the
wiki-* users and databases
From my point of view as former DB-Admin: no.
The reason for this idea was that Darix would need to create only one user, and I could then create users for each wiki myself (or maybe even let salt create them). This obviously has pros and cons ;-)
- the latest wiki database and files from Provo. For the real (non-test) move, we also need to set $wgReadOnly in Provo to make the current wiki read-only while we move it.
Something for mmaher or rbrown?
Right. Asking Micah is also an option if they are too busy, but for setting $wgReadOnly this probably means a round through the approval meeting.
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
It's not only login.o.o - haproxy needs adaptations, too.
OK, good to know. Speaking about this - it probably makes sense to set up a temporary wikimove.o.o domain which I can use for testing whatever wiki I'm moving on that day (and before switching over the official domain).
- a backup concept, even if we hopefully never need it ;-) I'd propose
- daily database dumps
should be done already on the cluster. The question might be if you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki-database user.
Can someone (Darix?) confirm the "should be done already", please? ;-)
- daily backups of /srv/www/ which basically contains the uploaded
files (+ some small config files etc.). Maybe rsnapshot would be
a good solution since 99% of the uploaded files don't change.
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)?
The backups (both database and files) are only needed for disaster recovery, so knowing that they exist sounds good enough for me ;-) and I'll just assume that you all are professional enough to know (and/or test ;-) that these backups actually work ;-)
Daily snapshots of all the disks are enabled already on the filer.
Disk snapshots sound like wasting space to me - but since it isn't my disk space... ;-) (yeah, I understand that disk snapshots are an easy way to do backups)
"Really soon" means: We have 36 days left until the 42.3 release, and I'd like to do the wiki update before that - ideally two weeks before the release (= in ~3 weeks) because doing such a major change on release day would be insane.
I know there are quite some dependencies, but I hope everybody involved with one of the requirements will have some time to help with his/her part.
BTW: Does the wiki also send emails?
Good point. Yes, it does - I should have mentioned that ;-) IIRC I've seen something about a mail gateway, but I don't remember the details and can't find it in the admin wiki on progress. Any pointers?
Maybe you can set up a coordination meeting to get the main people behind this project together and start on the organizational stuff?
There's nothing wrong with a meeting, but I'm not sure if we need it ;-) If everybody just does his/her part, that's fine ;-) and for things where timing is important (everything between making a wiki read-only and switching over the DNS entry) I can ping the involved people on IRC.

Oh, another question I forgot to ask yesterday: What about the SSH login? Is everything done via FreeIPA already, or do I need to include my SSH key somewhere?
[1] As I already wrote in my "42.3 JeOS" mail two weeks ago, I'd like to use Leap 42.3 from the beginning, even if it's still under
development. I'll happily help to create the JeOS image if someone creates the openSUSE:infrastructure:Images:openSUSE_Leap_42.3 project in OBS
and gives me permissions.
done. Just play along on openSUSE:infrastructure:Images:openSUSE_Leap_42.3
You already copied my 42.3 JeOS package and the project settings, so this is/was mostly a boring task ;-)

However, by copying my JeOS package, you also copied the changes I did for local testing [2] - and I'm quite sure we won't use *.cboltz.de as minion IDs in the openSUSE infrastructure ;-) To fix this, I reverted root.tar.xz to the 42.2 tarball. It doesn't include anything that is version-dependent, so it shouldn't need any changes.

Regards,
Christian Boltz

[1] upstream bugreport: https://phabricator.wikimedia.org/T146636
[2] yes, I have a custom JeOS for local testing to make destroying and re-creating VMs even more boring ;-)

--
I've been doing this 10.1 test work just like a real user: In other words I never read any release notes or documentation :-) [tomhorsley(at)adelphia.net in opensuse-factory]
Hello,

Christian Boltz wrote on Tuesday, 20 June 2017 at 20:56:

On Tuesday, 20 June 2017, 15:39:43 CEST, Lars Vogdt wrote:
- a new VM for elasticsearch, ideally only reachable from the wiki VM
In theory this is easily possible by connecting the two machines directly together. But I'm asking myself if we might need another VM if the search takes too long?
If we hit performance issues, I'd expect them in the wiki VM, not in the elasticsearch VM (hint: most wiki requests don't need the search), so I seriously doubt we'll need another elasticsearch VM.

I have experience with JBoss and elasticsearch on one VM, and I can say: that creates a lot of load! We tried that, but the search can use the same memory as the application if somebody (or several people) is using the search. So we always used a separate VM for the search, and after that everything worked fine. :-) You can use one elasticsearch for different application servers, too. In the end there isn't a big difference for the elasticsearch server; you see the biggest difference (with or without elasticsearch) on the application server.
Or we play with just another VLAN for this... Suggestions welcome.
I know you love all your fancy toys ;-) I'm more a fan of keeping things simple and boring, and would use SuSEfirewall on the elasticsearch VM (as soon as I know the IP of the wiki VM). We'll need to allow incoming connections to elasticsearch and ssh. (Actually everything is salted and I never had to log in to my elasticsearch test VM ;-) The salt-minion only makes outgoing connections AFAIK, so it shouldn't be necessary to have a firewall rule for it.

+1
Best regards,
Sarah
On Tue, Jun 20, 2017 at 08:56:38PM +0200, Christian Boltz wrote:
On Tuesday, 20 June 2017, 15:39:43 CEST, Lars Vogdt wrote:
On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
- a mysql database for each wiki on the galera cluster (ideally with
names like wiki-en and usernames == database name)
Which brings me to the question: who is currently maintaining the DB cluster as master admin?
AFAIK Darix.
I got access to it as well recently.
- optional: a "master" mysql user which can be used to create all the
wiki-* users and databases
From my point of view as former DB-Admin: no.
The reason for this idea was that Darix would need to create only one user, and I could then create users for each wiki myself (or maybe even let salt create them). This obviously has pros and cons ;-)
The mysql cluster is currently manually managed. As soon as we have it saltified, then we'll do that trick with the user to have the DBs created by salt.
- the latest wiki database and files from Provo. For the real (non-test) move, we also need to set $wgReadOnly in Provo to make the current wiki read-only while we move it.
Something for mmaher or rbrown?
Right.
Asking Micah is also an option if they are too busy, but for setting $wgReadOnly this probably means a round through the approval meeting.
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
It's not only login.o.o - haproxy needs adaptations, too.
OK, good to know.
Speaking about this - it probably makes sense to setup a temporary wikimove.o.o domain which I can use for testing whatever wiki I'm moving on that day (and before switching over the official domain)
I can do it, where do you want it to point to?
- a backup concept, even if we hopefully never need it ;-) I'd propose
- daily database dumps
should be done already on the cluster. The question might be if you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki-database user.
Can someone (Darix?) confirm the "should be done already", please? ;-)
I'll check and report back. IMHO we need more frequent dumps than daily ones...
- daily backups of /srv/www/ which basically contains the uploaded
files (+ some small config files etc.). Maybe rsnapshot would be
a good solution since 99% of the uploaded files don't change.
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)?
The backups (both database and files) are only needed for disaster recovery, so knowing that they exist sounds good enough for me ;-) and I'll just assume that you all are professional enough to know (and/or test ;-) that these backups actually work ;-)
Daily snapshots of all the disks are enabled already on the filer.
Disk snapshots sound like wasting space to me - but since it isn't my disk space... ;-) (yeah, I understand that disk snapshots are an easy way to do backups)
The disk snapshots are a netapp feature. But I agree with Lars here, the extra disk sounds like a good solution. Just make sure that your backup script mounts/unmounts the disk before/after the backup, to avoid deleting the backups accidentally.
Oh, another question I forgot to ask yesterday: What about the SSH login? Is everything done via FreeIPA already, or do I need to include my SSH key somewhere?
It is not done via FreeIPA, you'll need to add your key manually.

--
Theo Chatzimichos <tampakrap@opensuse.org> <tchatzimichos@suse.com>
System Administrator
SUSE Operations and Services Team
Hello,

on Thursday, 29 June 2017, 12:15:35 CEST, Theo Chatzimichos wrote:
On Tue, Jun 20, 2017 at 08:56:38PM +0200, Christian Boltz wrote:
On Tuesday, 20 June 2017, 15:39:43 CEST, Lars Vogdt wrote:
On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
- optional: a "master" mysql user which can be used to create all the
wiki-* users and databases
From my point of view as former DB-Admin: no.
The reason for this idea was that Darix would need to create only one user, and I could then create users for each wiki myself (or maybe even let salt create them). This obviously has pros and cons ;-)
The mysql cluster is currently manually managed. As soon as we have it saltified, then we'll do that trick with the user to have the DBs created by salt.
My idea was that this could be done with salt from the wiki VM, and it would only have permissions for managing wiki-* users and databases. (Obviously this needs support for handling encrypted passwords in salt, unless we want to have cleartext passwords visible to all admins.)
Speaking about this - it probably makes sense to setup a temporary wikimove.o.o domain which I can use for testing whatever wiki I'm moving on that day (and before switching over the official domain)
I can do it, where do you want it to point to?
See my other mail.
- a backup concept, even if we hopefully never need it ;-) I'd propose
- daily database dumps
should be done already on the cluster. The question might be if you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki-database user.
Can someone (Darix?) confirm the "should be done already", please? ;-)

I'll check and report back. IMHO we need more frequent dumps than daily ones...
The damage of losing one day of wiki edits is quite small IMHO. Being close to a release is an exception, but even then there are maybe 20 or 30 edits per day. In "normal" times, it's more like 5 edits per day in the english wiki, or even no edits for a day or two. But sure, doing backups more often never hurts ;-)
- daily backups of /srv/www/ which basically contains the uploaded
files (+ some small config files etc.). Maybe rsnapshot would be
a good solution since 99% of the uploaded files don't change.
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)?
The backups (both database and files) are only needed for disaster recovery, so knowing that they exist sounds good enough for me ;-) and I'll just assume that you all are professional enough to know (and/or test ;-) that these backups actually work ;-)
Daily snapshots of all the disks are enabled already on the filer.
Disk snapshots sound like wasting space to me - but since it isn't my disk space... ;-) (yeah, I understand that disk snapshots are an easy way to do backups)
The disk snapshots are a netapp feature. But I agree with Lars here, the extra disk sounds like a good solution. Just make sure that your backup script mounts/unmounts the disk before/after the backup, to avoid deleting the backups accidentally.
I could easily do backups with rsnapshot, and because the uploaded files rarely change, the disk space usage would be quite small. The question is more if we really need rsnapshot backups or if the disk snapshots are good enough.

But: If we decide to do rsnapshot backups, I'd prefer to set up a separate VM [1] that only runs rsnapshot and fetches the data from the wiki VM (rsync/rsnapshot over SSH). The big advantage of this solution is that nothing needs write access on the backup VM, so even if terrible things happen on the other VMs, they can't destroy the backup. It would also make it extremely unlikely that the backup gets accidentally deleted, because normally you don't log in on the backup VM. rsnapshot will obviously need full read access (= root), but I have a nice solution to restrict this to read-only so that the backup VM can't do any damage to the VMs it has to back up :-)
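(Editorial aside: one common way to get such a read-only restriction is the `rrsync` helper script that ships with rsync - the wiki VM forces every connection from the backup VM's key through rrsync in read-only mode. A sketch; the key, the rrsync path and the backed-up directory are placeholders:)

```
# Sketch: /root/.ssh/authorized_keys on the wiki VM (one line).
# Forces the backup VM's key through rrsync, read-only, rooted at /srv/www,
# so the backup host can fetch data but never write or run anything else.
command="/usr/local/bin/rrsync -ro /srv/www",restrict ssh-ed25519 AAAA...placeholder... backup-vm
```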
Oh, another question I forgot to ask yesterday: What about the SSH login? Is everything done via FreeIPA already, or do I need to include my SSH key somewhere?
It is not done via FreeIPA; you'll need to add your key manually.
Define "manually", please ;-) Salt sets SSH to key-only logins, so how should I log in to put my key on it? ;-) The solution I use for my local test VMs is to create ~/.ssh/ and to put my SSH key there with salt. This works perfectly (and I'll happily make it more flexible [2] and push it to gitlab if you want), but somehow it defeats having FreeIPA as a central instance to handle permissions ;-) Regards, Christian Boltz [1] I already have a nice idea for the VM name ;-) [2] what I have now can only handle my own SSH key ;-) -- My cat also told the mouse: "Don't worry, I won't do anything to you!" And the cat didn't say a word about eating. [Rolf-Hubert Pobloth in suse-linux] -- To unsubscribe, e-mail: heroes+unsubscribe@opensuse.org To contact the owner, e-mail: heroes+owner@opensuse.org
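(Editorial aside: placing a key with salt does not need a hand-rolled file state - salt's `ssh_auth.present` state manages individual authorized keys. A sketch; the state id and key value are placeholders:)

```yaml
# Sketch: manage one authorized key for root via salt's ssh_auth state.
cboltz-root-key:
  ssh_auth.present:
    - user: root
    - name: ssh-ed25519 AAAAC3...placeholder... cboltz
```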
Hello, I'm commenting on the tasks that are addressed to me On Mon, Jun 19, 2017 at 10:43:07PM +0200, Christian Boltz wrote:
So, what do we need to really do the wiki update and migration? - review of the salt code
Done and merged
- review of the mediawiki and elasticsearch packages
Done, found only minor issues; IMHO we can proceed.
- 42.3 JeOS images [1]
Done and reviewed, thanks for it
- a new VM for the wiki (apache etc.). Note that this VM needs to access the internet to fetch RSS feeds and files from github - a new VM for elasticsearch, ideally only reachable from the wiki VM - some crazy ideas how to name these VMs [2]
So you need 2 VMs in total? I'd say file a ticket with names and resources needed for each VM, and assign it to me.
- a mysql database for each wiki on the galera cluster (ideally with names like wiki-en and usernames == database name)
TODO for me
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
TODO for me, I'll do that after we have the VMs in place
- and finally some DNS changes ;-)
Tell me exactly what DNS changes you need please
[2] do we still require disney character names? Otherwise I'd like to run the wiki on Rieslingschorle, which would mean"Riesling" (wiki VM) and "water" [3] (elasticsearch VM) as VM names ;-)
[3] actually carbonated mineral water, but that's too long for a VM name
The disney name pattern has been overruled at one of our previous team meetings. The new policy is to use whatever name, as long as it is unique and not tied to the service name. -- Theo Chatzimichos <tampakrap@opensuse.org> <tchatzimichos@suse.com> System Administrator SUSE Operations and Services Team
Hello, On Thursday, 29 June 2017 at 11:51:00 CEST, Theo Chatzimichos wrote:
I'm commenting on the tasks that are addressed to me
On Mon, Jun 19, 2017 at 10:43:07PM +0200, Christian Boltz wrote:
- a new VM for the wiki (apache etc.). Note that this VM needs to access the internet to fetch RSS feeds and files from github
- a new VM for elasticsearch, ideally only reachable from the wiki VM - some crazy ideas how to name these VMs [2]
So you need 2 VMs in total? I'd say file a ticket with names and resources needed for each VM, and assign it to me.
Done, https://progress.opensuse.org/issues/20166 Will you create the pillar/id/* files or should I submit them?
- a mysql database for each wiki on the galera cluster (ideally with names like wiki-en and usernames == database name)
TODO for me
For now, please create the databases and users:
- wiki-en-test
- wiki-en
Feel free to use long and random passwords [1], I'll rarely type them ;-) You can send me the database login details by encrypted mail or put them in a file on riesling.
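(Editorial aside: the requested databases with matching users could be created roughly like this - a sketch; the password is a placeholder, the hyphenated names need backticks/quotes, and in practice the user's host part would be restricted to the wiki VM instead of '%':)

```sql
-- Sketch: one database per wiki, username == database name.
CREATE DATABASE `wiki-en-test` CHARACTER SET utf8;
CREATE USER 'wiki-en-test'@'%' IDENTIFIED BY 'placeholder-long-random-password';
GRANT ALL PRIVILEGES ON `wiki-en-test`.* TO 'wiki-en-test'@'%';
-- repeat for wiki-en
FLUSH PRIVILEGES;
```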
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
TODO for me, I'll do that after we have the VMs in place
- and finally some DNS changes ;-)
Tell me exactly what DNS changes you need please
First, please set up wikimove.opensuse.org so that it points to the new wiki VM (riesling) via the login proxy. I'll use wikimove for whatever wiki I move (so first en-test, a few days later en, then one language wiki after the other) - switching this in the apache config is much easier and faster than setting up separate en-move, de-move etc. subdomains. I'll do a final test setup for the english wiki. When this is done, en-test needs to be switched to riesling (still via the login proxy) and I'll send a final call for testing. If nothing terrible happens, I'll migrate the english production wiki a week later. Regards, Christian Boltz [1] just FYI, the database passwords I typically use look like this: 1sU3zdEN1rTIefTM4LCGaWIIeiPvSAKZEXl1AsBM Such passwords are also very helpful to make a hacked mail account secure again - and customers learn to use secure passwords when you tell them their new password on the phone *eg* --
[...] is currently down due to a failure in the NAS system. [...] your NAS (network attached storage) Oh. I thought it stood for Networked Adrian Schröter :D [> Adrian Schröter and Jean Delvare in opensuse-buildservice]
participants (5)
- Christian Boltz
- Lars Vogdt
- Per Jessen
- Sarah-Julia Kriesch
- Theo Chatzimichos