Hello,

On Tuesday, 20 June 2017, 15:39:43 CEST, Lars Vogdt wrote:
back from vacation for the first day - and reading this gives me the impression that I should stay away for longer... ;-)
;-)
On Mon, 19 Jun 2017 22:43:07 +0200 Christian Boltz wrote:
TL;DR Everything is ready from my side, when can we do it? ;-))
Congratulations!!!
So, what do we need to really do the wiki update and migration?
- review of the salt code
I guess this is mainly something for Theo?
I'd say everybody can/should review it, even if Theo is probably the one who will finally merge it when he's back from vacation.
- review of the mediawiki and elasticsearch packages
Looks ok to me. Should those packages be merged/moved into openSUSE:infrastructure? IMHO they are not conflicting and can be handled completely independently even there (without the need to have another repo to maintain/watch).
There are two reasons why I asked for a separate repo:
- these ~20 packages are only required for the wiki
- MediaWiki 1.27 is an LTS release, but it requires an old (unfortunately EOL) elasticsearch version [1]. The separate repo helps to ensure that nobody uses that version accidentally.
- 42.3 JeOS images [1]
Fine with me (as I said: the one who does, decides)... ;-)
The images just finished rebuilding (see below). I didn't test them yet, but I'm quite sure they'll work ;-)
- a new VM for the wiki (apache etc.). Note that this VM needs to access the internet to fetch RSS feeds and files from github
Do you already know which external pages need to be reached? Should we open just Port 80 and 443? => I would love to keep the wiki machine as protected as possible, so while it is possible to NAT everything, I would more go for NAT just what is needed..
RSS feeds can be included from everywhere, so opening port 80 and 443 is probably the best option. In theory we could maintain a whitelist and apply more strict firewall rules, but a) I'm not sure if it's worth the effort and b) it's annoying for someone who wants to add a not-yet-whitelisted feed. Including files from github obviously will only fetch files from github.
- a new VM for elasticsearch, ideally only reachable from the wiki VM
In theory this is easily possible by connecting the two machines directly. But I'm asking myself if we might need another VM if the search takes too long?
If we hit performance issues, I'd expect them in the wiki VM, not in the elasticsearch VM (hint: most wiki requests don't need the search), so I seriously doubt we'll need another elasticsearch VM.
Or we play with just another VLAN for this... Suggestions welcome.
I know you love all your fancy toys ;-) I'm more a fan of keeping things simple and boring and would use SuSEfirewall on the elasticsearch VM (as soon as I know the IP of the wiki VM). We'll need to allow incoming connections to elasticsearch and ssh. (Actually everything is salted and I never had to log in to my elasticsearch test VM ;-) The salt-minion only makes outgoing connections AFAIK, so it shouldn't be necessary to have a firewall rule for it.
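For the record, a minimal SuSEfirewall2 sketch for the elasticsearch VM could look like this (the wiki VM address below is a placeholder, and 9200 assumes elasticsearch's default HTTP port):

```
## /etc/sysconfig/SuSEfirewall2 (sketch)
# allow ssh from everywhere
FW_SERVICES_EXT_TCP="22"
# allow elasticsearch (default port 9200) only from the wiki VM
# (192.0.2.10 is a placeholder for the real wiki VM IP)
FW_SERVICES_ACCEPT_EXT="192.0.2.10,tcp,9200"
```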
- a mysql database for each wiki on the galera cluster (ideally with
names like wiki-en and usernames == database name)
This brings me to the question: who is currently maintaining the DB cluster as master admin?
AFAIK Darix.
- optional: a "master" mysql user which can be used to create all the
wiki-* users and databases
From my point of view as former DB-Admin: no.
The reason for this idea was that Darix would need to create only one user, and I could then create users for each wiki myself (or maybe even let salt create them). This obviously has pros and cons ;-)
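For illustration, such a "master" user could be limited to the wiki databases. This is only a sketch - the host and the wiki-% pattern are assumptions based on the naming proposal above, and the cluster admin would run it once:

```sql
-- sketch: user name, host and password are placeholders
CREATE USER 'wikimaster'@'wiki.example.org' IDENTIFIED BY 'changeme';
-- CREATE USER is a global privilege - one of the "cons" mentioned
GRANT CREATE USER ON *.* TO 'wikimaster'@'wiki.example.org';
-- full rights (and the right to grant them) on the wiki databases only
GRANT ALL PRIVILEGES ON `wiki-%`.* TO 'wikimaster'@'wiki.example.org' WITH GRANT OPTION;
```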
- the latest wiki database and files from Provo. For the real (non-test) move, we also need to set $wgReadOnly in Provo to make the current wiki read-only while we move it.
Something for mmaher or rbrown?
Right. Asking Micah is also an option if they are too busy, but for setting $wgReadOnly this probably means a round through the approval meeting.
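For reference, setting the read-only flag is a one-liner in the Provo LocalSettings.php ($wgReadOnly is the standard MediaWiki setting; the message text here is just an example):

```php
// LocalSettings.php in Provo - the message is shown to anyone who tries to edit
$wgReadOnly = 'This wiki is being migrated to the new infrastructure and is temporarily read-only.';
```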
- some configuration on login.o.o for each wiki (similar to what we have for en-test.o.o already)
It's not only login.o.o - haproxy needs adaptions, too.
OK, good to know. Speaking about this - it probably makes sense to set up a temporary wikimove.o.o domain which I can use for testing whatever wiki I'm moving on that day (and before switching over the official domain).
- a backup concept, even if we hopefully never need it ;-) I'd propose
- daily database dumps
should be done already on the cluster. The question might be if you want to have a backup on the machine itself, too. This should be easily doable with the credentials of the wiki-database user.
Can someone (Darix?) confirm the "should be done already", please? ;-)
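In case it isn't: a dump with the wiki database user could be as simple as a cron job on the wiki VM (the path and the wiki-en naming are assumptions based on the proposal above; credentials would come from ~/.my.cnf):

```
# crontab sketch - note the escaped % signs, cron treats bare % specially
15 3 * * * mysqldump --single-transaction wiki-en | gzip > /var/backup/wiki-en-$(date +\%F).sql.gz
```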
- daily backups of /srv/www/ which basically contains the uploaded
files (+ some small config files etc.). Maybe rsnapshot would be
a good solution since 99% of the uploaded files don't change.
Who should be able to access the backups? What about another virtual disk on the VM that can be used as backup disk (with whatever tool you like)?
The backups (both database and files) are only needed for disaster recovery, so knowing that they exist sounds good enough for me ;-) and I'll just assume that you all are professional enough to know (and/or test ;-) that these backups actually work ;-)
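Just to sketch the rsnapshot idea (paths and retention counts are made up; note that rsnapshot.conf requires tab-separated fields):

```
# /etc/rsnapshot.conf (excerpt) - fields below are TAB-separated
snapshot_root	/backup/rsnapshot/
retain	daily	7
retain	weekly	4
backup	/srv/www/	localhost/
```

cron would then call `rsnapshot daily` and `rsnapshot weekly`; unchanged files are hard-linked between snapshots, so the 99% of uploads that never change cost almost no extra space.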
Daily snapshots of all the disks are enabled already on the filer.
Disk snapshots sound like wasting space to me - but since it isn't my disk space... ;-) (yeah, I understand that disk snapshots are an easy way to do backups)
"Really soon" means: We have 36 days left until the 42.3 release, and I'd like to do the wiki update before that - ideally two weeks before the release (= in ~3 weeks) because doing such a major change on release day would be insane.
I know there are quite some dependencies, but I hope everybody involved with one of the requirements will have some time to help with his/her part.
BTW: Does the wiki also send emails?
Good point. Yes, it does - I should have mentioned that ;-) IIRC I've seen something about a mail gateway, but I don't remember the details and can't find it in the admin wiki on progress. Any pointers?
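(For the record: once we know the gateway, pointing MediaWiki at it is a small LocalSettings.php fragment - $wgSMTP is the standard setting, and the relay host below is a placeholder:)

```php
// LocalSettings.php sketch - relay host is a placeholder for the real mail gateway
$wgSMTP = [
    'host'   => 'relay.example.org', // placeholder
    'IDHost' => 'opensuse.org',
    'port'   => 25,
    'auth'   => false,
];
```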
Maybe you can setup a coordination meeting to get the main people behind this project and start the organizational stuff?
There's nothing wrong with a meeting, but I'm not sure if we need it ;-) If everybody just does his/her part, that's fine ;-) For things where timing is important (everything between making a wiki read-only and switching over the DNS entry) I can ping the involved people on IRC.

Oh, another question I forgot to ask yesterday: what about the SSH login? Is everything done via FreeIPA already, or do I need to include my SSH key somewhere?
[1] As I already wrote in my "42.3 JeOS" mail two weeks ago, I'd like to use Leap 42.3 from the beginning, even if it's still under
development. I'll happily help to create the JeOS image if someone creates the openSUSE:infrastructure:Images:openSUSE_Leap_42.3 project in OBS
and gives me permissions.
done. Just play along on openSUSE:infrastructure:Images:openSUSE_Leap_42.3
You already copied my 42.3 JeOS package and the project settings, so this is/was mostly a boring task ;-) However, by copying my JeOS package, you also copied the changes I did for local testing [2] - and I'm quite sure we won't use *.cboltz.de as minion IDs in the openSUSE infrastructure ;-) To fix this, I reverted root.tar.xz to the 42.2 tarball. It doesn't include anything that is version-dependent, so it shouldn't need any changes.

Regards,

Christian Boltz

[1] upstream bug report: https://phabricator.wikimedia.org/T146636
[2] yes, I have a custom JeOS for local testing to make destroying and re-creating VMs even more boring ;-)

--
I've been doing this 10.1 test work just like a real user: In other words I never read any release notes or documentation :-)
[tomhorsley(at)adelphia.net in opensuse-factory]