I just updated https://en-test.opensuse.org/ to MediaWiki 1.37.1.
Those with access to our salt repo already know that the upgrade
needed several changes in config files etc.
The database upgrade took quite some time, but was nearly painless (the
only exception: some AbuseFilter database indexes).
Please test if everything still works, and tell me if you find any
problems.
Known bug: the new ElasticSearch server is not set up yet, therefore the
search is completely broken and will not find anything.
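For whoever sets up the new search backend later: a minimal sketch (the endpoint and wiring are assumptions, not our actual setup) of how one could check an Elasticsearch `_cluster/health` response before re-enabling the wiki search:

```python
import json

def search_backend_usable(health_json: str) -> bool:
    """Return True if an Elasticsearch /_cluster/health response reports
    a usable cluster ("green" or "yellow"); "red", missing, or garbage
    means the wiki search would stay broken."""
    try:
        health = json.loads(health_json)
    except json.JSONDecodeError:
        return False
    return health.get("status") in ("green", "yellow")

# A cluster that is not set up yet typically yields no usable status:
print(search_backend_usable('{"cluster_name": "wiki", "status": "red"}'))    # False
print(search_backend_usable('{"cluster_name": "wiki", "status": "green"}'))  # True
```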
BTW: en-test is based on a terribly outdated database dump of the
English wiki, so don't expect to find anything about the 15.x
releases on it.
I'll import the latest database dump from en.o.o into en-test.o.o (and
also sync file uploads) in a few days to do another test upgrade - but
I'll first give you the chance to find technical errors ;-)
One piece of advice: if you maintain a C&C server (which is both a
really bad idea and a criminal act, and as such strongly discouraged),
always use a strong password.
It's very unprofessional if your server is cracked this easily.
openSUSE:infrastructure contains some packages that don't have a build
target anymore. (This typically means they were enabled only for older
distributions like Leap 42.3.)
If nobody objects, I'll delete the following packages in a week:
- protobuf (link to Factory)
- python-six and python-six:test (outdated link to Factory)
- python-urllib3 and python-urllib3:test (outdated link to Factory)
- salt (outdated link to Factory, only build target is 15.2, and that is
hefur also doesn't have a build target anymore, but since it's a real
package (and is probably still used on download.o.o), I won't delete it.
The intellectual horizon is the distance between the board and the head.
I hope everyone enjoyed the Christmas break.
I took the time to clean up the network bridge between Nuremberg and Provo - and to do some DNS cleanups.
The most interesting one might be: widehat.opensuse.org is gone.
As we have been running a new hypervisor machine named stonehat.opensuse.org for a while now - and the real hardware of the former widehat.o.o is gone - I decided to give the (new) VM a speaking name: rsync.opensuse.org
Reverse DNS, mb and rsync stuff is done. I hope I found most other important places. Only Salt needs someone who knows how to rename a minion.
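For whoever picks up the minion rename - a rough sketch of the usual salt-key procedure (host names taken from this mail; please verify paths and service names against our actual Salt setup before running anything):

```shell
# On the salt master: forget the key registered under the old minion id
salt-key -d widehat.opensuse.org

# On the minion: stop the service, set the new id, restart
systemctl stop salt-minion
echo rsync.opensuse.org > /etc/salt/minion_id
systemctl start salt-minion

# Back on the master: accept the key under the new id
salt-key -a rsync.opensuse.org
```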
More about the new hypervisor (including the generous personal hardware sponsoring from our bcache code maintainer, Coly Li) and other (sponsoring) news from 2021 will become public on news.o.o soon. We are currently just waiting for the last approvals from some sponsors.
I would just like to follow up on the wiki update topic from IRC:
This week I would like to start playing with OBS: I'm going to set up an environment for it and start looking into the wiki build. If you have any advice, throw it my way ;)
I have a few pretty hectic weeks at work, but by the end of next week I should be good to work on this project. If all goes well, I can keep maintaining things going forward.
If there is a deadline on this, please let me know so I can try to prioritize things differently.
here are the minutes of today's heroes meeting:
- test package is building
- test installation will probably be updated next week
- meet.o.o can currently only have SUSE admins because it's used for
- meet-test.o.o server can have community admins (use one of the
*.infra.o.o VMs as jump host for ssh login)
- we have membership management again (phpMyAdmin)
- helios update prepared, waiting for a review of the salt MR
- Mirrorcache will implement a redirect in the future which will
redirect the user to a more local mirrorcache instance to make
subsequent requests faster
- Notes from Lars: https://progress.opensuse.org/issues/101851#note-2 -
TODO: JeOS changes need to also go into salt
- Notes from Per on the mailing list:
- Neal will try to fix the 500s when creating a new issue on code.o.o
- Bernhard will try to find the ticket about the jenkins for openSUSE:
- IRC/matrix bridge is broken,
- datacenter move planned for April, we should have backup instances of
*.o.o in Provo or at QSC to avoid downtime
- backup server is ready - and waiting for admins to back up their
stuff. Please open a ticket if you want to join the "backup party"
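The MirrorCache redirect mentioned in the minutes can be sketched roughly like this - the central instance answers a client's first request with a redirect to a regional instance, so subsequent requests hit a nearby server. The instance names and the region mapping below are pure assumptions for illustration, not the real deployment:

```python
# Hypothetical region -> regional-instance mapping (made-up host names)
REGIONAL_INSTANCES = {
    "na": "mirrorcache-na.example.org",
    "eu": "mirrorcache-eu.example.org",
}
CENTRAL_INSTANCE = "mirrorcache.example.org"

def redirect_target(client_region: str, path: str) -> str:
    """Pick the instance a client should be redirected to (HTTP 302).
    Unknown regions fall back to the central instance."""
    host = REGIONAL_INSTANCES.get(client_region, CENTRAL_INSTANCE)
    return f"https://{host}{path}"

print(redirect_target("eu", "/distribution/leap/15.3/"))
# https://mirrorcache-eu.example.org/distribution/leap/15.3/
```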
openSUSE Infrastructure Contributor Agreement:
- it is based on trust
- should include the common openSUSE principles/guidelines
- we cannot require an openSUSE hero to be an openSUSE member - as they
earn the credits to become a member precisely by working as an openSUSE
hero - but we can ask them to follow the member guidelines in general
- will be moved by a week to Jan 11th
My very first hard disk had 30 MB, and I was the king, because everyone
else had 20 MB. They asked what I wanted with 30 MB - I would never fill
it anyway. ;) My current graphics card has more. ;))
[Bernd Brodesser in suse-linux]
On Friday, 3 December 2021 10:33:40 CET, Douglas DeMaio wrote:
> Hi heroes,
> We've been having community meetings, and some people have expressed
> interest in helping out with the Jitsi instance performance. It looks like
> an openSUSE Heroes VPN account should in theory be able to access the
> machine. However, only people with SSH keys on it can actually access it.
> There is a setup on another machine as a test instance - kind of like how
> we used to have meet.o.o and meet2.o.o.
> This machine is identical to the currently used one, but it's been idling
> for some weeks. Do you think it would be possible to create a new setup on
> a new server (test instance)? Maybe have the new one as meettest.o.o and
> the old one as meet.o.o for a while? And when that new one is ready for
> production, change the DNS for both so that the current one becomes
> meettest and the new one becomes meet. Lars will need the ssh keys from
> the heroes for the
> I'm including Bill and Knurpht since they have expressed interest in helping
> and will forward to Jens, who has also expressed interest in helping.
Gertjan Lettink a.k.a. Knurpht
openSUSE Forums Team
As you might have seen on https://status.opensuse.org/: the (synapse-)matrix service on matrix.o.o is currently down on purpose.
As the service configuration - and probably the whole setup - differs from the package containing the latest (security) fixes, we decided to turn the service off instead of risking anything. Now we're waiting for someone (you?) to update the service...
Dear admins: please keep your services up to date and secure at all times. We are already trying our best to keep the underlying OS up to date for you. But that does not help if the admins of the services are not doing their job.
Our current infrastructure policy hasn't changed since 2020. So there shouldn't be anything new in it for our admins. -> please follow the rules.