Hi
TL;DR:
- added documentation for Image testing
- prepared some additional machines in Nuremberg
- tuned some QEMU settings
- status.opensuse.org shows some metrics now
Long version:
If you want to start testing our openSUSE infrastructure at home (or on
your laptop), there is now documentation in the admin wiki[1] that
describes how to do this via KVM/QEMU (and the graphical
virt-manager) and our own JeOS image, which is built in
openSUSE:infrastructure:Images:openSUSE_Leap_15.1 jeos - feel free to
enhance the documentation or ask questions.
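For a quick first test without virt-manager, something like the
following sketch could work (the image file name and the memory size
are placeholders, not the real values - the wiki page has the details):

```shell
#!/bin/sh
# Hypothetical sketch: print a plain KVM/QEMU command line that boots
# the JeOS image. Image file name and memory size are placeholders -
# the wiki page describes the (graphical) virt-manager way in detail.
IMAGE="openSUSE-Leap-15.1-JeOS.qcow2"

boot_cmd() {
    echo "qemu-system-x86_64 -enable-kvm -m 2048 -cpu host" \
         "-drive file=$IMAGE,if=virtio -nic user"
}

# print the command line; copy and paste it once the image
# has been downloaded or built
boot_cmd
```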
I created the following machines in the Nuremberg infrastructure:
1) riesling3.infra.opensuse.org for wiki testing on Leap 15.1
2) os-rt.infra.opensuse.org, to host a [Request
Tracker](https://bestpractical.com/request-tracker) installation.
3) nue-ns1.infra.opensuse.org as dedicated (external) DNS server in
Nuremberg
None of the three machines above is in production yet, but they are
"base installed" and should be reachable via Salt.
Every time I rebooted a machine (while installing kernel updates or
upgrading the OS), I also checked the QEMU settings: I upgraded them to
the latest version and added the host-CPU passthrough option and the
QEMU guest agent channel. This might give some machines a performance
boost (as, for example, the sse* CPU flags can now be used) as well as
some additional features for the admins of the virtualization servers.
I did not enable the qemu-guest-agent service on the virtual machines
(yet), but as I think it is very useful, I will do this later.
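Enabling the agent inside a guest later should boil down to something
like this sketch (package and service name are the usual openSUSE ones,
but treat them as assumptions; run as root):

```shell
#!/bin/sh
# Hypothetical sketch of enabling the guest agent inside a VM.
# Package and service name are assumed to be "qemu-guest-agent",
# as usual on openSUSE. Pass "echo" as first argument for a dry
# run that only prints the commands instead of executing them.
enable_guest_agent() {
    run=$1    # "" to execute, "echo" for a dry run
    $run zypper --non-interactive install qemu-guest-agent
    $run systemctl enable --now qemu-guest-agent
}

# enable_guest_agent ""     # really install and enable (as root)
# enable_guest_agent echo   # dry run: just print the commands
```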
Ah, and if you wonder about the additional metrics at
https://status.opensuse.org/ - this is something I did as well. What
you need for this:
1) define a metric in status.opensuse.org
2) get your personal API token
3) think about something like:

   ID=<the metric id - the number from https://status.opensuse.org/api/v1/metrics>
   TIMESTAMP=$(date +%s)
   VALUE=<whatever_you_want>
   METRIC_URL="https://status.opensuse.org/api/v1/metrics/$ID/points"

4) execute:

   curl --silent -H "Content-Type: application/json;" \
        -H "X-Cachet-Token: <your_own_token>" \
        --request POST --url "$METRIC_URL" \
        --data "{\"value\":$VALUE,\"timestamp\":\"$TIMESTAMP\"}"
The latency values for software.o.o and www.o.o come from my
workstation (via a cron job that pings the URLs).
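The cron job itself can be sketched roughly like this (the token
placeholder, the metric ids in the comments and the helper names are my
assumptions, not the real values):

```shell
#!/bin/sh
# Hypothetical sketch of the latency cron job. Token and metric ids are
# placeholders - look the ids up via /api/v1/metrics.
STATUS_API="https://status.opensuse.org/api/v1"
TOKEN="<your_own_token>"

# convert curl's time_total (seconds, e.g. 0.123) to milliseconds
to_ms() {
    awk -v t="$1" 'BEGIN { printf "%.0f\n", t * 1000 }'
}

# measure the latency of $1 and push it as a point for metric id $2
report_latency() {
    url=$1; metric_id=$2
    seconds=$(curl --silent --output /dev/null \
                   --write-out '%{time_total}' "$url")
    curl --silent -H "Content-Type: application/json;" \
         -H "X-Cachet-Token: $TOKEN" \
         --request POST --url "$STATUS_API/metrics/$metric_id/points" \
         --data "{\"value\":$(to_ms "$seconds"),\"timestamp\":\"$(date +%s)\"}"
}

# report_latency https://software.opensuse.org 1   # metric id 1 assumed
# report_latency https://www.opensuse.org 2        # metric id 2 assumed
# (the script itself is then triggered from a crontab entry)
```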
These points are open:
* get more testers and contributors for the image and the infrastructure
* bring the new machines into production
* think about an internal CA for our machines
=> NRPE needs certificates (currently done via script)
=> PostgreSQL and MySQL/Galera need (new) certificates
=> other services might benefit from certificates as well
I hope I did not forget too much :-)
With kind regards,
Lars
--
[1]:
https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Virtual_mac…
--
To unsubscribe, e-mail: heroes+unsubscribe(a)opensuse.org
To contact the owner, e-mail: heroes+owner(a)opensuse.org
Hi
TL;DR:
- svn.opensuse.org is migrated to SLE-12-SP5
- git, subversion and viewvc are updated
Long version:
svn.opensuse.org was running SLE-11 with very old subversion and viewvc
packages.
I decided to migrate the machine to SLE-12-SP5, including a package
version update for the devel:tools:scm:svn repository.
Subversion should work again (I ran "svnadmin upgrade $repo"), but I
did not test this for all repos. The new viewvc package works as
expected. Git also works (the machine is hosting the openSUSE kernel
git repo).
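Upgrading the remaining repositories could be done with a loop roughly
like this (the repository root path in the usage comment is an
assumption, not the real location on the machine):

```shell
#!/bin/sh
# Hypothetical sketch: run "svnadmin upgrade" on every repository below
# a given root directory. The path in the usage comment is a placeholder.
upgrade_repos() {
    root=$1
    for repo in "$root"/*; do
        # only touch directories that look like svn repositories
        [ -f "$repo/format" ] || continue
        echo "upgrading $repo"
        svnadmin upgrade "$repo"
    done
}

# upgrade_repos /srv/svn/repos
```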
These points are open:
* Migrate to Leap 15.x - a new machine inside the heroes network (named
svn2.infra.opensuse.org) already exists
With kind regards,
Lars
--
Hi
TL;DR:
- Upgraded community.infra.opensuse.org to SLE-12-SP5
- linked getmail and nrpe for this to openSUSE:infrastructure
community.opensuse.org was running SLE-11-SP4. While I saw that some
work has been started to migrate the main content of the machine to new
installations, I decided that it was time to get rid of this old stuff
and migrated the machine to SLE-12-SP5 (yes, there is an option to
migrate further, but I'm not sure whether that might break too much).
The official nrpe package is too old for our monitoring setup (which
uses SSL certificates), so I linked the nrpe package from
server:monitoring into openSUSE:infrastructure for SLE-12-SP5.
These points are open:
* migrate the services there to new Leap 15.x machines
With kind regards,
Lars
--
Hi
TL;DR:
- Galera cluster is upgraded to latest version from server:database
- backups are done each day (03:00 UTC) by mybackup.infra.opensuse.org
now
- backup retention is planned to be 30 days (calculation still ongoing)
Long version:
The Galera cluster inside the openSUSE Heroes network was one of the
last things I finished in 2017. I would like to thank everyone who kept
the services up, running and up to date!
As the load on the cluster went high when the daily backup was
triggered on one node - and my old script did not 100% guarantee that
all backups were consistent - I decided to make some changes.
Now - as before - there are three nodes building one Galera cluster (in
the end, this means that there are 3 MariaDB nodes that keep each other
in sync permanently). But I put another node right beside this cluster,
running a plain MariaDB installation. This node is not joined into the
cluster - instead, it syncs itself as "slave" from one of the Galera
nodes. To be more robust, this slave is running in read-only mode,
which forbids any local changes to the databases.
I first tried to sync the node via the new "Global Transaction ID"[1],
which would allow easily switching to another Galera node. But this is
currently not possible, as all Galera nodes use different IDs. So I
ended up binding "mybackup" to "galera1" as slave directly. I already
changed the firewall and other settings on all nodes, so a switch-over
is possible in general, but it would need a "CHANGE MASTER TO ..."
command, including binlog file and position information. So this can
only be done manually for the moment.
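A manual switch-over could then look roughly like this sketch (host
name, replication user, binlog file and position are all placeholders -
the real file and position come from "SHOW MASTER STATUS" on the new
master node):

```shell
#!/bin/sh
# Hypothetical sketch: emit the SQL that re-points mybackup to another
# Galera node. All arguments are placeholders; take MASTER_LOG_FILE and
# MASTER_LOG_POS from "SHOW MASTER STATUS" on the new master node, and
# adjust the replication user to the real one.
switch_master_sql() {
    host=$1; binlog_file=$2; binlog_pos=$3
    cat <<EOF
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='$host',
  MASTER_USER='replication',
  MASTER_LOG_FILE='$binlog_file',
  MASTER_LOG_POS=$binlog_pos;
START SLAVE;
EOF
}

# feed the statements into the mysql client on mybackup:
# switch_master_sql galera2.infra.opensuse.org mysql-bin.000042 12345 | mysql
```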
On mybackup, I installed and configured the mysql-backupscript package
(see openSUSE:infrastructure). This package has been enhanced with pre-
and post-scripts that stop the slave process before the backup starts -
and start it again once the backup is finished. This finally allows
completely consistent backups of every InnoDB database running on the
Galera cluster.
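The idea behind the pre-/post-scripts can be sketched like this (the
function names and paths are mine, not the ones from the package):

```shell
#!/bin/sh
# Hypothetical sketch of the pre-/post-script idea: stop replication
# before the dump and restart it afterwards, so the dump always sees a
# frozen, consistent state of the slave.
pre_backup_sql()  { echo "STOP SLAVE;"; }
post_backup_sql() { echo "START SLAVE;"; }

# mysql-backupscript would then run something like:
# pre_backup_sql | mysql
# mysqldump --all-databases > /backup/dump.sql   # path is a placeholder
# post_backup_sql | mysql
```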
Please note that MyISAM databases are NOT synchronized between the
Galera nodes. So every database that should run there needs some
inspection.
These points are open:
* check final space for backup (might either need an extension of the
backup disk - or fewer backups)
* check if we might be able to get a unique GTID from each node, to
allow the mybackup host to use the load balancer in front for the
backup - instead of relying on one single Galera node
* test a complete restore
With kind regards,
Lars
--
[1]: https://mariadb.com/kb/en/gtid/
--
Hello and happy new year!
The next heroes meeting will be on Tuesday (2020-01-07) at 19:00 UTC /
20:00 CET in the #opensuse-admin IRC channel.
See https://progress.opensuse.org/issues/60578 for the topics - there's
already a quite long list, and additional details in the comments.
Please have your status reports prepared (copy&paste welcome) so that we
get all topics covered.
Regards,
Christian Boltz
--
Please do not think so much about licenses, it will just make
your head explode if not carefully studied over the years ;)
[Marcus Meissner in opensuse-packaging]
--