Hi,
We are changing the development model of Chameleon to make it more reliable: https://github.com/openSUSE/chameleon/issues/18
The master branch (https://static.opensuse.org/chameleon/) is the stable branch: well tested and not updated frequently. It is used by all production sites.
The dev branch, which we would like to deploy at https://static.opensuse.org/chameleon-dev/ , is the unstable version for testing and can break now and then. It is used by testing sites and local development environments.
We need your help to set up a cron job that pulls the dev branch. Thanks!
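A minimal sketch of what that cron job could look like (the checkout path and the schedule are assumptions, not decisions):

```
# illustrative crontab entry: refresh the chameleon-dev checkout every 15 minutes
*/15 * * * * cd /srv/www/chameleon-dev && git fetch --quiet origin dev && git reset --quiet --hard origin/dev
```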
--
Guo Yunhe / @guoyunhe / guoyunhe.me
--
To unsubscribe, e-mail: heroes+unsubscribe(a)opensuse.org
To contact the owner, e-mail: heroes+owner(a)opensuse.org
Hello everybody,
I have a question about download.o.o statistics.
So the question is: is there any way, in our current environment, to
know how many times *any* file is being downloaded?
I am asking because I have created some ad-hoc virtual-machine
appliances, and it would be very beneficial to know how many times an
appliance gets downloaded (and, if possible, to distinguish newcomers
from returning users).
The appliances live in one of the following folders, depending on
whether they are for Leap 15.2 or Tumbleweed:
1)
https://download.opensuse.org/repositories/Virtualization:/Appliances:/Imag…
2)
https://download.opensuse.org/repositories/Virtualization:/Appliances:/Imag…
and the file names follow the following convention:
machinelearning-appliance.x86_64-*
Besides, in general I would think that even knowing how many times RPMs
are being downloaded (via zypper) could be beneficial to better
understand end-user needs and trends.
Is there anything that could be extracted from logs/metrics for
statistics purposes, as per my initial question?
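To illustrate the kind of answer I am after: assuming Apache-style access logs exist somewhere for download.o.o (the log format and its availability are assumptions on my side), a count could be as simple as the following, shown here against a tiny stand-in log:

```shell
# Build a tiny stand-in for an access log; in reality this would be the
# (assumed) download.o.o access log file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
1.2.3.4 - - [01/Apr/2020:10:00:00 +0000] "GET /repositories/Virtualization:/Appliances:/Images/machinelearning-appliance.x86_64-1.0.0.qcow2 HTTP/1.1" 200 1024
5.6.7.8 - - [01/Apr/2020:10:01:00 +0000] "GET /other/file.rpm HTTP/1.1" 200 2048
9.8.7.6 - - [01/Apr/2020:10:02:00 +0000] "GET /repositories/Virtualization:/Appliances:/Images/machinelearning-appliance.x86_64-1.0.1.qcow2 HTTP/1.1" 200 1024
EOF
# total downloads matching the appliance naming convention
total=$(grep -c 'machinelearning-appliance\.x86_64-' "$LOG")
# unique client IPs as a rough proxy for "newcomers vs returning users"
unique=$(grep 'machinelearning-appliance\.x86_64-' "$LOG" | awk '{print $1}' | sort -u | wc -l | tr -d ' ')
echo "downloads=$total unique_ips=$unique"
rm -f "$LOG"
```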
Thank you for your consideration.
Best regards,
--
Marco Varlese
Architect Developer Technologies & AI/ML
SUSE Labs
SUSE Software Solutions Italy S.r.l.
(T) +39 02.947.570.06
(M) +39 345.591.51.01
(E) marco.varlese(a)suse.com
(W) https://gotomeet.me/marcovarlese
Hi @ll
I finally want to start with the new Email setup for openSUSE - and I'm currently looking for volunteers... :-)
Planned short term:
* set SPF records for openSUSE domains
* install mx1 and mx2.opensuse.org as incoming servers
** use postfix and rspamd
** integrate clamav (and reject messages seen as spam directly)
** integrate the alias table for members
Planned mid term:
* enable DKIM on all outgoing mail servers
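For the SPF part, an illustrative record could look like the following (the exact policy and host list are assumptions, not the final setup):

```
; DNS zone fragment (illustrative only)
opensuse.org.  IN  TXT  "v=spf1 mx a:mx1.opensuse.org a:mx2.opensuse.org ~all"
```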
Something I missed?
Regards,
Lars
Hi
After the latest changes and new databases (hello, forums! ;-), I
decided that it's time to check the current situation of our database
cluster again.
You might know mysqltuner.pl and tuning-primer.sh from the
mysql-tuning-scripts package: the two tools analyze a running
mysql/mariadb and galera cluster and print out statistics and
recommendations (and even more, if you call them with the right
options ;-).
Both tools recommended increasing the "InnoDB buffer pool size". I
cross-checked this with an SQL statement stolen from stackexchange[1]:
=> result of all tools: Recommended_InnoDB_Buffer_Pool_Size = 32G
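A commonly cited variant of that query looks roughly like this (an approximation; the exact statement is behind the link):

```sql
-- Recommended buffer pool ~ total InnoDB data+index size plus ~60% headroom,
-- rounded up to whole GB
SELECT CEILING(Total_InnoDB_Bytes * 1.6 / POWER(1024, 3)) AS RIBPS_GB
FROM (SELECT SUM(data_length + index_length) AS Total_InnoDB_Bytes
      FROM information_schema.tables
      WHERE engine = 'InnoDB') AS A;
```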
As the nodes inside the cluster just had 16G RAM and already showed
some lags, I decided to follow the recommendation - but also increase
the amount of available RAM on the machines.
The maximum amount of RAM configured/available for each node was 20G.
Good enough to increase the 16G to 20G without downtime, but not enough
for the planned 32G innodb_buffer_pool if it really gets used.
So I decided to set the current amount of RAM for each node to 48G and
added a buffer in the VM configuration up to 64G. This needed a
cold restart of all nodes (and the backup host) - but everything went
fine. Just remember that we already set the systemd timeout value
"TimeoutSec=7200" for the mariadb service: in the worst case (more than
40 minutes of mariadb downtime - initially it was 20 - triggers an
SST restore via mariabackup for ~40G of database files), the mariadb
service needs up to 1.5 hours to start.
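For reference, that timeout corresponds to a drop-in along these lines (the file path is an assumption; the value matches the one mentioned above):

```
# /etc/systemd/system/mariadb.service.d/timeout.conf (illustrative path)
[Service]
TimeoutSec=7200
```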
So we are now at 48G RAM for each node together with the following
InnoDB settings in my.cnf:
innodb_buffer_pool_size = 32G
innodb_buffer_pool_instances = 12
innodb_doublewrite = 1
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 8M
innodb_log_file_size = 4G
innodb_lock_wait_timeout = 240
innodb_print_all_deadlocks = 1
innodb_io_capacity = 2000
innodb_read_io_threads = 64
innodb_thread_concurrency = 0
innodb_write_io_threads = 64
Thanks for reading so far :-)
From my side, there are currently some TODOs and open questions left:
* there is a database called "database" on each node, which does not
contain any tables, but only a file called db.opt with the content:
default-character-set=utf8mb4
default-collation=utf8mb4_general_ci
Does anyone know why it is there?
* Some wiki databases now have 3 different collations and 2 different
engines defined. Note: interestingly, not all of them. Example:
[!!] 3 different collations for database wiki_it
[!!] 2 different engines for database wiki_it
[!!] wiki_it table column(s) has several charsets defined for all text like column(s).
[!!] wiki_it table column(s) has several collations defined for all text like column(s).
[!!] There are 2 storage engines. Be careful.
Affected wiki databases: wiki_es, wiki_it, wiki_nl, wiki_pt,
wiki_old_de, wiki_zh, wiki_en
and: webforums
When I migrated the wiki databases, this was not the case.
* Some (uh: 8944!) tables also have wrong types set for some fields.
* Also very important: a lot of tables don't have primary keys. This
slows down the synchronization between the nodes. I will see if there
are some guidelines for the mediawiki databases to add indexes to
these tables - but some webforums tables also don't have primary
keys. @Per: maybe you can check this for vB?
* On the other hand, there are ~20 unused indexes. I will remove
them next week.
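Regarding the missing primary keys: a query along these lines (a sketch against information_schema, not tested on our cluster) should list the InnoDB tables that lack one:

```sql
-- Find InnoDB tables without a PRIMARY KEY constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.engine = 'InnoDB'
  AND c.constraint_type IS NULL
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
```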
I will start with some "ALTER TABLE" operations during the next days,
to fix some of the settings above. Depending on the size of the tables,
this might lead to some (hopefully short) locks of the databases.
With kind regards,
Lars
--
[1]:
https://dba.stackexchange.com/questions/27328/how-large-should-be-mysql-inn…
Hello Lars,
On Wed, 18 Mar 2020, Lars Vogdt wrote:
> While... on the other hand... now that we have a proof of concept that
> we can upgrade Tumbleweed machines... If you don't mind, I want to put
> this machine on a list of "run zypper dup once a month". If we check
> right afterwards that everything still works as expected (=> monitoring:
> just tell me what we should check), we should be ready to go. The worst
> case would be that we lose a few hours, if we notice early
> enough that something is broken and need to revert to a
> former snapshot state.
The two important "services" on that machine would be a simple HTTPS
check on some random URL, let's say
https://gcc.opensuse.org/
(better would be
https://gcc.opensuse.org/gcc-old/SPEC/CINT/sb-czerny-head-64-2006/recent.ht…
but the 'gcc-old' part at least changed once over the last 15 years, so
it's only semi-stable),
and a check that would verify rsync-via-ssh access from inside the SUSE
network (that is what got broken; the webserver continued to work). I
don't know if you have the capability to do such checking. (I could add
an ssh key for that purpose.)
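If icinga2 is an option for this, the HTTPS part could be a standard check from the ITL, roughly like the following (object names and placement are assumptions):

```
// illustrative icinga2 service definition for the HTTPS check
object Service "https-gcc" {
  host_name       = "gcc.opensuse.org"
  check_command   = "http"
  vars.http_ssl   = true
  vars.http_vhost = "gcc.opensuse.org"
}
```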
Ciao,
Michael.
JFYI: for the moment, I disabled the central syslog server in favor of
the new graylog instance. No need to change anything on the host side:
the syslog server is forwarding the traffic directly into the graylog
queue.
This means that the syslog server does NOT keep a copy of the remote
logs on disk any longer. Instead, everything is pushed directly into an
Elasticsearch database for further processing.
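The forwarding itself boils down to a rule like the following on the syslog server (host and port are assumptions; Graylog would listen with a matching syslog input):

```
# illustrative rsyslog rule: relay everything to Graylog via TCP
# instead of writing local files
*.* @@graylog.infra.opensuse.org:5140
```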
This in turn means that the old monitoring check for outdated remote
logs is turned off. In the last weeks, I was mostly busy checking
why a specific host was no longer sending logs - and often enough,
these hosts simply had nothing to say (as they are planned replacements
that are not active yet). In the end, I became too lazy to
reconfigure that check again and again - so at least I am not sad about
losing this check.
For those who want to have a look at the new frontend for log files:
https://graylog.opensuse.org/
Note: an openSUSE Heroes LDAP account (FreeIPA) is needed to log in.
A nice starting point might be the good documentation at:
https://docs.graylog.org/en/3.2/pages/queries.html
Regards,
Lars
On Friday/Saturday, I worked a bit on the matomo package in
network:utilities and switched our productive Matomo instance from pure
mod_php to php-fpm. This gave a noticeable performance boost for the
analytics section (I did not measure the time for the pure matomo.php
script execution - so this is just an improvement for those looking at
the statistics).
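For the curious, the switch boils down to handing .php requests to a PHP-FPM pool, roughly like this in the Apache vhost (the socket path is an assumption):

```
# illustrative Apache config: route PHP to a php-fpm unix socket
# instead of mod_php
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/matomo.sock|fcgi://localhost"
</FilesMatch>
```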
For those with a login: please use
https://beans.opensuse.org/matomo/ in the future. I will keep the old
https://beans.opensuse.org/piwik/ URL for some time, but depending on
the usage, it might get deprecated/removed in the future. (It's just
an alias anyway, but, hey: keeping your environment clean starts with
these little details ;-)
Regards,
Lars
Hello,
FYI:
http://opensuse.org and https://opensuse.org redirect to www.o.o. Until
now, this redirect was done by a server (or cluster of servers) in
Provo, which sadly broke today:
https://progress.opensuse.org/issues/64722
Needless to say, we have some experience with opening tickets in
Provo, so I just took the quick route and changed the DNS entry -
the opensuse.org redirect is now done by anna/elsa in Nuremberg.
Possible risk: /openid - but according to the IRC discussion, that risk
should be quite small. And even if I broke it, it would still be better
than having http://opensuse.org broken ;-)
Before:
# host opensuse.org
opensuse.org has address 130.57.66.19
opensuse.org has IPv6 address 2620:113:80c0:8::19
opensuse.org mail is handled by 42 mx1.suse.de.
opensuse.org mail is handled by 42 mx2.suse.de.
After:
# host opensuse.org
opensuse.org has address 195.135.221.140
opensuse.org has IPv6 address 2620:113:80c0:8::16
opensuse.org mail is handled by 42 mx2.suse.de.
opensuse.org mail is handled by 42 mx1.suse.de.
The IPs are those of proxy.o.o/redirector.o.o - CNAME doesn't allow
other entries, and we need the MX entries.
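In zone-file terms, the apex now looks roughly like this (values taken from the `host` output above):

```
; illustrative zone fragment for the opensuse.org apex
opensuse.org.  IN  A      195.135.221.140
opensuse.org.  IN  AAAA   2620:113:80c0:8::16
opensuse.org.  IN  MX 42  mx1.suse.de.
opensuse.org.  IN  MX 42  mx2.suse.de.
```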
Regards,
Christian Boltz
--
Not mentioned in the `known features'-list so far...
[found on https://bugzilla.novell.com/show_bug.cgi?id=152068]
For those with VPN access:
http://forum.infra.opensuse.org
Running on SLES12SP5 - Lars installed a VM for me yesterday.
This is still vBulletin 4.2.2p4, a raw copy from Provo. It should also
be available at (though it may or may not quite work):
https://forums-nbg.opensuse.org
My own view of the above is screwed up due to a baseurl =
http://forums-nbg.opensuse.org - yours might work. There is some
caching going on, somewhere.
I have not done anything to the database except import it, including
whatever character set problems it contains.
Next thing to sort out - authentication. iChain anyone?
--
Per Jessen, Zürich (9.4°C)
Member, openSUSE Heroes
Hi
Just in case you wondered what happened during the last weeks...
1) DNS
Our FreeIPA instance is finally the one and only official instance for
* opensuse.org
* opensuse.de
* opensuse.fr
The connection to the MF-IT network and servers, as mentioned at
https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/DNS
is history. Only the DNS servers inside the heroes network are left and
do what good DNS servers do: just work.
Each server is answering ~15 requests per second, according to the
statistics.
==> I just want to wait until next month before I create an
announcement that "openSUSE is now in Heroes hands" - it might be that
I need some help with the text and the merge request... ;-)
2) Monitoring
I started with
https://monitor.opensuse.org/icingaweb2/
and connected it directly with the LDAP server. That means: people in
the "monitoring-admins" group have full access, and people in
"monitoring-user" have normal rights. But please note that currently
just 2 hosts are monitored. The reason: the old icinga setup uses
check_mk for autodiscovery and autogeneration of most checks, which is
no longer possible with icinga2. I want to use Salt as a replacement.
Instead of "pnp4nagios", the new icinga2 uses Grafana:
https://monitor.opensuse.org/grafana/d/YyV2BduWk/base-metrics?orgId=1&refre…
It's the same instance which is also providing some graphs for:
* our elasticsearch cluster for the wikis
* the Galera cluster
* the PostgreSQL cluster
=> https://monitor.opensuse.org/grafana/
This instance is also connected to our LDAP. Data sources are currently
InfluxDB (for icinga2), PostgreSQL and Prometheus (for Galera).
Instead of just storing the logs of our hosts on a hard drive and
waiting for someone to have a look at them (which honestly more or
less never happens), I decided to move forward and installed Graylog
here: https://graylog.opensuse.org/
As - for example -
https://graylog.opensuse.org/dashboards/5e6ea77657c155111a8fbd37 shows,
this makes it much easier to get an overview of "what's currently going
on" on our machines. The filtering and search functionality is IMHO way
easier than in the alternative ELK stack (which is also harder to
maintain as a package). I did not set up big dashboards yet, in the
hope that some other volunteer steps in :-)
Ah, needless to say, Graylog also uses our LDAP, and normally all
monitoring-admins are also Graylog admins. But - as with Grafana - if
you want more than your original rights, just ping me.
3) Support
I hope I did not forget/overlook too many requests for additional
resources or machines. Open (incl. "feedback") progress tickets are
down to 142[1].
4) Mirrors
I tried to reduce the number of messages sent to the admin-auto mailing
list by provo-mirror.opensuse.org, olaf (our scanner) and pontifex
(aka download.o.o) over the last weeks. This included some (small)
fixes in a few packages (did I ever mention that I don't know Python?).
Overall just minor stuff - but if you were wondering, you now know whom
to ask.
5) CaaSP
As nobody had been actively maintaining the CaaSP cluster any longer
(for about half a year), I asked in https://progress.opensuse.org/issues/54977
for someone to step in - but got no feedback. As nobody was
logging in to the machines either, and their content (with one
exception below) had already been migrated to other machines, I shut
the machines down on the 1st of March.
The only real issue since then is that we lost our docker containers
with the gitlab runners. I'm sorry for that, but these runners were
outdated (42.3, anyone?) anyway.
I discussed this with Ricardo (our Gitlab guru) and we decided to set
up two independent machines just to host the runners (via docker
containers) in the future (gitlab-runner{1,2}.infra.opensuse.org).
This is the next TODO on the list - I'm currently just waiting for a
time slot from Ricardo to finalize the setup.
---
There is probably a bit more - but my brain is currently blocking my
memory. I just wanted to give you a short status summary of the topics
I still remember.
Stay healthy!
Lars
--
[1]:
https://monitor.opensuse.org/pnp4nagios/graph?host=redmine.infra.opensuse.o…