
Re: [yast-devel] Yast Development after SLE 12: Testing; Success
On Thu, Jun 12, 2014 at 05:02:24PM +0200, Josef Reidinger wrote:
On Thu, 12 Jun 2014 16:30:06 +0200
Lukas Ocilka <lukas.ocilka@xxxxxxxx> wrote:
[...]

- As we will have to add test cases for everything we refactor, should
we also move code from "clients" and "includes" to libraries so that it
is easier to test?

I don't think that should be the main reason. The main reason should be
reusability of code. I have often found that the same thing is done in
multiple places and often it contains bugs, so for me the main reason for
libraries is reusability. If you have small libraries, it is easier to
reuse them than full-blown clients and includes. Besides, includes create
fat modules in the end, so I do not like them much, as they break the
single responsibility principle. It can end up as in Bootloader, where
each module has over 100 methods, from low-level stuff from specific
includes to high-level UI dialogs from a different include.

For me personally it is much easier to understand if I have e.g. three
libraries, where one contains the dialog, the second the current
configuration and the third a specific configuration element, than one
module that mixes all of it.
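
A minimal sketch of what such a split could look like - all class and
method names below are made up, not taken from any existing YaST module:

# Holds the configuration read from the system; knows nothing about UI.
class NetworkConfig
  attr_accessor :hostname

  def initialize(hostname: "")
    @hostname = hostname
  end
end

# One specific configuration element with its validation rule.
class HostnameValidator
  # Very rough check, just to illustrate a single-purpose class.
  def valid?(hostname)
    name = hostname.to_s
    !name.empty? && name.size <= 63
  end
end

# The dialog only collects input; the logic lives in the classes above.
class HostnameDialog
  def initialize(config, validator)
    @config = config
    @validator = validator
  end

  def run(new_hostname)
    return :invalid unless @validator.valid?(new_hostname)
    @config.hostname = new_hostname
    :ok
  end
end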

We should do whatever helps us write meaningful tests. I think this
will probably involve moving code out of clients and includes, but
that's a means, not a goal.
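
For example, a small class like the hypothetical HostnameValidator
sketched above is trivial to cover with RSpec, while the same logic
buried in an include is not:

require "rspec"
require_relative "hostname_validator" # wherever the extracted class lives

describe HostnameValidator do
  subject(:validator) { HostnameValidator.new }

  it "rejects an empty hostname" do
    expect(validator.valid?("")).to eq(false)
  end

  it "accepts a short plain hostname" do
    expect(validator.valid?("sle12-test")).to eq(true)
  end
end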

- How and where to run CI tests for all supported branches?

Because we now use rake osc:build with its own chroot, for SLE-12 and
any later branch we can just use the two CI instances we currently have -
one external and one internal.

Where: on the existing CI servers. How: just add jobs for the
branches. Fairly straightforward, I think.
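
Roughly, each job would just check out its branch and run the rake
targets we already use. A sketch only, assuming a test:unit task exists
next to osc:build and with a made-up branch list:

#!/usr/bin/env ruby
# Illustrative only, not an actual Jenkins job definition.
BRANCHES = ["master", "SLE-12-GA"].freeze

BRANCHES.each do |branch|
  system("git", "checkout", branch) or abort("cannot check out #{branch}")
  system("rake", "test:unit") or abort("unit tests failed on #{branch}")
  system("rake", "osc:build") or abort("package build failed on #{branch}")
end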

- How to measure the success? I'd love to see automatically generated
metrics and code coverage. Metrics could help us to identify the
most-rotten pieces.

I played with https://codeclimate.com/ e.g.
https://codeclimate.com/github/yast/yast-network or
https://codeclimate.com/github/yast/yast-registration

I think it is quite a nice tool, but for YaST usage it is not sensitive
enough, as often you improve a file a lot and it is still rated F :)
Another problem is that it is just one metric, so it is not very robust.

Another option is to use metric_fu to create regular graphs of code
quality; it contains various metrics (each metric is its own separate
tool), so it works better. We used it e.g. for SLMS -
http://docserv.suse.de/prague/slms/measures/ (it actually still
works :)

(It does, but it was last run in January, and the X axis labels are so
numerous that they are illegible.)

metric_fu - https://github.com/metricfu/metric_fu (the original one
looks inactive, but this fork is quite active and they are also adding
other metrics).
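
A minimal way to wire it in would be a rake task that shells out to the
metric_fu executable, so a CI job can archive the generated report
directory after every run (just a sketch):

# Rakefile fragment (sketch): let CI generate and archive the reports.
desc "Generate code quality reports with metric_fu"
task :metrics do
  sh "metric_fu"
end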

Another nice tool for coding conventions is e.g. RuboCop, which is used
by the SCC team; it reports coding convention problems.
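
RuboCop ships its own rake integration, so hooking it into a Rakefile is
a few lines (the actual style rules would still need a .rubocop.yml we
agree on):

# Rakefile fragment: adds a `rake rubocop` task.
require "rubocop/rake_task"

RuboCop::RakeTask.new(:rubocop) do |task|
  # Fail the build on offenses; start from the default configuration.
  task.fail_on_error = true
end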

Whatever tool we use, it is important to automate the reports into
the CI server and produce an (improving) trend graph.

In addition to source code metrics, I guess better code would be
reflected in bug and feature counts.

We should run the metrics at the start and then set a reasonable
number to work toward.

- Where and how to run automatic integration tests? Will openQA help?
We could build our own installation image the same way we did it
for the New Installer; testing this image automatically is just
another logical step.

Doing it before submitting to the target project is problematic, as the
frequency of our changes is quite high, so we would have trouble building
the DVD, and running the tests also takes some time. I think a better way
is to use staging and to fix each failure with a unit test. That way we
prevent regressions and only fix new problems (like changed underlying
stuff).
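
To make "fix each failure with a unit test" concrete: once a staging
failure is tracked down, the broken case gets pinned in a spec before the
fix goes in, so it cannot silently come back. Reusing the hypothetical
HostnameValidator from above, with an invented scenario:

require "rspec"

describe HostnameValidator do
  # Regression spec sketch: encode the exact case that broke in staging.
  it "rejects hostnames longer than 63 characters" do
    expect(HostnameValidator.new.valid?("a" * 64)).to eq(false)
  end
end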

- An additional step is to run such tests for all supported products,
since we have enabled users to install updates during installation -
this is only partly connected to refactoring; it is essential even if
we haven't refactored a single line of code


For the rspec tests it is not a problem, as we run them in a chroot in
the target environment. For the others it depends on the implementation.

Yes, we should do integration and regression tests, as you write. I
don't see a problem, but we will only know once we start.

Thanks for starting the discussion! I was starting to see only bugs in
the SLE work.
--
Martin Vidner, Cloud & Systems Management Team
http://en.opensuse.org/User:Mvidner

A smoking section in a restaurant is like a fecal section in a swimming pool