
Re: [yast-devel] Yast Development after SLE 12: Why, What, How Much
On Fri, 13 Jun 2014 15:16:52 +0200
Martin Vidner <mvidner@xxxxxxx> wrote:

On Thu, Jun 12, 2014 at 05:02:24PM +0200, Josef Reidinger wrote:
On Thu, 12 Jun 2014 16:30:06 +0200
Lukas Ocilka <lukas.ocilka@xxxxxxxx> wrote:

[Ruby YCP - answered in a separate mail]

What we will need is the "Refactoring and Cleanup" (RAC) phase,
to be a regular part of the development. Obviously, the result of
the RAC has to end up in the SLE 12 codebase. I've already started
talking with the Product Management (PM) and they are
more-or-less aligned with this idea. And of course, I'd love to
see the RAC in openSUSE as well. For that reason, it might make
sense to develop both in the same branch as long as possible (SLE
12 GA maintenance [+ SP1 development?], openSUSE development).

I think it makes sense as long as openSUSE and SLE 12 do not diverge
much in terms of the configured stuff. If e.g. parted changes in
openSUSE, then it is hard to keep the same code (and similarly for
other critical parts like grub2, systemd, wicked, etc.).

OK. Does it make sense with the schedules too? Is openSUSE 13.2 still
due in November 2014? Any public comment about SLE 12 SP1?

I'd like to improve these interconnected areas:

- fewer bugs
- better test coverage
- better code maintainability in the future

Do we focus on everything? What are our goals for the refactoring?
How do we measure success?

Developer happiness matters to me, but it is hard to quantify. I
really enjoyed the progress spreadsheet we had for YCP Killer, and I
think it is important to find a meaningful metric and see it
improving as we work together. So the most important questions below
are Which parts to refactor and How to measure success.

For me, a measure of success is also how much a developer worries
about breaking something. That is usually solved by tests, and it can
be measured with mutation testing -
https://github.com/mbj/mutant
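
A minimal sketch of how such a run could be wired in (the subject
pattern and the required file are hypothetical, and mutant's exact
flags depend on its version):

  # Rakefile - hypothetical task running mutant against one module.
  # "Yast::Bootloader*" is only an example subject pattern; adjust
  # --include/--require to the layout of the package under test.
  task :mutant do
    sh "bundle exec mutant --include src/modules " \
       "--require bootloader --use rspec 'Yast::Bootloader*'"
  end

A kill rate below 100% points at code that the tests execute but do
not really verify.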

So these are reasonable metrics I think we should focus on for the
refactoring:

- test coverage + mutation testing (so that we are not scared to
modify the code)
- file size (no one wants to read overly long files; they are hard to
search, hard to test, and it is hard to keep all their dependencies
in mind)
- method size (a method that does too much is hard to understand, and
also hard to change while keeping everything in mind)
- public interface size (smaller is better; provide a good API that
does everything that is needed and not more, which makes it easier to
keep things backward compatible). In converted code this is the
number of methods in publish, for new code the number of public
methods on classes (a counting sketch follows below) -

http://pdepend.org/documentation/software-metrics/number-of-public-methods.html
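
A rough counting sketch for that last metric in converted code (the
glob assumes a hypothetical source layout; it simply counts publish
calls per file):

  #!/usr/bin/env ruby
  # Crude public-interface-size metric: count publish calls per module
  # and print the biggest offenders first.
  counts = Dir.glob("src/modules/*.rb").map do |file|
    [file, File.read(file).scan(/^\s*publish\b/).size]
  end
  counts.sort_by { |_, n| -n }.each do |file, n|
    puts format("%4d  %s", n, file)
  end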

In general we can take inspiration from already existing metrics,
e.g. for object-oriented code -
http://agile.csc.ncsu.edu/SEMaterials/OOMetrics.htm


There are many open questions that need to be brainstormed before
we decide and start planning, to name some:

- Which parts deserve the refactoring (e.g., those that we often
touch, those that are not understandable anymore, those that are
buggy, ...)? Which are your favorite ones?

From what I see in current development, license handling, the
slideshow, and the UI libraries deserve a better API that can benefit
from Ruby. In general, the most important thing is to refactor parts
that will be changed (due to bugs or features). So for me it makes
sense to improve the parts that we want to change, as it speeds up
development and decreases the number of bugs.

Some options how to decide WHAT to refactor:

1. Measure code quality everywhere (with metric_fu) and pick the
worst code.
2. Measure code changes (metric_fu churn), assuming that what has
changed in the past will need changes in the future.
3. Count bugs. Probably hard on a file level, but easy on a package
level by counting "bnc" in *.changes (see the sketch after this
list). That counts fixed bugs, not reported ones.
4. Count feature requests (done and pending).
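
For option 3, a sketch of the package-level count (assuming a
hypothetical checkout layout with one *.changes file per package):

  #!/usr/bin/env ruby
  # Count distinct fixed-bug references per package by scanning the
  # *.changes files for bnc# numbers.
  Dir.glob("*/package/*.changes").map do |changes|
    [changes, File.read(changes).scan(/bnc#\d+/).uniq.size]
  end.sort_by { |_, n| -n }.each do |changes, n|
    puts format("%4d  %s", n, changes)
  end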

I think 2 and 4 make sense, as 3 is usually just a subset of 2, and 1
can flag code that we do not want to touch and just hope no one uses
:)

The problem with 2-4 is that they only count the final place of a
change. Often when I debug code, or consider where to implement
something, I need to study how it works now, which means reading some
ugly code only to find that in the end I need to modify a different
file used from that ugly code. But I think that is an acceptable
tradeoff.
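
Option 2 can also be approximated without metric_fu, straight from
the git history (a sketch; the one-year window is arbitrary):

  #!/usr/bin/env ruby
  # Approximate churn: how many commits touched each file recently.
  counts = Hash.new(0)
  log = `git log --since="1 year ago" --name-only --pretty=format:`
  log.split("\n").reject(&:empty?).each { |file| counts[file] += 1 }
  counts.sort_by { |_, n| -n }.first(20).each do |file, n|
    puts format("%4d  %s", n, file)
  end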


I think that (1) is wrong, as it is perfectly OK to leave bad code
alone as long as it works. For (2) and (3) it should be fairly easy
to automate the numbers and I could do that. I don't know about (4).

- How deep should the refactoring be? We have to keep the current
"API", but what should be considered the API, as we might be the
only users?

It depends on the situation. I think some parts need just a small
refactoring. Some need quite a heavy rewrite, as they are a horrible
mess.

The AutoYaST XML is definitely API, and so is what the user can do in
the UI. Otherwise I think it does not matter.

I agree about the AY API.

HOW MUCH depends on 2 things I think:

1. The amount of code needing refactoring (see What above) and our
team*time resources.
2. Our ability to refactor properly, that is, change code without
fear of regressions, that is, having meaningful tests in place for
the changed code.

Sometimes I find that writing tests for old code is so hard that it
is easier for me to first change the code into a more testable form,
write the tests afterwards, and then do manual testing. That discovers
some bugs, so I improve the tests to cover them, and repeat until the
manual testing confirms that it works. Of course this has a higher
risk of regressions, but on the other hand it is much more effective.
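
A sketch of the kind of test this loop produces (module, method, and
expected value are all hypothetical; the point is to pin down the
current behavior before the refactoring starts):

  # spec/example_storage_spec.rb - a characterization test.
  require_relative "test_helper"  # hypothetical helper setting Yast paths

  Yast.import "ExampleStorage"    # hypothetical converted module

  describe Yast::ExampleStorage do
    it "proposes the documented default when nothing is configured" do
      expect(Yast::ExampleStorage.DefaultFilesystem).to eq(:btrfs)
    end
  end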


(2) means we should put a high priority on testing AutoYaST. And get
more familiar with openQA for the interactive installation tests.

Yes, more testing for AutoYaST would definitely help a lot. And it
should be quite easy, as it is automated deployment and we have
example profiles from our customers.

Josef
