On Thu, 12 Jun 2014 16:30:06 +0200 Lukas Ocilka <lukas.ocilka@suse.com> wrote:
Hi,
The current YaST code that we will have in SLE 12 is partly automagically translated from YCP to a kind of Ruby (also called RYCP) and still uses many YCP-like methods for backward compatibility. Partly, because we have already cleaned up some pieces when touching them, following the Boy-Scout Rule [#1], and so they feel quite OK. Of course, we will not be able to clean up everything we wanted, and once SLE 12 is out, we will have to maintain the codebase for years.
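As a hypothetical illustration of the kind of cleanup meant here (the data shape and variable names below are invented; `Ops.get` and `Builtins` are the real YaST compatibility helpers the RYCP translation produces):

```ruby
# Invented profile data, just for the example.
profile = { "partitioning" => { "disk" => "/dev/sda" } }

# RYCP style (sketch, as the auto-translation would emit it):
#   disk = Ops.get_string(profile, ["partitioning", "disk"], "")

# Idiomatic Ruby doing the same lookup with a default:
disk = profile.fetch("partitioning", {}).fetch("disk", "")
puts disk # => /dev/sda
```

The point of the Boy-Scout Rule is exactly this kind of small, local conversion done while passing through the code anyway.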
What we will need is a "Refactoring and Cleanup" (RAC) phase as a regular part of development. Obviously, the result of the RAC has to end up in the SLE 12 codebase. I've already started talking with Product Management (PM) and they are more or less aligned with this idea. And of course, I'd love to see the RAC in openSUSE as well. For that reason, it might make sense to develop both in the same branch as long as possible (SLE 12 GA maintenance [+ SP1 development?], openSUSE development).
I think it makes sense as long as openSUSE and SLE 12 do not diverge much in terms of the stuff being configured. If e.g. parted changes in openSUSE, then it is hard to keep the same code (and similarly for other critical parts like grub2, systemd, wicked, etc.).
I'd like to improve these interconnected areas:
- fewer bugs
- better test coverage
- better code maintainability in the future
There are many open questions that need to be brainstormed before we decide and start planning, to name some:
- Which parts deserve the refactoring (e.g., those that we often touch, those that are not understandable anymore, buggy ones, ...)? Which are your favorite candidates?
From what I see in current development, license handling, the slideshow, and the UI libraries deserve a better API that can benefit from Ruby. In general, the most important thing is to refactor parts that will be changed (due to bugs or features). So for me it makes sense to improve the parts that we want to change, as it speeds up development and decreases the number of bugs.
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
Depends on the situation. I think some parts need just a small refactoring; some need quite a heavy rewrite, as they are a horrible mess. The API is definitely the AutoYaST XML, and also what the user can do in the UI. Otherwise I think it does not matter.
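To make the "AutoYaST XML is the API" point concrete, here is a made-up sketch (class name, profile keys, and defaults are all invented): the public import/export hash format stays fixed, while everything behind it is free to be rewritten.

```ruby
# Hypothetical module treating the AutoYaST profile hash as the stable API.
class KeyboardConfig
  attr_reader :layout

  # Stable entry point: consumes the AutoYaST profile hash as-is.
  def import(profile)
    @layout = profile.fetch("keymap", "english-us")
  end

  # Stable exit point: emits the same hash format back.
  def export
    { "keymap" => @layout }
  end
end

cfg = KeyboardConfig.new
cfg.import("keymap" => "czech")
p cfg.export
```

As long as `import`/`export` round-trip the same XML-derived hash, the internals can be refactored arbitrarily without breaking existing profiles.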
- As we will have to add test cases for everything we refactor, should we also move code from "clients" and "includes" to libraries to make it easier to test?
I think it should not be the main reason. The main reason should be reusability of code. I often find that the same thing is done in multiple places and often contains bugs. So for me the main reason for libraries is reusability. If you have small libraries, it is easier to reuse them than full-blown clients and includes. Besides, includes create fat modules in the end, so I do not like them much, as they break the single responsibility principle. It can end up as in Bootloader, where each module has over 100 methods, from low-level stuff from specific includes to high-level UI dialogs from a different include. For me personally it is much easier to understand if I have e.g. three libraries, one containing the dialog, the second the current configuration and the third a specific configuration element, than one module that mixes all of it.
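A minimal sketch of that three-library split (all class names and the timeout setting are invented for illustration; real Bootloader code looks nothing this simple):

```ruby
# Specific configuration element: knows only about one value.
class TimeoutSetting
  attr_accessor :seconds

  def initialize(seconds = 8)
    @seconds = seconds
  end
end

# Current configuration: owns the elements, no UI knowledge.
class BootloaderConfig
  attr_reader :timeout

  def initialize
    @timeout = TimeoutSetting.new
  end
end

# Dialog: talks only to the configuration, no low-level details.
class TimeoutDialog
  def initialize(config)
    @config = config
  end

  def apply(user_input)
    @config.timeout.seconds = Integer(user_input)
  end
end

config = BootloaderConfig.new
TimeoutDialog.new(config).apply("15")
puts config.timeout.seconds # => 15
```

Each class has one reason to change, and each can be unit-tested on its own, which is exactly what a 100-method mixed module prevents.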
- How and where to run CI tests for all supported branches?
Because we now use rake osc:build with its own chroot, for SLE 12 and any later branch we can just use the two CI instances we currently have: one external and one internal.
- How to measure the success? I'd love to see automatically generated metrics and code coverage. Metrics could help us to identify the most-rotten pieces.
I played with https://codeclimate.com/ e.g. https://codeclimate.com/github/yast/yast-network or https://codeclimate.com/github/yast/yast-registration. I think it is quite a nice tool, but for YaST usage it is not sensitive enough, as often you improve a file a lot and it is still an F :) Another problem is that it is just one metric, so it is not very robust. An option could be to use metric_fu to create regular graphs of code quality; it contains various metrics (each metric is its own separate tool), so it works better. We used it e.g. for SLMS - http://docserv.suse.de/prague/slms/measures/ (it actually still works :). metric_fu - https://github.com/metricfu/metric_fu (the original one looks inactive, but this fork is quite active and they also add other metrics). Another nice tool for coding conventions is rubocop, which is used by the SCC team; it reports coding convention problems.
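As a sketch of how rubocop could be eased into a legacy RYCP codebase (the cop names are real rubocop cops, but the limits and choices below are invented, not a recommendation):

```yaml
# Hypothetical .rubocop.yml starting point for a YaST module.
Metrics/MethodLength:
  Max: 30            # translated RYCP methods are long; start lenient, tighten later
Metrics/AbcSize:
  Enabled: true      # flags the most tangled methods first
Style/Documentation:
  Enabled: false     # skip class docs until the API surface settles
```

Starting with relaxed limits and ratcheting them down per module keeps the report actionable instead of an all-red wall.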
- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer; testing this image automatically is just another logical step.
If we do it before submitting to the target project, it is problematic, as the frequency of our changes is quite high, so we would have trouble building the DVD, and it also takes some time to run the tests. I think a better way is to use staging and fix each failure with a unit test. That way we prevent regressions and fix only new problems (like changed underlying stuff).
- An additional step is to run such tests for all supported products, since we have enabled users to install updates during installation. This is only partly connected to refactoring; it's essential even if we haven't refactored a single line of code.
For RSpec tests it is not a problem, as we run them in a chroot in the target environment. For the others, it depends on the implementation. Josef
A few buzzwords for fun :)
- automation (don't do manually what you don't need to do)
- unification (DRYing)
- standardization (use libraries, don't write it yourself)
#1 http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule
#2 http://martinfowler.com/articles/workflowsOfRefactoring/
Thanks for your time, and I'm looking forward to your ideas and opinions. Bye, Lukas
--
To unsubscribe, e-mail: yast-devel+unsubscribe@opensuse.org
To contact the owner, e-mail: yast-devel+owner@opensuse.org