[yast-devel] Yast Development after SLE 12
Hi,

The current Yast code that we will have in SLE 12 is partly automagically translated from YCP to a kind of Ruby (also called RYCP) and is still using many YCP-like methods for backward compatibility. Partly, because we have already cleaned up some pieces when touching them, following the Boy-Scout Rule [#1], and so they feel quite OK. Of course, we will not be able to clean up everything we wanted, and when SLE 12 is out, we will have to maintain the codebase for years.

What we will need is the "Refactoring and Cleanup" (RAC) phase, to be a regular part of the development. Obviously, the result of the RAC has to end up in the SLE 12 codebase. I've already started talking with the Product Management (PM) and they are more-or-less aligned with this idea. And of course, I'd love to see the RAC in openSUSE as well. For that reason, it might make sense to develop both in the same branch as long as possible (SLE 12 GA maintenance [+ SP1 development?], openSUSE development).

I'd like to improve these interconnected areas:

- fewer bugs
- better test coverage
- better code maintainability in the future

There are many open questions that need to be brainstormed before we decide and start planning, to name some:

- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?

- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?

- As we will have to add test cases for everything we refactor, should we also move code from "clients" and "includes" to libraries to make them easier to test?

- How and where to run CI tests for all supported branches?

- How to measure the success? I'd love to see automatically generated metrics and code coverage. Metrics could help us to identify the most-rotten pieces.

- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer; testing this image automatically is just another logical step.

- An additional step is to run such tests for all supported products, since we have enabled users to install updates during installation. This is connected to refactoring only partly; it's essential even if we haven't refactored a single line of code.

A few buzzwords for the fun :)

- automation (don't do manually what you don't need to do)
- unification (DRYing)
- standardization (use libraries, don't write it yourself)

#1 http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule
#2 http://martinfowler.com/articles/workflowsOfRefactoring/

Thanks for your time, I'm looking forward to your ideas and opinions.

Bye
Lukas
--
Lukas Ocilka, Systems Management (Yast) Team Leader
Cloud & Systems Management Department, SUSE Linux
On Thu, 12 Jun 2014 16:30:06 +0200 Lukas Ocilka <lukas.ocilka@suse.com> wrote:
Hi,
The current Yast code that we will have in SLE 12 is partly automagically translated from YCP to a kind of Ruby (also called RYCP) and is still using many YCP-like methods for backward compatibility. Partly, because we have already cleaned up some pieces when touching them, following the Boy-Scout Rule [#1], and so they feel quite OK. Of course, we will not be able to clean up everything we wanted, and when SLE 12 is out, we will have to maintain the codebase for years.
What we will need is the "Refactoring and Cleanup" (RAC) phase, to be a regular part of the development. Obviously, the result of the RAC has to end up in the SLE 12 codebase. I've already started talking with the Product Management (PM) and they are more-or-less aligned with this idea. And of course, I'd love to see the RAC in openSUSE as well. For that reason, it might make sense to develop both in the same branch as long as possible (SLE 12 GA maintenance [+ SP1 development?], openSUSE development).
I think it makes sense as long as openSUSE and SLE 12 do not diverge much in the stuff we configure. If e.g. parted changes in openSUSE, then it is hard to keep the same code (and similarly for other critical parts like grub2, systemd, wicked etc.).
I'd like to improve these interconnected areas:
- fewer bugs - better test coverage - better code maintainability in the future
There are many open questions that need to be brainstormed before we decide and start planning, to name some:
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
From what I see in current development, license handling, the slideshow, and the UI libraries deserve a better API that can benefit from Ruby. In general, the most important thing is to refactor parts that will be changed (due to bugs or features). So for me it makes sense to improve the parts that we want to change, as that speeds up development and decreases the number of bugs.
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
It depends on the situation. I think some parts need just a small refactoring. Some need quite a heavy rewrite, as they are a horrible mess. The API is definitely the AutoYaST XML; also, the API is what the user can do in the UI. Otherwise I think it does not matter.
- As we will have to add test cases for everything we refactor, should we also move code from "clients" and "includes" to libraries to make them easier to test?
I think that should not be the main reason. The main reason should be reusability of the code. I often find that the same thing is done in multiple places, and it often contains bugs. So for me the main reason for libraries is reusability. If you have small libraries, it is easier to reuse them than full-blown clients and includes. Besides, includes create fat modules in the end, so I do not like them much, as they break the single responsibility principle. It can end up as in Bootloader, where each module has over 100 methods, from low-level stuff from specific includes to high-level UI dialogs from a different include.

For me personally it is much easier to understand if I have e.g. three libraries, where one contains the dialog, the second the current configuration and the third a specific configuration element, than one module that mixes all of it.
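For illustration, a rough sketch of that split; all class names below are made up for the example, not existing Yast code:

  # ui: knows only how to show the dialog and read the user's answer
  class BootTimeoutDialog
    def run(timeout)
      # build the widgets from timeout, return the value the user entered
      timeout
    end
  end

  # state: knows only how to read and write the current configuration
  class BootloaderConfig
    attr_accessor :timeout

    def initialize(timeout)
      @timeout = timeout
    end
  end

  # one specific configuration element with its own small logic
  class BootTimeout
    def initialize(seconds)
      @seconds = seconds.to_i
    end

    def valid?
      @seconds >= 0
    end
  end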
- How and where to run CI tests for all supported branches?
Because we now use rake osc:build with its own chroot, for SLE-12 and any later branch we can just use the two CI instances we currently have - one external and one internal.
- How to measure the success? I'd love to see automatically generated metrics and code coverage. Metrics could help us to identify the most-rotten pieces.
I played with https://codeclimate.com/ e.g. https://codeclimate.com/github/yast/yast-network or https://codeclimate.com/github/yast/yast-registration

I think it is quite a nice tool, but for Yast usage it is not sensitive enough, as you often improve a file a lot and it is still an F :) Another problem is that it is just one metric, so it is not very robust.

An option is to use metric_fu to create regular graphs of code quality; it contains various metrics (each metric is its own separate tool), so it works better. We used it e.g. for SLMS - http://docserv.suse.de/prague/slms/measures/ (it actually still works :)

metric_fu - https://github.com/metricfu/metric_fu (the original one looks inactive, but this fork is quite active and they also add other metrics).

Another nice tool for coding conventions is e.g. rubocop, which is used by the SCC team; it reports coding convention problems.
- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer, testing this image automatically is just another logical step.
If we do it before submitting to the target project it is problematic, as the frequency of our changes is quite high, so we would have trouble building a DVD, and it also takes some time to run the tests. I think a better way is to use staging and fix each failure together with a unit test. That way we prevent regressions and fix only new problems (like changed underlying stuff).
- Additional step is to run such tests for all supported products since we have enabled users to install updates during installation - this is connected to refactoring only partly, it's essential even if we haven't refactored a single line of code
For the rspec tests it is not a problem, as we run them in a chroot in the target environment. For others it depends on the implementation.

Josef
A few buzzwords for the fun :)
- automation (don't do manually what you don't need to do) - unification (DRYing) - standardization (use libraries, don't write it yourself)
#1 http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule #2 http://martinfowler.com/articles/workflowsOfRefactoring/
Thanks for your time, I'm looking forward to your ideas and opinions.

Bye
Lukas
On Thu, Jun 12, 2014 at 05:02:24PM +0200, Josef Reidinger wrote:
On Thu, 12 Jun 2014 16:30:06 +0200 Lukas Ocilka <lukas.ocilka@suse.com> wrote:
[Ruby YCP - answered in a separate mail]
What we will need is the "Refactoring and Cleanup" (RAC) phase, to be a regular part of the development. Obviously, the result of the RAC has to end up in the SLE 12 codebase. I've already started talking with the Product Management (PM) and they are more-or-less aligned with this idea. And of course, I'd love to see the RAC in openSUSE as well. For that reason, it might make sense to develop both in the same branch as long as possible (SLE 12 GA maintenance [+ SP1 development?], openSUSE development).
I think it make sense as long as opensuse and SLE12 do not much diverge in sense of configured stuff. If e.g. parted changed in opensuse, then it is hard to keep same code ( and similar for other critical parts like grub2, systemd, wicked etc. ).
OK. Does it make sense with schedules too? openSUSE 13.2 still in November 2014? Any public comment about SLE12 SP1?
I'd like to improve these interconnected areas:
- fewer bugs - better test coverage - better code maintainability in the future
Do we focus on everything? What are our goals for the refactoring? How do we measure success? Developer happiness matters to me, but it is hard to quantify. I really enjoyed the progress spreadsheet we had for YCP Killer, and I think it is important to find a meaningful metric and see it improving as we work together. So the most important questions below are Which parts to refactor and How to measure success.
There are many open questions that need to be brainstormed before we decide and start planning, to name some:
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
From what I see in current development it is license handling, slideshow, ui libraries deserve better API that can benefit from ruby. In general the most important is to refactor parts that will be changed ( due to bugs or features ). So for me it make sense to improve parts that we want change, as it speed up developement and decrease number of bugs.
Some options for how to decide WHAT to refactor:

1. Measure code quality everywhere (with metric_fu) and pick the worst code.
2. Measure code changes (metric_fu churn), assuming that what has changed in the past will need changes in the future.
3. Count bugs. Probably hard on a file level, but easy on a package level by counting "bnc" in *.changes. That counts fixed bugs, not reported ones.
4. Count feature requests (done and pending).

I think that (1) is wrong, as it is perfectly OK to leave bad code alone as long as it works. For (2) and (3) it should be fairly easy to automate the numbers and I could do that (a rough sketch is below). I don't know about (4).
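A rough sketch of automating (3) on the package level; the paths are only an assumption about the checkout layout:

  # count "bnc" references per package in the *.changes files
  counts = Dir.glob("yast-*/package/*.changes").map do |changes|
    [changes, File.read(changes).scan(/bnc/i).size]
  end

  counts.sort_by { |_file, count| -count }.each do |file, count|
    puts format("%4d  %s", count, file)
  end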
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
Depends on situation. I think some parts need just small refactoring. Some need quite heavy rewrite, as it is horrible mess.
API is definitivelly autoyast XML, also API is what user can do in UI. Otherwise I think it do not matter.
I agree about the AY API.

HOW MUCH depends on 2 things, I think:

1. The amount of code needing refactoring (see What above) and our team*time resources.
2. Our ability to refactor properly, that is, to change code without fear of regressions, that is, having meaningful tests in place for the changed code.

(2) means we should put a high priority on testing AutoYaST, and get more familiar with openQA for the interactive installation tests.
[Testing, Measuring]
EEMAILTOOLONG. To Be Continued!
--
Martin Vidner, Cloud & Systems Management Team
http://en.opensuse.org/User:Mvidner
On 13.6.2014 15:16, Martin Vidner wrote:
Developer happiness matters to me, but it is hard to quantify. I really enjoyed the progress spreadsheet we had for YCP Killer, and I think it is important to find a meaningful metric and see it improving as we work together. So the most important questions below are Which parts to refactor and How to measure success.
+1

I also enjoyed the YCP Killer project because we had a clear goal (with clear steps) and a metric to measure and evaluate the progress. It was really great to see at the end of the day that the percentage of the current step had increased. The change was very small (usually just about 2% a day, sometimes more, sometimes less), but it was clear that we were progressing and heading in the right direction towards the target. At any time during the conversion we knew exactly where we were and how much work was left.
1. Measure code quality everywhere (with metric_fu) and pick the worst code. 2. Measure code changes (metric_fu churn), assuming what has changed in the past will need change in the future. 3. Count bugs. Probably hard on a file level but easy on a package level by counting "bnc" in *.changes. That counts fixed bugs, not reported ones. 4. Count feature requests (done and pending).
I think that (1) is wrong, as it is perfectly OK to leave bad code alone as long as it works.
Yes, and there are some Yast packages which are not used much and might potentially be removed in the future. So it's OK to leave them in the current (not perfect) state.

--
Best Regards
Ladislav Slezák
Yast Developer, SUSE LINUX, s.r.o.
On Fri, 13 Jun 2014 15:16:52 +0200 Martin Vidner <mvidner@suse.cz> wrote:
On Thu, Jun 12, 2014 at 05:02:24PM +0200, Josef Reidinger wrote:
On Thu, 12 Jun 2014 16:30:06 +0200 Lukas Ocilka <lukas.ocilka@suse.com> wrote:
[Ruby YCP - answered in a separate mail]
What we will need is the "Refactoring and Cleanup" (RAC) phase, to be a regular part of the development. Obviously, the result of the RAC has to end up in the SLE 12 codebase. I've already started talking with the Product Management (PM) and they are more-or-less aligned with this idea. And of course, I'd love to see the RAC in openSUSE as well. For that reason, it might make sense to develop both in the same branch as long as possible (SLE 12 GA maintenance [+ SP1 development?], openSUSE development).
I think it make sense as long as opensuse and SLE12 do not much diverge in sense of configured stuff. If e.g. parted changed in opensuse, then it is hard to keep same code ( and similar for other critical parts like grub2, systemd, wicked etc. ).
OK. Does it make sense with schedules too? openSUSE 13.2 still in November 2014? Any public comment about SLE12 SP1?
I'd like to improve these interconnected areas:
- fewer bugs - better test coverage - better code maintainability in the future
Do we focus on everything? What are our goals for the refactoring? How do we measure success?
Developer happiness matters to me, but it is hard to quantify. I really enjoyed the progress spreadsheet we had for YCP Killer, and I think it is important to find a meaningful metric and see it improving as we work together. So the most important questions below are Which parts to refactor and How to measure success.
For me, a measure of success is also how much a developer worries about breaking something. That is usually solved by tests, and it can be measured - https://github.com/mbj/mutant

So for me, reasonable metrics on which we should focus for refactoring:

- test coverage + mutation testing (so we are not scared to modify the code)
- file size (no one wants to read too-long files; they are hard to search, hard to test, and it is hard to keep all the dependencies in mind)
- method size (a method that does too much is hard to understand and also hard to change while keeping all the stuff in mind)
- public interface size (smaller is better; provide a good API that does all that is needed and not more, which makes it easier to keep things backward compatible). In converted code this is the number of methods in publish, for new code the public methods on classes - http://pdepend.org/documentation/software-metrics/number-of-public-methods.h...

In general we can be inspired by already existing metrics, e.g. for object-oriented code - http://agile.csc.ncsu.edu/SEMaterials/OOMetrics.htm
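The "public interface size" metric for the converted code should be easy to script; a quick-and-dirty sketch (regex-based, so only a rough approximation):

  # count publish() calls per converted module
  Dir.glob("src/modules/*.rb").each do |file|
    published = File.read(file).scan(/^\s*publish\b/).size
    puts format("%3d published methods  %s", published, file)
  end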
There are many open questions that need to be brainstormed before we decide and start planning, to name some:
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
From what I see in current development it is license handling, slideshow, ui libraries deserve better API that can benefit from ruby. In general the most important is to refactor parts that will be changed ( due to bugs or features ). So for me it make sense to improve parts that we want change, as it speed up developement and decrease number of bugs.
Some options how to decide WHAT to refactor:
1. Measure code quality everywhere (with metric_fu) and pick the worst code. 2. Measure code changes (metric_fu churn), assuming what has changed in the past will need change in the future. 3. Count bugs. Probably hard on a file level but easy on a package level by counting "bnc" in *.changes. That counts fixed bugs, not reported ones. 4. Count feature requests (done and pending).
I think 2 and 4 make sense, as 3 is usually just a subset of 2, and 1 can affect code that we do not want to touch and hope no one uses :)

The problem with 2-4 is that they only count the final place, but often when I debug code or consider where to implement something, I need to study how it works now, which often means reading some ugly code only to find that in the end I need to modify a different file used from this ugly code. But I think it is an acceptable tradeoff.
I think that (1) is wrong, as it is perfectly OK to leave bad code alone as long as it works. For (2) and (3) it should be fairly easy to automate the numbers and I could do that. I don't know about (4).
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
Depends on situation. I think some parts need just small refactoring. Some need quite heavy rewrite, as it is horrible mess.
API is definitivelly autoyast XML, also API is what user can do in UI. Otherwise I think it do not matter.
I agree about the AY API.
HOW MUCH depends on 2 things I think:
1. The amount of code needing refactoring (see What above) and our team*time resources. 2. Our ability to refactor properly, that is, change code without fear of regressions, that is, having meaningful tests in place for the changed code.
Sometimes I find that writing tests for old code is so hard that it is easier for me to first change the code to a more testable form, write the tests afterwards, and then do manual testing that discovers some bugs; then I improve the tests to cover them, and repeat until manual testing confirms that it works. Of course this has a higher risk of regressions, but on the other hand it is much more effective.
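Roughly the workflow I mean, on a made-up example (HostnameValidator is not existing Yast code, just an illustration):

  # step 1: move the logic out of the include into a small class
  class HostnameValidator
    VALID = /\A[a-z0-9][a-z0-9-]*\z/i

    def valid?(name)
      !name.nil? && name.size <= 63 && name =~ VALID ? true : false
    end
  end

  # step 2: start a spec and extend it whenever manual testing finds
  # a case the code gets wrong
  require "rspec"

  describe HostnameValidator do
    subject { HostnameValidator.new }

    it "accepts a simple hostname" do
      expect(subject.valid?("dhcp42")).to eq true
    end

    it "rejects nil and names starting with a dash" do
      expect(subject.valid?(nil)).to eq false
      expect(subject.valid?("-bad")).to eq false
    end
  end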
(2) means we should put high priority on testing Autoyast. And get more familiar with openqa for the interactive installation tests.
Yes, more testing for AutoYaST would definitely help a lot. And it should be quite easy, as it is an automatic deployment and we have example profiles from our customers.

Josef
On Tue, Jun 24, 2014 at 01:48:42PM +0200, Josef Reidinger wrote:
For me success measurement is also how much developer worry that he brake something. It usually is solved by tests and it can be measured - https://github.com/mbj/mutant
So for me reasonable metrics on which we should focus for refactoring:
- test coverage + mutation testing ( to not scare modify code ) - file size ( noone want to read too long files, hard to search, hard to test, hard to keep all dependencies ) - method size ( hard to understand method that do too much and also hard to change it with keeping all stuff in mind ) - public interface size ( smaller better, provide good API, that do all needed and not more, easier to keep things backward compatible ), in converted code it is number of methods in publish, for new code public methods on classes - http://pdepend.org/documentation/software-metrics/number-of-public-methods.h...
In general we can be inspired by already existing metrics, e.g. for object oriented code - http://agile.csc.ncsu.edu/SEMaterials/OOMetrics.htm
Please keep in mind that we have code in several languages, e.g. Ruby, C++, Perl, Bash, Python, and also plain data, e.g. XML.

Regards,
Arvin
--
Arvin Schnell, <aschnell@suse.de>
Senior Software Engineer, Research & Development, SUSE LINUX Products GmbH
On Tue, 24 Jun 2014 13:54:19 +0200 Arvin Schnell <aschnell@suse.de> wrote:
On Tue, Jun 24, 2014 at 01:48:42PM +0200, Josef Reidinger wrote:
For me success measurement is also how much developer worry that he brake something. It usually is solved by tests and it can be measured - https://github.com/mbj/mutant
So for me reasonable metrics on which we should focus for refactoring:
- test coverage + mutation testing ( to not scare modify code ) - file size ( noone want to read too long files, hard to search, hard to test, hard to keep all dependencies ) - method size ( hard to understand method that do too much and also hard to change it with keeping all stuff in mind ) - public interface size ( smaller better, provide good API, that do all needed and not more, easier to keep things backward compatible ), in converted code it is number of methods in publish, for new code public methods on classes - http://pdepend.org/documentation/software-metrics/number-of-public-methods.h...
In general we can be inspired by already existing metrics, e.g. for object oriented code - http://agile.csc.ncsu.edu/SEMaterials/OOMetrics.htm
Please keep in mind that we have code in several languages, e.g. Ruby, C++, Perl, Bash, Python, and also plain data, e.g. XML.
Regards, Arvin
Good point. There are two ways to see it:

- We are going to unify on Ruby and C++ (and of course data in various formats), so in the current world, where there are two main ways of coding (functional and object-oriented), the object-oriented one makes more sense for us, as we need state, and Ruby supports it quite nicely.
- The principles of object-oriented code can be used in almost all languages; it is just a question of how hard it is. Even GTK in pure C uses objects (BTW the target map passed to some storage methods also simulates an object, it is just not coupled with its methods).

I think a lot of the principles written above make sense even in a language that is not purely object-oriented.

Josef
On Tue, Jun 24, 2014 at 01:48:42PM +0200, Josef Reidinger wrote:
On Fri, 13 Jun 2014 15:16:52 +0200 Martin Vidner <mvidner@suse.cz> wrote:
Some options how to decide WHAT to refactor:
1. Measure code quality everywhere (with metric_fu) and pick the worst code. 2. Measure code changes (metric_fu churn), assuming what has changed in the past will need change in the future. 3. Count bugs. Probably hard on a file level but easy on a package level by counting "bnc" in *.changes. That counts fixed bugs, not reported ones. 4. Count feature requests (done and pending).
I think 2 and 4 make sense, as 3 is usually just subset of 2 and 1 can affect code that we do not want to touch and hope noone use it :)
Problem of 2-4 is that it only count final place, but often when I debug code or considering where to implement it I need to study how it works now and it often need to read some ugly code and found, that in the end I need to modify different file used from this ugly code, but I think it is acceptable tradeoff.
What to refactor

I have used dirty cheap tricks to produce some initial metrics to help us decide what to refactor, on a package level. Pepa has also suggested other metrics, but they work on file level and are not so cheap.

Top 10 results are inlined, complete in attachments.

1) Churn

Take the number of version bumps for a package between oS 13.1 and SLE12. Conveniently, they all start at 3.1.0 :)

  cd ~/github-checkout/yast
  grep ^Version yast-*/package/*.spec | sort -k2 -V -r

  yast-installation/package/yast2-installation.spec:Version: 3.1.97
  yast-yast2/package/yast2.spec:Version: 3.1.82
  yast-registration/package/yast2-registration.spec:Version: 3.1.82
  yast-network/package/yast2-network.spec:Version: 3.1.67
  yast-bootloader/package/yast2-bootloader.spec:Version: 3.1.61
  yast-autoinstallation/package/autoyast2.spec:Version: 3.1.41
  yast-storage/package/yast2-storage.spec:Version: 3.1.33
  yast-users/package/yast2-users.spec:Version: 3.1.27
  yast-packager/package/yast2-packager.spec:Version: 3.1.25
  yast-theme/package/yast2-theme.spec:Version: 3.1.21

2) Features

Count "FATE" in the changelog. Cheat by going over the whole changelog, not cutting off at oS 13.1.

  grep --ignore-case --count fate yast*/package/*.changes | sort -t: -k2 -n -r

  yast-installation/package/yast2-installation.changes:126
  yast-network/package/yast2-network.changes:69
  yast-yast2/package/yast2.changes:60
  yast-storage/package/yast2-storage.changes:60
  yast-packager/package/yast2-packager.changes:50
  yast-bootloader/package/yast2-bootloader.changes:43
  yast-autoinstallation/package/autoyast2.changes:33
  yast-pkg-bindings/package/yast2-pkg-bindings.changes:19
  yast-users/package/yast2-users.changes:17
  yast-registration/package/yast2-registration.changes:14

3) Bugs

Analogous to Features.

  grep --ignore-case --count bnc yast*/package/*.changes | sort -t: -k2 -n -r

  yast-installation/package/yast2-installation.changes:392
  yast-network/package/yast2-network.changes:379
  yast-yast2/package/yast2.changes:340
  yast-storage/package/yast2-storage.changes:338
  yast-packager/package/yast2-packager.changes:291
  yast-bootloader/package/yast2-bootloader.changes:234
  yast-autoinstallation/package/autoyast2.changes:150
  yast-registration/package/yast2-registration.changes:128
  yast-printer/package/yast2-printer.changes:123
  yast-country/package/yast2-country.changes:116

Have you noticed Features and Bugs have the exact same packages in the top 7 places?

--
Martin Vidner, Cloud & Systems Management Team
http://en.opensuse.org/User:Mvidner
On Thu, Jul 03, 2014 at 03:52:29PM +0200, Martin Vidner wrote:
What to refactor
I have used dirty cheap tricks to produce some initial metrics to help us decide what to refactor, on a package level. Pepa has also suggested other metrics, but they work on file level and are not so cheap.
Top 10 results are inlined, complete in attachments.
2) Features
3) Bugs
Have you noticed Features and Bugs have the exact same packages in top 7 places?
Nice ;) Did you check how features and bugs correlate to code size?

Regards,
Arvin
On Thu, Jun 12, 2014 at 05:02:24PM +0200, Josef Reidinger wrote:
On Thu, 12 Jun 2014 16:30:06 +0200 Lukas Ocilka <lukas.ocilka@suse.com> wrote:
[...]
- As we will have to add test cases for everything we refactor, should we also move code from "clients" and "includes" to libraries to make them easier to test?
I think it should not be main reason. Main reason should be to reusability of code. I often found that same things is done on multiple places and often it contain bugs. So for me main reason for libraries is reusability. If you have small libraries, it is easier to reuse it then full blown clients and includes. Beside include create fat models in the end, so I do not like it much as it breaks single responsibility principle. In the end it can end as in Bootloader where each module have over 100 methods from low level stuff from specific includes to high level UI dialogs from different include.
For me personally it is much easier to understand if I have e.g. three libraries that one contain dialog, second current configuration and third specific configuration element then one module that mix all of it.
We should do whatever helps us write meaningful tests. I think this will probably involve moving code out of clients and includes, but that's a means, not a goal.
- How and where to run CI tests for all supported branches?
Because we now uses rake osc:build with own chroot, then for SLE-12 and any later branch we can just use two CI we currently have - one external and one internal.
Where: on the existing CI servers. How: just add jobs for the branches. Fairly straightforward, I think.
- How to measure the success? I'd love to see automatically generated metrics and code coverage. Metrics could help us to identify the most-rotten pieces.
I played with https://codeclimate.com/ e.g. https://codeclimate.com/github/yast/yast-network or https://codeclimate.com/github/yast/yast-registration
I think it is quite nice tool, but for yast usage is not enough sensitive as often you improve a lot file and still it is F :) Another problem is that it is just one metric, so it is not much robust.
Option can be use metric_fu to create regular graphs for code quality and it contain various metrics ( each metric is own separated tool), so it works better. We used it e.g. for SLMS - http://docserv.suse.de/prague/slms/measures/ ( it actually still works :)
(It does, but it was last run in January, and there are so many X axis labels that they are illegible.)
metric_fu - https://github.com/metricfu/metric_fu ( original one looks not active, but this part is quite active and they add also other metrics ).
Other nice tool for coding conventions is e.g. rubocop, which is used by scc team, that report coding conventions problems.
Whatever tool we use, it is important to automate the reports into the CI server and produce an (improving) trend graph. In addition to source code metrics, I guess better code would be reflected in bug and feature counts. We should run the metrics at the start and then set a reasonable number to work toward.
- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer, testing this image automatically is just another logical step.
If we do it before submit to target project it is problematic, as frequency of our changes are quite high, so we have trouble to build DVD and also it take some time to run test. I think better way is to use staging and each failure fix with test unit test. So we prevent regression and fix only new problems ( like changed underlaying stuff ).
- Additional step is to run such tests for all supported products since we have enabled users to install updates during installation - this is connected to refactoring only partly, it's essential even if we haven't refactored a single line of code
for rspec test it is not problem as we run it in chroot in target environment. for others it depends on implementation.
Yes, we should do integration and regression tests, as you write. I don't see a problem, but we will know only when we start.

Thanks for starting the discussion! I was starting to see only bugs in the SLE work.

--
Martin Vidner, Cloud & Systems Management Team
http://en.opensuse.org/User:Mvidner
On 06/12/2014 05:02 PM, Josef Reidinger wrote:
On Thu, 12 Jun 2014 16:30:06 +0200 Lukas Ocilka <lukas.ocilka@suse.com> wrote:
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
From what I see in current development, license handling, the slideshow, and the UI libraries deserve a better API that can benefit from Ruby.
Something that immediately came to my mind the first time I saw Yast UI code was how "un-rubyist" it looked when compared to Shoes [1].

Btw, I have just found that there is a "Shoes 4" project [2] to keep Shoes alive. I guess they have improved the API even further.

Just my 2 cents.

[1] http://shoesrb.com/walkthrough.html
[2] https://github.com/shoes/shoes4

--
Ancor González Sosa
openSUSE Team at SUSE Linux GmbH
On Thu, Jun 12, 2014 at 04:30:06PM +0200, Lukas Ocilka wrote:
The current Yast code that we will have in SLE 12 is partly automagically translated from YCP to a kind of Ruby (also called RYCP) and is still using many YCP-like methods for backward compatibility. Partly, because we have already cleaned up some pieces when touching them, following the Boy-Scout Rule [#1], and so they feel quite OK. Of course, we will not be able to clean up everything we wanted, and when SLE 12 is out, we will have to maintain the codebase for years.
I want to prototype a tool to help with converting Rubified YCP to idiomatic Ruby. I want to try different levels:

- An "aggressive" conversion, which converts *all* Ops and Builtins calls in a file, assuming tests are in place to catch bugs.
- A "safe" conversion which leaves functionality the same even if the dreaded nil sneaks in. Like replacing Builtins.getenv(foo) with ENV[foo]. Or using a simple static analysis to deduce that a variable cannot be nil, like in

    @help = ""
    @help = Ops.add(@help, "Part 1 of 42...")
    @help = Ops.add(@help, "Part 2 of 42...")

David sent me a link to http://whitequark.org/blog/2013/04/26/lets-play-with-ruby-code/ which points to a library that was used to convert between RSpec 2 and RSpec 3 (which is a nontrivial task defeating regex hacks).
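For illustration, roughly what the "safe" level would produce for the snippets above (my own sketch of the target form, not the output of an existing tool):

  # before (generated RYCP)
  @help = ""
  @help = Ops.add(@help, "Part 1 of 42...")
  lang = Builtins.getenv("LANG")

  # after (idiomatic Ruby; valid because @help is known to be a non-nil String)
  @help = ""
  @help += "Part 1 of 42..."
  lang = ENV["LANG"]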
[rest of the mail] will reply tomorrow -- Martin Vidner, Cloud & Systems Management Team http://en.opensuse.org/User:Mvidner
I want to prototype a tool to help with converting Rubified YCP to idiomatic Ruby.
I personally prefer to do it by hand, because it forces me to write at least basic tests for the touched code (you know, test coverage in network is poor). Moreover, I usually rewrite the touched code more deeply, e.g. when it looks like:

  def method
    if condition
      # method body
    end
  end

Michal
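The deeper rewrite is presumably the usual guard-clause form (a guess at the intended result, not from the original mail):

  def method
    return unless condition

    # method body
  end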
On 12.06.2014 18:17, Martin Vidner wrote:
On Thu, Jun 12, 2014 at 04:30:06PM +0200, Lukas Ocilka wrote:
The current Yast code that we will have in SLE 12 is partly automagically translated from YCP to a kind of Ruby (also called RYCP) and is still using many YCP-like methods for backward compatibility. Partly, because we have already cleaned up some pieces when touching them, following the Boy-Scout Rule [#1], and so they feel quite OK. Of course, we will not be able to clean up everything we wanted, and when SLE 12 is out, we will have to maintain the codebase for years.
I want to prototype a tool to help with converting Rubified YCP to idiomatic Ruby.
I want to try different levels:
- An "aggressive" conversion, which converts *all* Ops and Builtin calls in a file assuming tests are in place to catch bugs,
- A "safe" conversion which leaves functoinality the same even if the dreaded nil sneaks in. Like replacing Builtins.getenv(foo) with ENV[foo]. Or using a simple static analysis to deduce that a variable cannot be nil, like in @help = "" @help = Ops.add(@help, "Part 1 of 42...") @help = Ops.add(@help, "Part 2 of 42...")
Getting rid of the Ops and Builtins calls (what about Convert?) would make the code much more readable. But I see a chance only for the "safe" conversion (because of the missing tests).
David sent me a link to http://whitequark.org/blog/2013/04/26/lets-play-with-ruby-code/ which points to a library that was used to convert between RSpec 2 and RSpec 3 (which is a nontrivial task defeating regex hacks)
[rest of the mail] will reply tomorrow
--
Gabriele Mohr
SUSE LINUX Products GmbH
On Thu, Jun 12, 2014 at 04:30:06PM +0200, Lukas Ocilka wrote:
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
Every module I have seen has code that needs refactoring.
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
Very deep, e.g. the target map of storage is an error-prone interface.
- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer, testing this image automatically is just another logical step.
This is the most important point I see. So far adding testcases (which is required!) is often not done because we don't have existing testsuites or changing the code to be testable requires a big rewrite.

Regards,
Arvin
On Tue, 24 Jun 2014 11:59:21 +0200 Arvin Schnell <aschnell@suse.de> wrote:
On Thu, Jun 12, 2014 at 04:30:06PM +0200, Lukas Ocilka wrote:
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
Every module I have seen has code that needs refactoring.
Nothing is perfect. I think we can use the 80:20 rule. The goal is not perfect code, but better code.
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
Very deep, e.g. the target map of storage is a error prone interface.
I absolutely agree. And it is often used in various code. If it is used only from Ruby, we can create quite nice objects, which are encapsulated, linked together and have good methods. Then you can have code like

  disk = Storage.find("UUID=123123123123")
  biggest_swap = disk.partitions.max_by { |p| p.swap? ? p.size : -1 }

or

  can_boot = Storage.find_by_mount_point("/boot").primary?

I know how it looks now using the target map, so I think this would be a nice improvement and also make it easier to keep backward compatibility.
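A rough sketch of how such a wrapper over the target map could look (the hash keys used here are assumptions, not the exact target-map format):

  class Partition
    def initialize(data)
      @data = data
    end

    def swap?
      @data["mount"] == "swap"
    end

    def size
      @data["size_k"].to_i
    end
  end

  class Disk
    def initialize(data)
      @data = data
    end

    def partitions
      (@data["partitions"] || []).map { |part| Partition.new(part) }
    end
  end

  # biggest_swap = Disk.new(target_map["/dev/sda"]).partitions.
  #                  max_by { |p| p.swap? ? p.size : -1 }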
- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer, testing this image automatically is just another logical step.
This is the most important point I see. So far adding testcases (which is required!) is often not done because we don't have existing testsuites or changing the code to be testable requires a big rewrite.
Agree. And I hope that changing the code to be easier to test also allows us to improve code quality, as code that is easy to test is usually:

- more separated
- less coupled
- more isolated
- motivating reuse of existing code, as it is already tested ;)
Regards, Arvin
Josef
On 12.6.2014 16:30, Lukas Ocilka wrote:
What we will need is the "Refactoring and Cleanup" (RAC) phase, to be a regular part of the development.
Great! That step was always missing in the Yast development.
- Which parts deserve refactoring (e.g., those that we often touch, those that are not understandable anymore, the buggy ones, ...)? Which are your favorites?
It's hard to say; there are many bad modules... But I think we should start in yast2 (the main library): the code there is shared, so it should look nice and be well tested and documented.
- How deep should the refactoring be? We have to keep the current "API", but what should be considered the API, as we might be the only users?
When we decide to refactor from the converted RYCP code to clean Ruby, we will need to change the API anyway (use objects, follow traditional Ruby naming conventions, etc...). And if we want to reuse the Yast code even more, we will need to move away from the old Yast architecture and build clean libraries (available as gems). So I would not be bothered about the API changes; the refactoring will very likely break the existing code anyway if we want to have a nice Ruby (!) API.
- As we will have to add test cases for everything we refactor, should we also move code from "clients" and "includes" to libraries to make them easier to test?
Yes, it's a good idea to get rid of the (RYCP) includes and clients. They are hard to test and they are a relict of the Yast architecture which is not known to regular Ruby developers.
- How and where to run CI tests for all supported branches?
Jenkins can be easily enhanced; after switching to the "osc" backend we can easily add other Git branches and OBS build targets.

I'd also like to use Travis in addition to Jenkins. I currently use it in the registration module. The main advantage is that it runs the tests _before_ a pull request (branch) is merged. Another advantage is that the code coverage tools can be easily integrated (see "coveralls"), so you can also see the code coverage change _before_ merging a PR. And the status is updated on the fly, even after adding new commits to the PR (see e.g. https://github.com/yast/yast-registration/pull/113).

So you can easily check whether the code coverage is still OK and ask the PR author to add more tests when the code coverage decreases too much. (Or you can specify the minimal code coverage; if it drops below that, the tests fail.)

You can actually run almost anything you want at Travis, e.g. suse-connect runs the "rubocop" tool, which checks the coding style. The build fails if you do not follow the predefined coding standard.

However, testing Yast code is limited at Travis; the problem is that there is no Yast available, so you can only test code which does not use other Yast modules, or you need to mock heavily... I'd like to improve this at the next workshop/hackweek.
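The coverage hook itself is small; roughly the standard SimpleCov/Coveralls setup (details may differ per module):

  # spec/spec_helper.rb
  if ENV["COVERAGE"]
    require "simplecov"
    SimpleCov.start

    # send the results to coveralls.io only when running inside Travis
    if ENV["TRAVIS"]
      require "coveralls"
      SimpleCov.formatter = Coveralls::SimpleCov::Formatter
    end
  end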
- How to measure the success? I'd love to see automatically generated metrics and code coverage. Metrics could help us to identify the most-rotten pieces.
I have added the code coverage and metrics badges to the main README.md of the registration module, see https://github.com/yast/yast-registration - so you can get a quick overview just by looking at GitHub.
- Where and how to run automatic integration tests? Will openQA help? We could build our own installation image the same way we did it for the New Installer, testing this image automatically is just another logical step.
Building our own image is cumbersome and requires quite a lot of maintenance (you list the packages included in the ISO, and if a package is renamed or there is a new dependency then you need to update it). Maybe there is some Build Service magic for this, but from my experience it's too much work. For the NI it was the only way, but I'd prefer using e.g. standard Factory images instead of building our own image whenever possible.
- Additional step is to run such tests for all supported products since we have enabled users to install updates during installation - this is connected to refactoring only partly, it's essential even if we haven't refactored a single line of code
Um, I think this is rather something for QA automation; it tests the whole distribution, not only Yast...

--
Best Regards
Ladislav Slezák
Yast Developer, SUSE LINUX, s.r.o.
participants (8)
- Ancor Gonzalez Sosa
- Arvin Schnell
- Gabriele Mohr
- Josef Reidinger
- Ladislav Slezak
- Lukas Ocilka
- Martin Vidner
- Michal Filka