Hi,

I will top-post as this is quite a long email, but I am keeping it below for reference.

I think we should first agree on what the goal of such tests is. If it is a smoke test, then it should run almost automatically, without writing too much (ideally nothing: just run the module in ncurses, check that there is no error and exit). If the goal is to run integration tests, then we already have tools around, and the question is why we should use this one; most importantly, we have to decide what to use, as having three different integration test tools is very confusing and time consuming.

So e.g. for the rear example, a smoke test can find the problem. Of course an integration test would also detect it, but the question is why write a new tool for that when we can have such a test even in openQA, which we are already using.

To sum it up: if we use it as a smoke test, then it has to run without modifying the target module's git repository and run automatically for a given repo (e.g. take the command from the desktop file, start it in ncurses, check for an error and then abort). If we want to use it as an integration test, then we have to properly discuss why to use this and not openQA, as having multiple tools for the same purpose does not look like a good idea to me.

Josef

On Wed, 2 Aug 2017 12:59:19 +0200
Ladislav Slezak <lslezak@suse.cz> wrote:
Hi all,
Short Background
----------------
I fixed a trivial bug in the Rear YaST module [1]. The module did not work at all and crashed with "Internal error" right at the start because some files were missing in the RPM package.
The nasty thing is that the bug went unnoticed for 6(!!) months. And we got a bug report just after the SP3 release... :-(
Improving it
------------
So I was thinking about how to improve the situation. If anybody had done a smoke test and just tried to run the module, the bug would have been found.
That means we would need an integration test. openQA was my first idea, but it's hard to maintain and we would get the result too late.
But since we use Docker at Travis and we already build the YaST packages there, it should be easy to install the built package and run it.
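For illustration, a minimal sketch of the idea (the image name, the container handling and the RPM path are just assumptions, not the real Travis setup):

  # build the Docker image as usual, then install the freshly built RPM
  # inside a container (the module itself is started later, see below)
  docker build -t yast-rear-image .
  docker run --name yast_test -dt yast-rear-image bash
  docker exec yast_test sh -c \
    "zypper --non-interactive install /usr/src/packages/RPMS/*/yast2-rear-*.rpm"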
Proof of Concept
----------------
I have implemented a trivial openQA equivalent using the "tmux" package, which is similar to "screen". That means the tests run in the ncurses UI, which is lightweight and should work fast in Travis.
Currently I can dump the screen to the Travis log (see the examples below), check what's on the screen, send key presses and navigate through the workflow.
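Roughly like this (a simplified sketch; the session name, the sleeps and the keys are illustrative, the real commands are in the script [4]):

  # start the module in a detached tmux session using the ncurses UI
  tmux new-session -d -s yast_test "yast2 rear"
  sleep 5                            # give the UI some time to start
  # dump the current screen to stdout (it ends up in the Travis log)
  tmux capture-pane -t yast_test -p
  # send keystrokes, e.g. Alt-O to press the [OK] button
  tmux send-keys -t yast_test M-o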
For the rear module I can verify that the module starts properly. I can even change a value in the configuration, save it and then start the module again to verify it has been saved properly.
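As a sketch, such a save-and-reload check could look like this (the widget shortcut and the value are made up for illustration):

  # type a value into the (assumed) URL field and save with [OK]
  tmux send-keys -t yast_test M-u    # assumed widget shortcut
  tmux send-keys -t yast_test "nfs://server/backup"
  tmux send-keys -t yast_test M-o
  sleep 3
  # start the module again and verify the value was really saved
  tmux new-session -d -s yast_test2 "yast2 rear"
  sleep 5
  tmux capture-pane -t yast_test2 -p | grep -q "nfs://server/backup" \
    || { echo "ERROR: the value was not saved"; exit 1; }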
This initial run shows that the module is broken [1] and displays the "Internal error" popup as reported in Bugzilla. After merging the fix [2] it starts properly and runs the save-and-load workflow as expected; check the screen dumps in the log [3].
The integration test is implemented in a shell script [4]. It's still just a proof of concept: there are a lot of TODOs, the structure does not look nice, some functions should be moved to a separate shared file, etc.
But it really shows that it could improve the test automation a lot.
Advantages
----------
- Runs at Travis, the tests are run early, even _before_ we actually merge the change
- Uses the ncurses UI and a simple "grep", no fragile screenshot comparison (still not ideal, but it should be good enough at this point)
- You can run the same tests locally using the same Docker image => easy debugging, stable environment (more or less)
- Writing smoke tests is easy even for legacy modules
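E.g. a generic smoke test could be as small as this sketch (the helper name is made up; it just greps the dumped screen for the "Internal error" popup):

  # run "yast2 <module>" in tmux, dump the screen, fail on "Internal error"
  smoke_test() {
    local module="$1" screen
    tmux new-session -d -s smoke "yast2 $module"
    sleep 5
    screen=$(tmux capture-pane -t smoke -p)
    tmux kill-session -t smoke
    echo "$screen"                   # keep the dump in the Travis log
    ! echo "$screen" | grep -q "Internal error"
  }

  smoke_test rear || exit 1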
Disadvantages
-------------
- Cannot run a full installation, only modules in the installed system
- So AutoYaST cannot be tested either. On the other hand, AY allows cloning the system and can apply a profile to the running system, so we could at least partially test the AY functionality at Travis. And many modules support an interactive "configuration" mode.
- Docker is container-based, no HW virtualization is provided, which means HW-related modules cannot be tested.
But we can still run at least simple smoke tests that verify the module starts and displays "No printer detected" or something like that. That still makes sense.
TODO/Ideas
----------
- The common functions should be shared (in yast2-devtools?) and helpers for common testing scenarios should be added.
- A shell script might not be the best tool; maybe rspec or serverspec [5] would be better. On the other hand, a simple shell script has fewer dependencies and could be easily shared with openQA, which could run the same tests to ensure YaST still works when the external dependencies change.
- For mocking external tools in complex scenarios we could use shellmock [6].
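For simple cases even a plain PATH override might be enough; a sketch (whether the module calls the "rear" tool at all is just an assumption here):

  # create a fake "rear" binary which only logs its arguments
  mkdir -p /tmp/mock-bin
  printf '#!/bin/sh\necho "rear called with: $*" >> /tmp/mock-bin/rear.log\n' \
    > /tmp/mock-bin/rear
  chmod +x /tmp/mock-bin/rear
  export PATH="/tmp/mock-bin:$PATH"
  # ... run the test, then check how the external tool was called
  grep "rear called with" /tmp/mock-bin/rear.log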
What do you think about it? Any ideas, comments? Thank you in advance!
[1] https://travis-ci.org/yast/yast-rear/builds/259803414#L681
[2] https://github.com/yast/yast-rear/pull/7
[3] https://travis-ci.org/yast/yast-rear/builds/260086828#L670
[4] https://github.com/yast/yast-rear/pull/8
[5] http://serverspec.org/
[6] https://github.com/gooddata/shellmock