author     Richard Purdie <richard.purdie@linuxfoundation.org>    2019-02-16 18:13:00 +0000
committer  Richard Purdie <richard.purdie@linuxfoundation.org>    2019-02-21 12:31:50 +0000
commit     ff2c029b568f70aa9960dde04ddd207829812ea0 (patch)
tree       5d80afe2e19d699d58cb424fe8dd97b3294f47f7 /scripts/lib/resulttool/template/test_report_full_text.txt
parent     f24dc9e87085a8fe5410feee10c7a3591fe9d816 (diff)
download   openembedded-core-contrib-ff2c029b568f70aa9960dde04ddd207829812ea0.tar.gz
resulttool: Improvements to allow integration to the autobuilder
This is a combined patch of the various tweaks and improvements I
made to resulttool:
* Avoid subprocess.run() as it's a Python 3.6 feature and we
have autobuilder workers running 3.5.
* Avoid python keywords as variable names
* Simplify dict accesses using .get()
* Rename resultsutils -> resultutils to match the resultstool ->
resulttool rename
* Formalised the handling of "file_name" to "TESTSERIES", which the code
will now add into the json configuration data if it's not present, based
on the directory name.
* When we don't have failed test cases, print something saying so
instead of an empty table
* Tweak the table headers in the report to be more readable (reference
"Test Series" instead of file_id and ID instead of results_id)
* Improve/simplify the max string length handling
* Merge the counts and percentage data into one table in the report
since printing two reports of the same data confuses the user
* Removed the confusing header in the regression report
* Show matches, then regressions, then unmatched runs in the regression
report, and remove chatty, unneeded output
* Try harder to "pair" up matching configurations to reduce noise in
the regressions report
* Abstracted the "mapping" table concept used for pairing in the
regression code into general code in resultutils
* Created multiple mappings for results analysis, results storage and
'flattening' results data in a merge
* Simplify the merge command to take a source and a destination,
letting the destination be a directory or a file, removing the need for
an output directory parameter
* Add the 'IMAGE_PKGTYPE' and 'DISTRO' config options to the regression
mappings
* Have the store command place the testresults files in a layout from
the mapping, making commits into the git repo for results storage more
useful for simple comparison purposes
* Set the oe-git-archive tag format appropriately for oeqa results
storage (and simplify the commit messages closer to their defaults)
* Fix oe-git-archive to use the commit/branch data from the results file
* Cleaned up the command option help to match other changes
* Follow the model of git branch/tag processing used by oe-build-perf-report
and use that to read the data via git show, avoiding branch changes
* Add ptest summary to the report command
* Update the tests to match the above changes
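Two of the smaller cleanups above can be sketched briefly; the helper names here are illustrative assumptions, not the actual resulttool code:

```python
import subprocess

def run_git(args, cwd="."):
    # subprocess.check_output() has been available since Python 2.7, so it
    # works on the 3.5 autobuilder workers; subprocess.run() is 3.6+ only.
    return subprocess.check_output(["git"] + args, cwd=cwd).decode("utf-8")

def get_testseries(configuration):
    # dict.get() with a default replaces an explicit key-presence check:
    #   if "TESTSERIES" in configuration: ... else: ...
    return configuration.get("TESTSERIES", "unknown")
```

Both patterns keep the code compatible with the oldest Python on the workers while staying idiomatic.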
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
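The TESTSERIES formalisation described above (falling back to the directory name when the json configuration lacks the key) might look roughly like this; the function name and data layout are assumptions for illustration:

```python
import os

def add_testseries(results, json_path):
    # Assumed sketch: when a result's configuration has no TESTSERIES key,
    # derive it from the name of the directory holding the results file.
    testseries = os.path.basename(os.path.dirname(json_path))
    for res in results.values():
        config = res.setdefault("configuration", {})
        if "TESTSERIES" not in config:
            config["TESTSERIES"] = testseries
    return results
```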
Diffstat (limited to 'scripts/lib/resulttool/template/test_report_full_text.txt')
-rw-r--r--  scripts/lib/resulttool/template/test_report_full_text.txt | 33 +++++++++++++++++++++------------
1 file changed, 21 insertions(+), 12 deletions(-)
diff --git a/scripts/lib/resulttool/template/test_report_full_text.txt b/scripts/lib/resulttool/template/test_report_full_text.txt
index bc4874ba4b..5081594cf2 100644
--- a/scripts/lib/resulttool/template/test_report_full_text.txt
+++ b/scripts/lib/resulttool/template/test_report_full_text.txt
@@ -1,35 +1,44 @@
 ==============================================================================================================
-Test Report (Count of passed, failed, skipped group by file_dir, result_id)
+Test Result Status Summary (Counts/Percentages sorted by testseries, ID)
 ==============================================================================================================
 --------------------------------------------------------------------------------------------------------------
-{{ 'file_dir'.ljust(max_len_dir) }} | {{ 'result_id'.ljust(max_len_result_id) }} | {{ 'passed'.ljust(10) }} | {{ 'failed'.ljust(10) }} | {{ 'skipped'.ljust(10) }}
+{{ 'Test Series'.ljust(maxlen['testseries']) }} | {{ 'ID'.ljust(maxlen['result_id']) }} | {{ 'Passed'.ljust(maxlen['passed']) }} | {{ 'Failed'.ljust(maxlen['failed']) }} | {{ 'Skipped'.ljust(maxlen['skipped']) }}
 --------------------------------------------------------------------------------------------------------------
-{% for report in test_count_reports |sort(attribute='test_file_dir_result_id') %}
-{{ report.file_dir.ljust(max_len_dir) }} | {{ report.result_id.ljust(max_len_result_id) }} | {{ (report.passed|string).ljust(10) }} | {{ (report.failed|string).ljust(10) }} | {{ (report.skipped|string).ljust(10) }}
+{% for report in reportvalues |sort(attribute='sort') %}
+{{ report.testseries.ljust(maxlen['testseries']) }} | {{ report.result_id.ljust(maxlen['result_id']) }} | {{ (report.passed|string).ljust(maxlen['passed']) }} | {{ (report.failed|string).ljust(maxlen['failed']) }} | {{ (report.skipped|string).ljust(maxlen['skipped']) }}
 {% endfor %}
 --------------------------------------------------------------------------------------------------------------
 
+{% if haveptest %}
 ==============================================================================================================
-Test Report (Percent of passed, failed, skipped group by file_dir, result_id)
+PTest Result Summary
 ==============================================================================================================
 --------------------------------------------------------------------------------------------------------------
-{{ 'file_dir'.ljust(max_len_dir) }} | {{ 'result_id'.ljust(max_len_result_id) }} | {{ 'passed_%'.ljust(10) }} | {{ 'failed_%'.ljust(10) }} | {{ 'skipped_%'.ljust(10) }}
+{{ 'Recipe'.ljust(maxlen['ptest']) }} | {{ 'Passed'.ljust(maxlen['passed']) }} | {{ 'Failed'.ljust(maxlen['failed']) }} | {{ 'Skipped'.ljust(maxlen['skipped']) }} | {{ 'Time(s)'.ljust(10) }}
 --------------------------------------------------------------------------------------------------------------
-{% for report in test_percent_reports |sort(attribute='test_file_dir_result_id') %}
-{{ report.file_dir.ljust(max_len_dir) }} | {{ report.result_id.ljust(max_len_result_id) }} | {{ (report.passed|string).ljust(10) }} | {{ (report.failed|string).ljust(10) }} | {{ (report.skipped|string).ljust(10) }}
+{% for ptest in ptests %}
+{{ ptest.ljust(maxlen['ptest']) }} | {{ (ptests[ptest]['passed']|string).ljust(maxlen['passed']) }} | {{ (ptests[ptest]['failed']|string).ljust(maxlen['failed']) }} | {{ (ptests[ptest]['skipped']|string).ljust(maxlen['skipped']) }} | {{ (ptests[ptest]['duration']|string) }}
 {% endfor %}
 --------------------------------------------------------------------------------------------------------------
+{% else %}
+There was no ptest data
+{% endif %}
+
 
 ==============================================================================================================
-Test Report (Failed test cases group by file_dir, result_id)
+Failed test cases (sorted by testseries, ID)
 ==============================================================================================================
+{% if havefailed %}
 --------------------------------------------------------------------------------------------------------------
-{% for report in test_count_reports |sort(attribute='test_file_dir_result_id') %}
+{% for report in reportvalues |sort(attribute='sort') %}
 {% if report.failed_testcases %}
-file_dir | result_id : {{ report.file_dir }} | {{ report.result_id }}
+testseries | result_id : {{ report.testseries }} | {{ report.result_id }}
 {% for testcase in report.failed_testcases %}
 {{ testcase }}
 {% endfor %}
 {% endif %}
 {% endfor %}
---------------------------------------------------------------------------------------------------------------
\ No newline at end of file
+--------------------------------------------------------------------------------------------------------------
+{% else %}
+There were no test failures
+{% endif %}