I have successfully installed this update on a Fedora 38 laptop and generated multiple HTML reports with it. I'm happy that with this update I'm now able to generate reports from my Automatus runs, which greatly helps with debugging various issues as a content developer. The update is generally functional. I have reported a new small issue here: https://github.com/OpenSCAP/openscap-report/issues/212. Thanks for the great job!
I have installed this update on my Fedora 38 laptop, performed a scan using the xccdf_org.ssgproject.content_profile_cusp_fedora profile from SSG, and then generated the report using oscap-report. Everything went smoothly and the generated report looks sane. I like the new improvements very much. Keep up this great work!
I was able to generate an HTML report from my local scan. However, with some more advanced content I discovered an issue, which I reported in https://github.com/OpenSCAP/openscap-report/issues/149. Overall, the update seems generally functional.
I was able to successfully install the update and run basic use cases on an F35 Workstation. Everything was OK.
The rpminspect "filesize" failure is a false positive, because all the VERIFY results are in files generated by Doxygen, which is probably caused by a different Doxygen version.
The failure of tests.openscap/Sanity/oscap-builds-ssg is caused by a broken utils/rule_dir_json.py, which is part of SSG, not OpenSCAP. OpenSCAP works normally here.
In tests.openscap/Sanity/smoke-test there is a problem: we can't see any meaningful output in the CTest test results. CTest passed during upstream CI, but we will need to investigate this downstream failure.
Hi @mattf, thank you very much for your great feedback. I was thinking that adding "Epoch: 1" would solve the problem. Would it be OK to revert the commit that adds "Epoch: 1"? Is there a way I can test the package for these kinds of issues before I submit a new build? E.g., can I reproduce this situation with a scratch build?