Edits for #7417 - Detailed test status is not available in the Boost.Test log (status, assertions, passed) and so live test case status cannot be tracked #16
Conversation
To make this pull request easier to understand, I'm adding an expanded description here from the original trac ticket. Currently it is not possible to use the Boost.Test output to show detailed live test progress. By detailed I mean the test status, the number of passed assertions, and the total number of assertions. This is because:
The attached patch adds additional test status information to the information sent to This allows arbitrary test tools to be developed that interpret the test output for live test progress reporting. One such tool is cuppa, which reads this additional info and prints live test progress to the console. Example output for a test suite with two test cases (one with assertions and one without) is: Compare this with the current output, which has no test result details: These changes have no impact on existing tests and simply provide additional information that is already known, but not displayed.
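The exact log line format added by the patch is elided above. As a hedged sketch, a tool like cuppa could track live progress by regex-matching an enhanced "Leaving test case" line. The line format below is a hypothetical illustration for this purpose, not the actual patch output:

```python
import re

# Hypothetical enhanced log line; the real format produced by the patch
# is not shown in this thread.
line = ('Leaving test case "my_test"; testing time: 12ms; '
        'test status: passed; assertions passed: 42 of 42')

# Regex for on-the-fly progress tracking of the hypothetical format.
pattern = re.compile(
    r'Leaving test case "(?P<name>[^"]+)".*'
    r'test status: (?P<status>\w+); '
    r'assertions passed: (?P<passed>\d+) of (?P<total>\d+)'
)

m = pattern.search(line)
if m:
    print(m.group('name'), m.group('status'),
          m.group('passed'), m.group('total'))
```

Because each test case's result arrives as a single line, a consumer can update its display the moment the line appears, without waiting for an end-of-run report.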
This change needs to be discussed. There are a number of things to consider. Here are some thoughts on the subject:
Hi Gennadiy, thanks for looking at this. Let me try to address your points. All this information is already available in the test reports that we receive at the end, but we do not have it available while tests are being processed. XML is not suitable for processing on the fly while tests are being run; the HRF output provides this opportunity. This use case is particularly important for long-running tests. Knowing that 1 out of 2,000,000 assertions failed, as opposed to 1,900,000 assertions, matters to our developers as much as whether the test simply passed or failed.

I see two concerns here. The first is making the output generally useful on the fly to direct users; pretty, coloured output has value there. I don't have any opinions on that, as we always use tools to interpret the output, either inside an IDE like Code::Blocks, or from SCons via cuppa. XML cannot be used in those cases (and cannot be processed on the fly besides). It is the second case that interests me the most and is the subject of my pull request: providing all test information on completion as a single event that can be interpreted by a human or a script (we use simple regexes).

Another very important use case for this is running an application in diagnostic mode and making use of the Boost.Test output (and the test tools in the code). That is the subject of my final pull request #17.

I believe I have adopted a minimalistic approach to this by adding a "test status" section to the `Leaving test ". More or less the basic format is: In my mind this information should have been there in the first place; however, I accept that some may find it verbose, as indeed some find the Boost.Timer output verbose. If there is a firm belief that it is too verbose, then my preference would be to allow suppression of the information through a runtime option (or, conversely, to make it available only when an option is specified, so it is not shown by default).
For example, to get output that has just enough info to track the tests, with the patch applied I specify this on the command line: so adding an extra

Having said that, I'd be interested in how you might make the test status output less verbose without losing information; I am not sure there is much that can be removed. I need the information there and wanted to keep it somewhat human readable, so I retained the basic phrasing such as

To summarise, then: I want the information for processing on the fly. I see that as a separate effort from overhauling and prettifying the HRF output. I have a similar issue with cuppa; in that case I allow both 'raw' output and coloured, prettified output. When being used from within an IDE, raw output is desired so the IDE can regex-match against the output.

Fyi, the attached image shows some sample output from cuppa that is making use of the additional information in the HRF output. Colour aside, you can see the additional assertion and test case counts. With test failures you get something similar, but with some red. To be clear though, this is not about creating pretty output; this is about processing and tracking test progress on the fly. It so happens we also prettify it.
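The long-running-test scenario described above (caring whether 1 or 100,000 of 2,000,000 assertions failed) amounts to keeping a running tally over per-test-case status lines. A minimal sketch of such an accumulator follows; the line format is again hypothetical, only the idea of tallying counts on the fly comes from the discussion:

```python
import re

# Hypothetical per-test-case status lines (the patch's real format is elided
# earlier in the thread).
LOG = """\
Leaving test case "alpha"; test status: passed; assertions passed: 1999999 of 2000000
Leaving test case "beta"; test status: failed; assertions passed: 1 of 2
"""

STATUS = re.compile(
    r'test status: (?P<status>\w+); '
    r'assertions passed: (?P<passed>\d+) of (?P<total>\d+)')

def tally(log_text):
    """Return running totals: (cases, cases_passed, assertions_passed, assertions_total)."""
    cases = cases_passed = passed = total = 0
    for line in log_text.splitlines():
        m = STATUS.search(line)
        if not m:
            continue  # ignore unrelated log lines
        cases += 1
        cases_passed += m.group('status') == 'passed'
        passed += int(m.group('passed'))
        total += int(m.group('total'))
    return cases, cases_passed, passed, total

print(tally(LOG))  # → (2, 1, 2000000, 2000002)
```

In a live setting the same function body would run per line as output streams in, rather than over a completed log.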
I still believe that a human and an automated runner need two different formats. As a side note: I am against the introduction of some obscure CLAs targeting

Gennadiy

On Mon, Jan 12, 2015 at 5:53 PM, ja11sop notifications@github.com wrote:
Gennadiy Rozental
(Sorry for the slow reply, pretty busy atm.) My requirements for such a format would be that it is easy to handle with Unix-like command-line processing tools and works fine with regexes and so on. Given the likely tools, I would really not want XML as the format, but I think line-delimited JSON would work well. It offers the same capabilities but with much easier handling. JSON is also pretty common these days and retains a degree of human readability; some even use it in logs.

To explore how this could work (and see if I can easily update my toolchain to handle it) I'll update my pull request to use a new format as you've suggested and try JSON. I'll do some experimentation first, and if I envisage any issues I'll ask for advice. I think you are right to want some data-specific formatting as opposed to relying on carefully formatted text; it is more extensible and easier to manage.

Jamie
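As a rough sketch of the line-delimited JSON idea, assuming hypothetical field names (no schema was agreed in this thread), one compact JSON object per line keeps the stream both regex-friendly and parseable without a document-level parser:

```python
import json

# Hypothetical test events; the field names ("event", "name", "status", ...)
# are illustrative, not a format anyone in the thread committed to.
events = [
    {"event": "enter_test_case", "name": "alpha"},
    {"event": "leave_test_case", "name": "alpha",
     "status": "passed", "assertions_passed": 3, "assertions_total": 3},
]

# Emitting: one compact object per line.
stream = "\n".join(json.dumps(e) for e in events)

# Consuming on the fly: each line stands alone, so a consumer can act on
# events as they arrive instead of waiting for a complete document.
summaries = []
for line in stream.splitlines():
    e = json.loads(line)
    if e["event"] == "leave_test_case":
        summaries.append(f'{e["name"]}: {e["status"]} '
                         f'({e["assertions_passed"]}/{e["assertions_total"]})')
print("\n".join(summaries))
```

This is the property XML lacks for this use case: an XML document is only well formed once its root element closes, so a conforming parser cannot cheaply hand back per-test results mid-run.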
XML is also a machine-readable format. We want to emphasise the dynamic nature On XML vs. JSON: I am not a big fan of the latter, but I can see the attraction. I
Agreed, but quite apart from that I believe XML to be a poor choice for this. It doesn't work well with command-line tools, scripting, or regex-like searches. It also usually requires heavyweight tools to handle and is brittle to parse.
Gennadiy - my apologies, the message that appears in GitHub is missing your last remark (which I just saw in the email), namely:
I could live with that.
Please see my last comments in https://svn.boost.org/trac/boost/ticket/7417: we can now programmatically add a dedicated logger alongside the other loggers. This will let you develop the logger you want and include it in your tests. We have also implemented a JUnit logger, which is well supported by a large set of tools. All in all, I am in favour of closing this PR and, if needed, considering a new one.
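For completeness, a JUnit-style report is straightforward to post-process with stock tooling, which is part of the appeal mentioned above. This is a minimal sketch using only Python's standard library against a hand-written sample report; the actual attribute set emitted by Boost.Test's JUnit logger may differ:

```python
import xml.etree.ElementTree as ET

# Minimal hand-written JUnit-style report, for illustration only.
REPORT = """\
<testsuite name="example_suite" tests="2" failures="1" errors="0">
  <testcase name="alpha" time="0.001"/>
  <testcase name="beta" time="0.002">
    <failure message="check failed"/>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(REPORT)
print(suite.get("name"), suite.get("tests"), suite.get("failures"))

# A test case failed if it carries a <failure> child element.
failed = [tc.get("name")
          for tc in suite.iter("testcase")
          if tc.find("failure") is not None]
print(failed)  # → ['beta']
```

Note this still only works after the run completes, which is why it complements rather than replaces the live-progress discussion above.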

Pull request for https://svn.boost.org/trac/boost/ticket/7417. Detailed test status is not available in the Boost.Test log (status, assertions, passed), and so live test case status cannot be tracked or processed by other tools such as https://github.com/ja11sop/cuppa. Please see the referenced ticket #7417 for full details.