This may be more a point for discussion than a simple question. I am wondering how other people have implemented their test execution workflow. The way of working we have in mind doesn't seem possible without modifications, so I am very curious how you have implemented this.
Our goal is as follows:
Prior to a test execution phase, we want to create test sets which contain all test cases for a specific hardware/software combination. There are multiple sets per phase, and they can (partly) contain the same test cases.
-> How does the tester keep track of which test cases are still to be executed during testing?
-> How does the tester identify which test cases should be re-tested?
What we were planning to do is the following:
- We use the Test Plan item to define to which project milestone different tests belong.
- Different teams test within the same Test Plan. The teams want to be able to distinguish which test cases belong to them and which do not. We will use a Test Objective item for this distinction and to track test progress.
- Below each "team" Test Objective are "test set" Test Objectives.
- Tests are linked to the relevant Test Objective(s).
So far, so good. We have now created our test sets.
When test execution starts, the tester selects a set of test cases from the Test Objective and creates a Test Session from that selection. More often than not, the tester will not complete all test cases from that Objective in one run, so I would like an overview of what remains. However, it seems that during test execution it is not possible to use the Test Objective to identify which test cases are still to be done. There is also no means of retrieving all the failed test cases.
Another option would be to use the Test Session as a superset of all test cases to be executed, and to create a new Test Session from that first session. However, the tester then lacks a complete overview of which tests have not yet been executed.
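To make the desired bookkeeping concrete, here is a minimal sketch of the tracking logic I am after, in plain Python. All names (`TestObjective`, `record_session`, the verdict strings) are illustrative assumptions, not the tool's actual API: the point is simply that "remaining" is the objective's test cases minus everything executed so far, and "re-test candidates" are the cases whose latest verdict is a failure.

```python
# Hypothetical sketch of test-set bookkeeping across sessions.
# These class and method names are illustrative, not a real tool API.

PASSED, FAILED = "passed", "failed"

class TestObjective:
    def __init__(self, name, test_cases):
        self.name = name
        self.test_cases = set(test_cases)
        self.results = {}  # test case -> latest verdict

    def record_session(self, session_results):
        """Merge one Test Session's verdicts into the objective,
        overwriting earlier verdicts for re-tested cases."""
        self.results.update(session_results)

    def remaining(self):
        """Test cases not yet executed in any session."""
        return self.test_cases - set(self.results)

    def failed(self):
        """Test cases whose latest verdict is 'failed', i.e. re-test candidates."""
        return {tc for tc, v in self.results.items() if v == FAILED}

# Example: one partial session against a four-case test set.
obj = TestObjective("Team A / HW rev 2", ["TC-1", "TC-2", "TC-3", "TC-4"])
obj.record_session({"TC-1": PASSED, "TC-2": FAILED})
print(sorted(obj.remaining()))  # TC-3 and TC-4 still to do
print(sorted(obj.failed()))     # TC-2 needs a re-test
```

This is exactly the overview I would like the tool to give the tester directly on the Test Objective, without manual tracking on the side.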
Again, I am very interested in hearing how you are doing this! Thanks in advance!