Because test generation and execution are fully automated, model-based testing produces a large number of test results. Analyzing these results remains a manual activity: the expert must determine whether, and how, the system under test is incorrect. This can require significant manual effort.
Similar to test case generation, we would like to reduce the manual effort in the testing process by having the computer aid in test analysis, for example by grouping similar failures or finding patterns in failed test cases. This way, the expert can limit their attention to only the relevant results.
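As a minimal illustration of grouping similar failures, the Python sketch below clusters failure messages by textual similarity using the standard-library difflib. The greedy representative-based grouping, the 0.7 threshold, and the example messages are illustrative assumptions, not part of any proposed approach.

```python
# A minimal sketch of grouping failed test cases by the textual similarity
# of their failure messages. The greedy grouping strategy and the threshold
# are illustrative choices.
from difflib import SequenceMatcher

def group_failures(messages, threshold=0.7):
    groups = []  # each group is a list of similar failure messages
    for msg in messages:
        for group in groups:
            # Compare against the group's first message as its representative.
            if SequenceMatcher(None, group[0], msg).ratio() >= threshold:
                group.append(msg)
                break
        else:
            groups.append([msg])  # no similar group found: start a new one
    return groups

# Hypothetical failure messages from a model-based test run.
failures = [
    "timeout waiting for ACK after request 12",
    "timeout waiting for ACK after request 47",
    "unexpected output 'reset' in state Idle",
]
for group in group_failures(failures):
    print(f"{len(group)} failure(s), e.g.: {group[0]}")
```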
This topic can be taken in various directions. We expect students to explore a single approach that looks promising and to evaluate it in a case study.
Possible research questions:
- How should we systematically interpret model-based testing results, preferably based on a mathematical foundation?
- How can we automate the analysis of test results to diagnose faults in the erroneous system?
- Can we simplify/reduce the test results with additional knowledge of the model?
- Can we reduce/shrink test cases via techniques like delta debugging? (See the sketch after this list.)
- How can we automatically relate test results to customer system requirements?
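For the reduction/shrinking question, the following is a minimal Python sketch of the classic ddmin delta-debugging algorithm (Zeller and Hildebrandt). The `fails` oracle and the example failure condition are hypothetical; a real oracle would re-execute the reduced test case against the system under test.

```python
# A minimal sketch of the ddmin delta-debugging algorithm
# (Zeller & Hildebrandt) for shrinking a failing test case.
def ddmin(steps, fails):
    """Reduce 'steps' to a 1-minimal subsequence for which 'fails' still holds."""
    n = 2  # current granularity: number of chunks to split into
    while len(steps) >= 2:
        size = len(steps) // n
        chunks = [steps[i:i + size] for i in range(0, len(steps), size)]
        reduced = False
        for i, chunk in enumerate(chunks):
            rest = [s for j, c in enumerate(chunks) if j != i for s in c]
            if fails(chunk):  # a single chunk already reproduces the failure
                steps, n, reduced = chunk, 2, True
                break
            if fails(rest):   # dropping one chunk preserves the failure
                steps, n, reduced = rest, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(steps):
                break         # 1-minimal: no single step can be removed
            n = min(n * 2, len(steps))
    return steps

# Hypothetical example: the trace fails whenever steps 1 and 5 both occur.
trace = list(range(8))
print(ddmin(trace, lambda t: 1 in t and 5 in t))  # -> [1, 5]
```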
Expected deliverables:
- A literature study on the state of the art in test analysis automation
- A proof of concept of the selected approach
- A case study that evaluates the selected approach using the proof of concept