Terragni, V., Salza, P., & Pezzè, M. (2020). Measuring software testability modulo test quality. In ICPC '20: Proceedings of the 28th International Conference on Program Comprehension (pp. 241–251). IEEE Computer Society. DOI: 10.1145/3387904.3389273
Measuring software testability modulo test quality
Pezzè, M.
2020
Abstract
Comprehending the degree to which software components support testing is important to accurately schedule testing activities, train developers, and plan effective refactoring actions. Software testability estimates this property by relating code characteristics to the test effort. The main studies of testability reported in the literature investigate the relation between class metrics and test effort in terms of the size and complexity of the associated test suites. They report a moderate correlation of some class metrics to test-effort metrics, but suffer from two main limitations: (i) the results hardly generalize due to limited empirical evidence (datasets with no more than eight software projects); and (ii) they mostly ignore the quality of the tests. However, considering the quality of the tests is important. Indeed, a class may have a low test effort because the associated tests are of poor quality, and not because the class is easier to test. In this paper, we propose an approach to measure testability that normalizes the test effort with respect to the test quality, which we quantify in terms of code coverage and mutation score. We present the results of a set of experiments on a dataset of 9,861 Java classes, belonging to 1,186 open source projects, with around 1.5 million lines of code overall. The results confirm that normalizing the test effort with respect to the test quality largely improves the correlation between class metrics and the test effort. Better correlations result in better prediction power and thus better prediction of the test effort.
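To make the normalization idea concrete, the following is a minimal illustrative sketch, not the paper's exact formula: it assumes test quality is a value in (0, 1] combining code coverage and mutation score (here, their mean), and scales a raw test-effort proxy (test-suite size) by it, so a large suite of poor tests no longer passes for high, well-spent test effort. All names (TestSuiteStats, normalized_effort) are hypothetical.

from dataclasses import dataclass

@dataclass
class TestSuiteStats:
    test_loc: int          # lines of code of the test suite (effort proxy)
    coverage: float        # code coverage in [0, 1]
    mutation_score: float  # fraction of killed mutants in [0, 1]

def normalized_effort(s: TestSuiteStats) -> float:
    """Scale raw effort by test quality: poor-quality tests inflate the
    estimated effort truly needed to test the class adequately."""
    # Assumption: quality as the mean of coverage and mutation score.
    quality = (s.coverage + s.mutation_score) / 2
    if quality == 0:
        return float("inf")  # effectively untested
    return s.test_loc / quality

# Two suites of equal size: the high-quality one suggests the class
# genuinely required that effort; the low-quality one understates it.
good = TestSuiteStats(test_loc=200, coverage=0.9, mutation_score=0.8)
poor = TestSuiteStats(test_loc=200, coverage=0.3, mutation_score=0.2)
print(normalized_effort(good))  # ~235.3
print(normalized_effort(poor))  # 800.0

Under this reading, correlating class metrics with the normalized rather than the raw effort is what improves the reported correlations.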