
Toward static test flakiness prediction: a feasibility study

Proceedings of the 5th International Workshop on Machine Learning Techniques for Software Quality Evolution

https://doi.org/10.1145/3472674.3473981

Abstract

Flaky tests are tests that exhibit both passing and failing behavior when run against the same code. While researchers have attempted to define approaches for detecting and addressing test flakiness, most of them suffer from scalability issues. This limitation has recently been targeted with machine learning solutions that predict the flakiness of tests from a set of static and dynamic metrics, avoiding the re-execution of tests. Recognizing the effort spent so far, this paper takes the first steps toward an orthogonal view of the problem, namely the classification of flaky tests using only statically computable software metrics. We propose a feasibility study on 72 projects of the iDFlakies dataset and investigate the differences between flaky and non-flaky tests in terms of 25 test and production code metrics and smells. First, we statistically assess those differences. Second, we build a logistic regression model to verify whether the observed differences remain significant when the metrics are considered together. The results show a relation between test flakiness and a number of test and production code factors, indicating the possibility of building classification approaches that exploit those factors to predict test flakiness.

CCS Concepts

• Software and its engineering → Software testing and debugging; Empirical software validation.
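As a concrete illustration of the two-step analysis the abstract describes, the sketch below first compares each metric between flaky and non-flaky tests, then fits a logistic regression over all metrics jointly. This is a minimal sketch under stated assumptions, not the authors' implementation: the synthetic data, the metric names, the choice of the Mann-Whitney U test, and the pandas/SciPy/scikit-learn tooling are all illustrative stand-ins, as the abstract does not specify the concrete statistical test or libraries used.

import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for the real dataset: 25 statically computable metrics
# (e.g., test size, assertion count, production-code complexity), one row
# per test, with a binary flakiness label.
n_tests, n_metrics = 500, 25
X = pd.DataFrame(rng.normal(size=(n_tests, n_metrics)),
                 columns=[f"metric_{i}" for i in range(n_metrics)])
# Hypothetical label weakly driven by one metric, so the example has signal.
y = (X["metric_0"] + rng.normal(size=n_tests) > 0).astype(int)

# Step 1: per-metric statistical comparison between flaky (y == 1) and
# non-flaky (y == 0) tests, using a non-parametric test as one common choice.
for col in X.columns:
    _, p = mannwhitneyu(X.loc[y == 1, col], X.loc[y == 0, col])
    if p < 0.05:
        print(f"{col}: flaky vs. non-flaky differ (p = {p:.4f})")

# Step 2: logistic regression over all 25 metrics together, checking whether
# the metrics carry predictive signal when considered jointly.
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"Mean cross-validated AUC: {auc:.2f}")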