TY - GEN
T1 - Revisiting Test Smells in Automatically Generated Tests: Limitations, Pitfalls, and Opportunities
AU - Panichella, Annibale
AU - Panichella, Sebastiano
AU - Fraser, Gordon
AU - Sawant, Anand Ashok
AU - Hellendoorn, Vincent J.
N1 - Virtual/online event due to COVID-19
PY - 2020
Y1 - 2020
N2 - Test smells attempt to capture design issues in test code that reduce its maintainability. Previous work found such smells to be highly common in automatically generated test cases, but based this result on specific static detection rules; although these rules follow the original definition of “test smells”, a recent empirical study showed that developers perceive them as overly strict and not representative of the maintainability and quality of test suites. This leads us to investigate how effective such test smell detection tools are on automatically generated test suites. In this paper, we build a dataset of 2,340 test cases automatically generated by EVOSUITE for 100 Java classes. We performed a multi-stage, cross-validated manual analysis to identify six types of test smells and label their instances. We benchmark the performance of two test smell detection tools: one widely used in prior work, and one recently introduced with the express goal of matching developer perceptions of test smells. Our results show that these test smell detection strategies poorly characterized the issues in automatically generated test suites; the older tool’s detection strategies, in particular, misclassified over 70% of test smells, both missing real instances (false negatives) and marking many smell-free tests as smelly (false positives). We identify common patterns in these tests that can be used to improve the tools, refine and update the definitions of certain test smells, and highlight as-yet-uncharacterized issues. Our findings suggest the need for (i) more appropriate metrics that match development practice; and (ii) more accurate detection strategies, to be evaluated primarily in industrial contexts.
KW - Software Quality
KW - Test Generation
KW - Test Smells
UR - http://www.scopus.com/inward/record.url?scp=85096693628&partnerID=8YFLogxK
U2 - 10.1109/ICSME46990.2020.00056
DO - 10.1109/ICSME46990.2020.00056
M3 - Conference contribution
SN - 978-1-7281-5620-0
T3 - Proceedings - 2020 IEEE International Conference on Software Maintenance and Evolution, ICSME 2020
SP - 523
EP - 533
BT - Proceedings - 2020 IEEE International Conference on Software Maintenance and Evolution, ICSME 2020
PB - IEEE
CY - Adelaide, Australia
T2 - ICSME 2020: International Conference on Software Maintenance and Evolution
Y2 - 28 September 2020 through 2 October 2020
ER -