
Using GUI Run-Time State as Feedback to Generate Test Cases

2007, 29th International Conference on Software Engineering (ICSE'07)

https://doi.org/10.1109/ICSE.2007.94

Abstract

This paper presents a new automated model-driven technique to generate test cases by using feedback from the execution of a "seed test suite" on an application under test (AUT). The test cases in the seed suite are designed to be generated automatically and executed very quickly. During their execution, feedback obtained from the AUT's run-time state is used to generate new, "improved" test cases. The new test cases subsequently become part of the seed suite. This "anytime technique" continues iteratively, generating and executing additional test cases until resources are exhausted or testing goals have been met. The feedback-based technique is demonstrated for automated testing of graphical user interfaces (GUIs). An existing abstract model of the GUI is used to automatically generate the seed test suite. It is executed; during its execution, state changes in the GUI pinpoint important relationships between GUI events, which evolve the model and help to generate new test cases. Together with a reverse-engineering algorithm used to obtain the initial model and seed suite, the feedback-based technique yields a fully automatic, end-to-end GUI testing process. A feasibility study on four large fielded open-source software (OSS) applications demonstrates that this process is able to significantly improve existing techniques and help identify and report serious problems in the OSS. In response, these problems have been fixed by the developers of the OSS in subsequent versions.
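
The iterative process the abstract describes can be pictured as a simple anytime feedback loop. The sketch below is purely illustrative; every function name in it (generate_seed_suite, execute_and_observe, infer_event_relationships, refine_model, generate_new_cases) is a hypothetical placeholder standing in for the paper's model-based components, not part of the authors' actual tooling.

    import time
    from typing import Any, Callable, List

    def feedback_driven_testing(
        gui_model: Any,
        generate_seed_suite: Callable[[Any], List[Any]],
        execute_and_observe: Callable[[Any], Any],
        infer_event_relationships: Callable[[Any], Any],
        refine_model: Callable[[Any, Any], Any],
        generate_new_cases: Callable[[Any, Any], List[Any]],
        time_budget_seconds: float,
    ) -> Any:
        """Run the anytime feedback loop: execute test cases, observe GUI
        run-time state, evolve the model, and generate new test cases."""
        # Seed suite: short test cases derived automatically from the GUI model.
        suite = generate_seed_suite(gui_model)
        deadline = time.time() + time_budget_seconds

        # "Anytime": keep iterating until the time budget (or the suite) runs out.
        while suite and time.time() < deadline:
            next_suite = []
            for test_case in suite:
                # Execute on the application under test and record GUI state changes.
                states = execute_and_observe(test_case)
                # Use state changes to pinpoint important relationships between events.
                relations = infer_event_relationships(states)
                # Evolve the abstract model with the newly observed relationships.
                gui_model = refine_model(gui_model, relations)
                # Generate "improved" test cases; they become part of the next seed suite.
                next_suite.extend(generate_new_cases(gui_model, relations))
            suite = next_suite

        return gui_model

Under these assumptions, the loop terminates either when the time budget is spent or when an iteration produces no new test cases, matching the abstract's notion of continuing until resources are exhausted or testing goals are met.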
