Feasible test path selection by principal slicing

1997, Lecture Notes in Computer Science

Abstract

We propose to improve current path-wise methods for automatic test data generation by using a new method named principal slicing. This method statically derives program slices with a near-minimum number of influencing predicates, using both control and data flow information. Paths derived on principal slices to reach a given program point are therefore very likely to be feasible. We discuss how our method improves on earlier approaches, both static and dynamic, and provide an algorithm for deriving principal slices. We then illustrate the application of principal slicing to testing, taking a specific test criterion, branch coverage, as an example. The result is an optimised method for automated branch testing: not only do we use principal slicing to obtain feasible test paths, but we also use the concept of spanning sets of branches to guide the selection of each next path, which prevents the generation of redundant tests.
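To make the idea concrete, the following is a minimal sketch, not the authors' algorithm, of how a static backward slice can be computed from control and data flow information on a toy statement-level program model. Each statement records the variables it defines and uses and the branch predicate it is control dependent on; the "influencing predicates" of a slicing criterion are then simply the predicate statements that end up in the slice. The `Stmt` representation and the conservative reaching-definitions treatment are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Stmt:
    sid: int                                 # statement id (program order)
    defs: set = field(default_factory=set)   # variables defined here
    uses: set = field(default_factory=set)   # variables used here
    cdep: Optional[int] = None               # id of controlling predicate, if any
    is_pred: bool = False                    # True for branch predicates

def backward_slice(stmts, target_sid, target_vars):
    """Statements affecting the values of target_vars at target_sid.

    Follows data dependences (definitions of relevant variables) and
    control dependences (the predicates governing included statements).
    Every earlier definition of a relevant variable is conservatively
    treated as potentially reaching, which is sound for this toy model.
    """
    by_id = {s.sid: s for s in stmts}
    in_slice, work, seen = set(), [(target_sid, frozenset(target_vars))], set()
    while work:
        sid, vars_ = work.pop()
        if (sid, vars_) in seen:
            continue
        seen.add((sid, vars_))
        in_slice.add(sid)
        # data dependences: earlier definitions of the relevant variables
        for t in stmts:
            if t.sid < sid and t.defs & vars_:
                work.append((t.sid, frozenset(t.uses)))
        # control dependence: the predicate governing this statement
        p = by_id[sid].cdep
        if p is not None:
            work.append((p, frozenset(by_id[p].uses)))
    return in_slice

# Toy program:
#   1: a = input()   2: x = input()   3: w = a * 2
#   4: if x > 0:     5:   z = 1       6: else: z = 0
#   7: print(z)
prog = [
    Stmt(1, defs={"a"}),
    Stmt(2, defs={"x"}),
    Stmt(3, defs={"w"}, uses={"a"}),
    Stmt(4, uses={"x"}, is_pred=True),
    Stmt(5, defs={"z"}, cdep=4),
    Stmt(6, defs={"z"}, cdep=4),
    Stmt(7, uses={"z"}),
]
sl = backward_slice(prog, 7, {"z"})
preds = {s.sid for s in prog if s.is_pred and s.sid in sl}
print(sorted(sl), sorted(preds))  # statements 1 and 3 are sliced away
```

The slice for `print(z)` excludes the statements computing `a` and `w`, and its single influencing predicate is the branch at statement 4; a path need only satisfy the predicates retained in the slice, which is why fewer influencing predicates makes a derived path more likely to be feasible.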
