"A Systematic Reuse Process for Automated Acceptance Tests: Construction and Elementary Evaluation", e-Informatica Software Engineering Journal, vol. 15, no. 1, pp. 133–162, 2021. DOI: 10.37190/e-Inf210107.
Mohsin Irshad, Kai Petersen
Context: Automated acceptance testing validates a product’s functionality from the customer’s perspective. Text-based automated acceptance tests (AATs) have gained popularity because they link requirements and testing.
Objective: To propose and evaluate a cost-effective systematic reuse process for automated acceptance tests.
Method: A systematic approach, method engineering, is used to construct a systematic reuse process for automated acceptance tests. Techniques to support searching, assessing, and adapting the reusable tests are proposed and evaluated. The constructed process is evaluated using (i) qualitative feedback from software practitioners and (ii) a demonstration of the process in an industrial setting. The process was evaluated against three constructs: performance expectancy, effort expectancy, and facilitating conditions.
Results: The process consists of eleven activities that support development for reuse, development with reuse, and assessment of the costs and benefits of reuse. During the evaluation, practitioners found the process a useful means of supporting reuse. In the industrial demonstration, the searching, assessment, and adaptation activities helped develop an automated acceptance test with reuse faster than creating the test from scratch.
Conclusion: In this preliminary investigation, the process was found to be useful and relevant to industry.
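The search activity described above must match a new test need against previously stored acceptance tests. As a minimal illustration (not the paper's actual technique), the sketch below ranks stored Gherkin scenarios by Jaccard token similarity against a query description; the repository contents and threshold are hypothetical.

```python
# Sketch: ranking stored Gherkin scenarios by Jaccard token similarity
# to find reuse candidates. Repository and threshold are illustrative.

def tokens(text):
    """Lowercased word tokens of a scenario or description."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity of two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

REPOSITORY = {
    "login_success": "Given a registered user When they log in Then the dashboard is shown",
    "login_failure": "Given a registered user When they enter a wrong password Then an error is shown",
}

def search(description, threshold=0.3):
    """Return stored scenarios similar enough to be reuse candidates, best first."""
    query = tokens(description)
    scored = [(name, jaccard(query, tokens(text))) for name, text in REPOSITORY.items()]
    return sorted([(n, s) for n, s in scored if s >= threshold], key=lambda x: -x[1])

candidates = search("Given a registered user When they log in Then the dashboard is shown")
```

An identical scenario scores 1.0 and surfaces first; a candidate scoring above the threshold would then move to the assessment and adaptation activities.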
Keywords: Software components and reuse, software testing, analysis and verification, agile software development methodologies and practices, software quality