e-Informatica Software Engineering Journal

A Quality Assessment Instrument for Systematic Literature Reviews in Software Engineering

2023
Muhammad Usman, Nauman Bin Ali, and Claes Wohlin, "A Quality Assessment Instrument for Systematic Literature Reviews in Software Engineering," e-Informatica Software Engineering Journal, vol. 17, no. 1, article 230105, 2023. DOI: 10.37190/e-Inf230105.


Authors

Muhammad Usman, Nauman Bin Ali, Claes Wohlin

Abstract

Background: Systematic literature reviews (SLRs) have become standard practice in software engineering (SE) research, although their quality varies. For future research and industry practice to build on these reviews, they need to be of high quality.
Aim: To assess the quality of SLRs in SE, we put forward an appraisal instrument for SLRs.
Method: A well-established appraisal instrument from healthcare research was used as a starting point for developing the instrument. It was adapted to SE using guidelines, checklists, and experiences from SE. The first version was reviewed by four external experts on SLRs in SE and updated based on their feedback. To demonstrate its use, the authors also applied the updated version to assess a sample of six selected systematic literature studies.
Results: The outcome of the research is an appraisal instrument for the quality assessment of SLRs in SE. The instrument comprises 15 items, each with response options to capture quality. The instrument also supports consolidating the items into groups, which are then used to assess the overall quality of an SLR.
Conclusion: The presented instrument can support appraisers in assessing the quality of SLRs in SE.
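
To make the item-to-group consolidation concrete, the following is a minimal Python sketch of how per-item answers could be rolled up into group ratings and an overall rating. The item texts, the yes/partial/no options, the group names, and the "weakest answer dominates" aggregation rule are all illustrative assumptions, not the instrument defined in the paper.

from dataclasses import dataclass

# Hypothetical response options, ordered from strongest to weakest.
OPTIONS = ("yes", "partial", "no")

@dataclass
class Item:
    text: str    # the appraisal question (illustrative, not from the paper)
    group: str   # the group this item is consolidated into
    answer: str  # one of OPTIONS

def group_rating(items, group):
    # Consolidate one group's items; the weakest answer dominates (assumption).
    answers = [i.answer for i in items if i.group == group]
    return max(answers, key=OPTIONS.index)

def overall_rating(items):
    # Overall quality is taken as the weakest group rating (assumption).
    groups = {i.group for i in items}
    return max((group_rating(items, g) for g in groups), key=OPTIONS.index)

appraisal = [
    Item("Were the research questions clearly stated?", "planning", "yes"),
    Item("Was the search strategy comprehensive?", "search", "partial"),
    Item("Was the quality of included studies assessed?", "analysis", "yes"),
]
print(overall_rating(appraisal))  # -> "partial"

Under these assumptions, a single weak answer in any group caps the overall rating, which mirrors the spirit of critical-item scoring in appraisal tools such as AMSTAR 2; the paper's actual consolidation rules may differ.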

Keywords

Systematic reviews, quality assessment, critical appraisal, AMSTAR 2, systematic literature review, tertiary study

