Milestone-Oriented Usage of Key Performance Indicators – An Industrial Case Study

2018
[1]Miroslaw Staron, Kent Niesel and Niclas Bauman, "Milestone-Oriented Usage of Key Performance Indicators – An Industrial Case Study", In e-Informatica Software Engineering Journal, vol. 12, no. 1, pp. 217–236, 2018. DOI: 10.5277/e-Inf180109.

Authors

Miroslaw Staron, Kent Niesel, Niclas Bauman

Abstract

Background: Key Performance Indicators (KPIs) are a common way of quantitatively monitoring project progress in modern companies. Although they are widely used in practice, there is little evidence on how they are set and how many of them are used in large product development projects.

Goal: The goal of this paper is to explore how KPIs are used in practice in a large company. In particular, it is explored whether KPIs are used continuously or only during short, predefined periods of time, and whether software-related KPIs are reported differently from non-software-related KPIs.

Method: A case study of 12 projects at the Volvo Car Group in Sweden was conducted. Data from the project progress reporting tools was collected and triangulated with data from interviews conducted with experts from the company.

Results: KPIs are reported mostly before milestones, and the manual assessment of their status is as important as the automated data provision in the KPI reporting system. The trend in reporting software-related KPIs is very similar to that of non-software-related KPIs.

Conclusions: Despite the documented good practices of using KPIs for project monitoring, it is difficult to develop a clear picture of project status solely from the quantitative data in progress reporting tools. It was also shown that the trends in reporting software-related KPIs are similar to the trends in reporting non-software-related KPIs.

Keywords

software metrics, key performance indicator, project management, case study
