Anjali Goyal, Neetu Sardana
Software maintenance is an essential phase of the software development life cycle; software companies now spend approximately 45% of total project cost on maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes a reported bug cannot be reproduced from the information present in the bug report, and it is then marked with the resolution non-reproducible (NR). When NR bugs are later reconsidered, a few of them get fixed (NR-to-fix) while the others retain the NR resolution. To analyse developers' behaviour towards NR-to-fix and NR bugs, a sentiment analysis of the textual contents of NR bug reports was conducted. It shows that the sentiments of NR bugs incline towards more negativity than those of reproducible bugs, and a noticeable opinion drift was found in the sentiments of NR-to-fix bug reports. These observations inspired a model that judges the fixability of NR bugs: NRFixer, a framework that predicts the probability that an NR bug will be fixed. NRFixer was evaluated along two dimensions. The first considers only the meta-fields of bug reports (model-1), while the second additionally incorporates developers' sentiments (model-2). Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest) on bug reports from the Firefox and Eclipse projects. J48 achieved the best prediction accuracy for Firefox and naive Bayes for Eclipse, and including sentiments in the prediction model raised prediction accuracy by 2 to 5% across classifiers.
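The two-model comparison described above can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: the tiny polarity lexicon, the toy bug reports and the hand-rolled naive Bayes merely stand in for the paper's actual pipeline (an NLTK-based sentiment classifier and Weka-style learners); the sketch only shows how a sentiment score derived from a report's text can be appended to its meta-field features (model-2) and change the prediction relative to meta-fields alone (model-1).

```python
import math
from collections import defaultdict

# Toy polarity lexicon (hypothetical; the paper uses an NLTK-based classifier).
POS = {"works", "fixed", "great", "thanks", "resolved"}
NEG = {"crash", "fail", "broken", "annoying", "worst"}

def sentiment_score(text):
    """Crude lexicon polarity: sign of (#positive - #negative) tokens."""
    toks = text.lower().split()
    s = sum(t in POS for t in toks) - sum(t in NEG for t in toks)
    return (s > 0) - (s < 0)  # -1, 0 or +1

class NaiveBayes:
    """Categorical naive Bayes; add-one smoothing with a fixed +2 denominator
    (a simplification that ignores each feature's true cardinality)."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.n = {c: y.count(c) for c in self.classes}
        self.counts = {c: defaultdict(lambda: defaultdict(int)) for c in self.classes}
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[yi][j][v] += 1
        return self

    def predict(self, xi):
        def log_post(c):
            lp = math.log(self.n[c] / sum(self.n.values()))  # class prior
            for j, v in enumerate(xi):
                lp += math.log((self.counts[c][j][v] + 1) / (self.n[c] + 2))
            return lp
        return max(self.classes, key=log_post)

# Toy NR bug reports: (meta severity field, follow-up comment text, outcome).
reports = [
    ("major", "crash fail broken", "nr"),
    ("major", "fail worst", "nr"),
    ("minor", "crash annoying", "nr"),
    ("minor", "works thanks", "fix"),
    ("minor", "fixed great", "fix"),
    ("major", "works resolved", "fix"),
]
y = [r[2] for r in reports]
meta_only = [(r[0],) for r in reports]                        # model-1 features
meta_sent = [(r[0], sentiment_score(r[1])) for r in reports]  # model-2 features

model1 = NaiveBayes().fit(meta_only, y)
model2 = NaiveBayes().fit(meta_sent, y)

# A major-severity NR bug with a positive follow-up comment: model-1 sees only
# "major" and predicts "nr"; model-2's sentiment feature flips it to "fix".
print(model1.predict(("major",)))    # -> nr
print(model2.predict(("major", 1)))  # -> fix
```

On this toy data the sentiment feature is what separates the eventually-fixed reports, mirroring (in miniature) the paper's observation that adding sentiment features to the meta-field model improves prediction.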
Anjali Goyal, Neetu Sardana, "NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs", e-Informatica Software Engineering Journal, Vol. 11, No. 1, 2017, pp. 103–116.