Microservice-Oriented Workload Prediction Using Deep Learning

2022
[1] Sebastian Ştefan and Virginia Niculescu, "Microservice-Oriented Workload Prediction Using Deep Learning," e-Informatica Software Engineering Journal, vol. 16, no. 1, article 220107, 2022. DOI: 10.37190/e-Inf220107.

Authors

Sebastian Ştefan, Virginia Niculescu

Abstract

Background: Service-oriented architectures are becoming increasingly popular due to their flexibility and scalability, which make them a good fit for cloud deployments.
Aim: This research studies how an efficient workload prediction mechanism for a practical proactive scaler could be provided. Such a prediction mechanism is necessary because fully exploiting on-demand resources and reducing manual tuning require an auto-scaling, preferably predictive, approach, that is, increasing or decreasing the number of deployed services according to the incoming workload.
Method: To achieve this goal, a workload prediction methodology that takes microservice concerns into account is proposed. Since this methodology must rest on a performant prediction model, several deep learning algorithms were analysed against the classical approaches from recent research, and experiments were conducted to identify the most appropriate prediction model.
Results: The analysis shows very good results for the MLP (MultiLayer Perceptron) model, which outperforms classical time series approaches, reducing the mean prediction error by 49% on data consisting of two 12-day Wikipedia traces with two different time windows: 10 and 15 minutes.
Conclusion: The tests and the comparative analysis lead to the conclusion that, considering accuracy as well as computational overhead and prediction time, the MLP model qualifies as a reliable foundation for the development of proactive microservice scaler applications.
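
To make the method concrete, here is a minimal sketch of the kind of MLP forecaster the abstract refers to: the last few request counts in a sliding window are used as features to predict the next interval's workload. It uses Keras, which the paper's tooling references [51], but the window size, layer widths, training settings, and the synthetic stand-in trace are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch, not the authors' exact architecture: a feed-forward
# MLP that maps the last `window` request counts to a forecast of the
# next interval's workload. Keras [51] is assumed; the window size,
# layer widths, and the synthetic trace below are illustrative.
import numpy as np
from tensorflow import keras

def make_windows(series, window):
    """Frame a 1-D series as (samples, window) inputs -> next-value targets."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Synthetic stand-in for a Wikipedia trace aggregated into 10-minute bins:
# 12 days x 144 intervals per day, a daily cycle plus noise (hypothetical data).
rng = np.random.default_rng(0)
t = np.arange(12 * 144)
series = 1000 + 300 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 30, t.size)

window = 12  # look back two hours of 10-minute intervals (an assumption)
X, y = make_windows(series, window)
split = int(0.8 * len(X))  # chronological split: train on the past only

model = keras.Sequential([
    keras.Input(shape=(window,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),  # predicted request count for the next interval
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, batch_size=32, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAPE on held-out intervals: {mape:.2f}%")
```

A proactive scaler could then translate such a one-step forecast into a replica count, for example by dividing the predicted request rate by the measured per-instance capacity.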

Keywords

microservices, web services, workload prediction, performance modeling, microservice applications, microservice scaler

References

1. T. Erl, Service-Oriented Architecture: Analysis and Design for Services and Microservices, 2nd ed. Springer International Publishing, 2016.

2. S. Newman, Building Microservices: Designing Fine-Grained Systems, 2nd ed. O’Reilly Media, 2021.

3. Kong Inc., "2020 Digital Innovation Benchmark," https://konghq.com/resources/digital-innovation-benchmark-2020/, 2019, released on the konghq.com website.

4. M. Villamizar, O. Garcés, L. Ochoa, H. Castro, L. Salamanca et al., “Infrastructure Cost Comparison of Running Web Applications in the Cloud Using AWS Lambda and Monolithic and Microservice Architectures,” in Proceedings of 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2016, pp. 179–182.

5. R.V. Rajesh, Spring Microservices. Packt Publishing, 2016.

6. R.N. Calheiros, E. Masoumi, R. Ranjan, and R. Buyya, "Workload prediction using ARIMA model and its impact on cloud applications' QoS," IEEE Transactions on Cloud Computing, Vol. 3, No. 4, 2015, pp. 449–458.

7. H. Mi, H. Wang, G. Yin, Y. Zhou, D. Shi et al., “Online self-reconfiguration with performance guarantee for energy-efficient large-scale cloud computing data centers,” in Proceedings of 2010 IEEE International Conference on Services Computing, 2010, pp. 514–521.

8. M.S. Aslanpour, M. Ghobaei-Arani, and A. Toosi, "Auto-scaling web applications in clouds: A cost-aware approach," Journal of Network and Computer Applications, Vol. 95, Jul. 2017, pp. 26–41.

9. I.K. Kim, W. Wang, Y. Qi, and M. Humphrey, “Cloudinsight: Utilizing a council of experts to predict future cloud application workloads,” in Proceedings of the 11th International Conference on Cloud Computing (CLOUD), 2018, pp. 41–48.

10. J. Kumar and A.K. Singh, "Workload prediction in cloud using artificial neural network and adaptive differential evolution," Future Generation Computer Systems, Vol. 81, 2018, pp. 41–52.

11. J.C.B. Gamboa, “Deep learning for time-series analysis,” CoRR, Vol. abs/1701.01887, 2017. [Online]. http://arxiv.org/abs/1701.01887

12. P. Udom and N. Phumchusri, “A comparison study between time series model and ARIMA model for sales forecasting of distributor in plastic industry,” IOSR Journal of Engineering (IOSRJEN), Vol. 4, No. 2, 2014, pp. 32–38.

13. K.I. Stergiou, “Short-term fisheries forecasting: comparison of smoothing, ARIMA and regression techniques,” Journal of Applied Ichthyology, Vol. 7, No. 4, 1991, pp. 193–204. [Online]. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1439-0426.1991.tb00597.x

14. J. Zhu, R. Zhang, B. Fu, and R. Jin, "Comparison of ARIMA model and exponential smoothing model on 2014 air quality index in Yanqing County, Beijing, China," Applied and Computational Mathematics, Vol. 4, No. 6, 2015, pp. 456–461.

15. A. Khan, X. Yan, S. Tao, and N. Anerousis, “Workload characterization and prediction in the cloud: A multiple time series approach,” in Proceedings of 2012 IEEE Network Operations and Management Symposium, 2012, pp. 1287–1294.

16. M.D. Syer, W. Shang, Z.M. Jiang, and A.E. Hassan, "Continuous validation of performance test workloads," Automated Software Engineering, Vol. 24, 2016, pp. 189–231.

17. G. Urdaneta, G. Pierre, and M. van Steen, “Wikipedia workload analysis for decentralized hosting,” Elsevier Computer Networks, Vol. 53, No. 11, July 2009, pp. 1830–1845.

18. J. Brownlee, Deep Learning for Time Series Forecasting: Predict the Future with MLPs, CNNs and LSTMs in Python. Machine Learning Mastery, Aug. 2018.

19. T. Lin, T. Guo, and K. Aberer, “Hybrid neural networks for learning the trend in time series,” in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 2017, pp. 2273–2279.

20. X. Tang, "Large-scale computing systems workload prediction using parallel improved LSTM neural network," IEEE Access, Vol. 7, 2019, pp. 40525–40533.

21. Y. Zhu, W. Zhang, Y. Chen, and H. Gao, “A novel approach to workload prediction using attention-based LSTM encoder-decoder network in cloud environment,” EURASIP Journal on Wireless Communications and Networking, 2019, pp. 1–18.

22. Q. Zhang, L.T. Yang, Z. Yan, Z. Chen, and P. Li, “An efficient deep learning model to predict cloud workload for industry informatics,” IEEE Transactions on Industrial Informatics, Vol. 14, No. 7, 2018, pp. 3170–3178.

23. ***, "PlanetLab," https://planetlab.cs.princeton.edu, [Online; read October 2020].

24. D. Jacobson, D. Yuan, and N. Joshi, "Scryer: Netflix's predictive auto scaling engine," https://netflixtechblog.com/scryer-netflixs-predictive-auto-scaling-engine-a3f8fc922270, 2013, [Online; read 17 October 2020].

25. ***, "SmartBear Software – Why you can't talk about microservices without mentioning Netflix," https://smartbear.com/blog/develop/why-you-cant-talk-about-microservices-without-ment/, December 2015, [Online; read October 2020].

26. J. Bar, "New – Predictive scaling for EC2, powered by machine learning," https://aws.amazon.com/blogs/aws/new-predictive-scaling-for-ec2-powered-by-machine-learning/, November 2018, [Online; read October 2020].

27. ***, "Autoscaling – Microsoft Docs," https://docs.microsoft.com/en-us/azure/architecture/best-practices/auto-scaling, May 2017, [Online; read 17 October 2020].

28. ***, "Autoscaling groups of instances – Google Cloud Docs," https://cloud.google.com/compute/docs/autoscaler, 2014, [Online; read October 2020].

29. P. Jamshidi, C. Pahl, N.C. Mendonça, J. Lewis, and S. Tilkov, “Microservices: The journey so far and challenges ahead,” IEEE Software, Vol. 35, No. 3, 2018, pp. 24–35.

30. ***, "Spring Cloud Netflix – Spring Docs," https://cloud.spring.io/spring-cloud-netflix/reference/html/, [Online; read October 2020].

31. P. Singh, P. Gupta, K. Jyoti, and A. Nayyar, "Research on auto-scaling of web applications in cloud: Survey, trends and future directions," Scalable Computing: Practice and Experience, Vol. 20, May 2019, pp. 399–432.

32. A.A. Bankole and S.A. Ajila, “Predicting cloud resource provisioning using machine learning techniques,” in Proceedings of the 26th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), 2013, pp. 1–4.

33. A. Jindal, V. Podolskiy, and M. Gerndt, "Performance modeling for cloud microservice applications," in Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering, ICPE '19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 25–32.

34. R. Shumway and D. Stoffer, Time Series Analysis and Its Applications With R Examples, 3rd ed. Springer, 2011.

35. M. Alloghani, D. Al-Jumeily, J. Mustafina, A. Hussain, and A.J. Aljaaf, A Systematic Review on Supervised and Unsupervised Machine Learning Algorithms for Data Science. Cham: Springer International Publishing, 2020, pp. 3–21. [Online]. https://doi.org/10.1007/978-3-030-22475-2_1

36. T.G. Dietterich, “Machine learning for sequential data: A review,” in Structural, Syntactic, and Statistical Pattern Recognition, T. Caelli, A. Amin, R.P.W. Duin, D. de Ridder, and M. Kamel, Eds. Berlin, Heidelberg: Springer, 2002, pp. 15–30.

37. G. Bontempi, S. Ben Taieb, and Y.A. Le Borgne, Machine Learning Strategies for Time Series Forecasting. Springer Berlin Heidelberg, Jan. 2013, Vol. 138, pp. 62–67.

38. M. Smeets, “Microservice framework startup time on different JVMs,” https://technology.amis.nl/languages/java-languages/microservice-framework-startup-time-on-different-jvms-aot-and-jit/, 2019, [Online; read 26-June-2021].

39. R. Frank, N. Davey, and S. Hunt, “Time series prediction and neural networks,” Journal of Intelligent and Robotic Systems, 2001, pp. 91–103.

40. A. Botchkarev, "Performance metrics (error measures) in machine learning regression, forecasting and prognostics: Properties and typology," Interdisciplinary Journal of Information, Knowledge, and Management, Vol. 14, 2019, pp. 045–076.

41. C.D. Lewis, Industrial and business forecasting methods: a practical guide to exponential smoothing and curve fitting. London(U.A.): Butterworth Scientific, 1982.

42. S.L. Ho and M. Xie, "The use of ARIMA models for reliability forecasting and analysis," Computers and Industrial Engineering, Vol. 35, No. 1–2, Oct. 1998, pp. 213–216.

43. A.O. Adewumi and C.K. Ayo, "Comparison of ARIMA and Artificial Neural Networks models for stock price prediction," Journal of Applied Mathematics, Vol. 2014, Mar. 2014, pp. 1–7.

44. S. Siami-Namini, N. Tavakoli, and A. Siami Namin, “A comparison of ARIMA and LSTM in forecasting time series,” in Proceedings of 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018, pp. 1394–1401.

45. V.R. Prybutok, J. Yi, and D. Mitchell, “Comparison of neural network models with ARIMA and regression models for prediction of Houston’s daily maximum ozone concentrations,” European Journal of Operational Research, Vol. 122, No. 1, 2000, pp. 31–40. [Online]. https://www.sciencedirect.com/science/article/pii/S0377221799000697

46. T.J. Sejnowski, The Deep Learning Revolution. The MIT Press, 2018.

47. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Adaptive Computation and Machine Learning series. MIT Press, 2016. [Online]. https://books.google.ro/books?id=Np9SDQAAQBAJ

48. R.D. Reed and R.J. Marks, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. Cambridge, MA, USA: MIT Press, 1998.

49. S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Computation, Vol. 9, No. 8, Nov. 1997, pp. 1735–1780.

50. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. USA: Prentice Hall Press, 2009.

51. F. Chollet et al., "Keras," https://github.com/fchollet/keras, 2015.

52. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion et al., “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, Vol. 12, 2011, pp. 2825–2830.

53. S. Seabold and J. Perktold, “Statsmodels: Econometric and statistical modeling with Python,” in Proceedings of the 9th Python in Science Conference, 2010.

54. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Machine Learning Research, Y.W. Teh and M. Titterington, Eds., Vol. 9. Chia Laguna Resort, Sardinia, Italy: PMLR, 13–15 May 2010, pp. 249–256. [Online]. https://proceedings.mlr.press/v9/glorot10a.html

55. Y.W. Cheung and K.S. Lai, "Lag order and critical values of the augmented Dickey–Fuller test," Journal of Business & Economic Statistics, Vol. 13, No. 3, 1995, pp. 277–280.

56. N. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. Tang, “On large-batch training for deep learning: Generalization gap and sharp minima,” in Proceedings of 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017.

57. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, Vol. 15, No. 56, 2014, pp. 1929–1958.

58. P. Runeson and M. Höst, “Guidelines for conducting and reporting case study research in software engineering,” Empirical Software Engineering, Vol. 14, 2008, pp. 131–164.

59. S. Nayak, B.B. Misra, and H.S. Behera, "Impact of data normalization on stock index forecasting," International Journal of Computer Information Systems and Industrial Management Applications, Vol. 6, 2014, pp. 257–269.

60. J.D. Rodriguez, A. Perez, and J.A. Lozano, “Sensitivity Analysis of k-Fold Cross Validation in Prediction Error Estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 3, 2010, pp. 569–575.

61. R.R. Bouckaert, “Estimating replicability of classifier learning experiments,” in Proceedings of the Twenty-First International Conference on Machine Learning, ICML ’04. New York, NY, USA: Association for Computing Machinery, 2004. [Online]. https://doi.org/10.1145/1015330.1015338

62. M. Huk, K. Shin, T. Kuboyama, and T. Hashimoto, “Random number generators in training of contextual neural networks,” in Proceedings of 13th Asian Conference on Intelligent Information and Database Systems, N.T. Nguyen, S. Chittayasothorn, D. Niyato, and B. Trawiński, Eds. Cham: Springer International Publishing, 2021, pp. 717–730.
