Optimization of the Size of Thread Pool in Runtime Systems to Enterprise Application Integration: a Mathematical Modelling Approach
DOI: https://doi.org/10.5540/tema.2019.020.01.169

Keywords: Enterprise application integration, multithread programming, runtime system, mathematical modelling

Abstract
Companies seek technological alternatives that provide competitiveness to their business processes. One such alternative is the integration platform, a software tool used to build integration solutions, which allow the different applications that make up a software ecosystem to work synchronously and let new applications or functionalities be incorporated with the least impact on existing ones. The runtime system is the component of the integration platform responsible for managing the computational resources that run the integration solution; among these resources are the threads that execute it.
The performance of the runtime system is directly related to the number of threads available to run the integration solution, but finding the number of threads that yields the shortest response time is a challenge for software engineers. If this number is undersized, execution may be delayed; if it is oversized, computational resources may be wasted. This article presents a mathematical model, defined by differential equations, that establishes the optimal number of threads, maximizing the expected performance gain by minimizing the execution time of the integration solution. In addition, it presents an application of the model that assists in analyzing the expected gain under different architecture scenarios and numbers of threads.
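The paper's differential-equation model is not reproduced on this page, but the undersizing/oversizing trade-off it addresses can be illustrated with a classic thread-pool sizing heuristic (threads = cores × (1 + wait time / compute time)). This sketch is an assumption for illustration only, not the model proposed in the article:

```python
def optimal_pool_size(num_cores: int, wait_time: float, compute_time: float) -> int:
    """Classic sizing heuristic: threads = cores * (1 + W/C).

    For I/O-heavy tasks (large W/C), extra threads keep cores busy while
    others block on I/O; for CPU-bound tasks (W/C near 0), extra threads
    only add context-switching overhead.
    """
    if compute_time <= 0:
        raise ValueError("compute_time must be positive")
    return max(1, round(num_cores * (1 + wait_time / compute_time)))

# CPU-bound integration step: pool size roughly equals the number of cores.
print(optimal_pool_size(8, wait_time=0.0, compute_time=10.0))   # 8
# I/O-bound step (90 ms waiting per 10 ms computing): 8 * (1 + 9) = 80.
print(optimal_pool_size(8, wait_time=90.0, compute_time=10.0))  # 80
```

The heuristic captures the same intuition as the article's optimization: too few threads leave cores idle during I/O waits, while too many waste resources on scheduling overhead.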
License
Copyright
Authors of articles published in the journal Trends in Computational and Applied Mathematics retain the copyright of their work. The journal applies the Creative Commons Attribution (CC-BY) license to published articles. The authors grant the TCAM journal the right of first publication.
Intellectual Property and Terms of Use
The content of the articles is the exclusive responsibility of the authors. The CC-BY license allows published articles to be reused without permission for any purpose, as long as the original work is correctly cited.
The journal encourages authors to self-archive their accepted manuscripts by publishing them on personal blogs, institutional repositories, and social media, as long as a full citation to the version on the journal's website is included.