Parallel and Distributed Software Assessment in
Multi-Attribute Decision Making Paradigm

1 Economic Studies Academy, Bucharest, Romania
2 ICI Bucharest (National Institute for R & D in Informatics)

8-10 Averescu Blvd.
011455 Bucharest 1, Romania
3 Technical University of Civil Engineering, Bucharest, Romania

Abstract: Multi-Attribute Decision Making (MADM) theory offers a way to obtain a good-quality assessment of parallel and distributed software. It provides adequate tools to compute a synthetic characterization, named the High Performance Computing (HPC) merit, which may be used in operations such as software comparisons / rankings / optimizations. The paper presents the general assessment model with its associated assessment problems, together with a terse and telling case study. The assessment model is described and solved with the Internet mathematical service named OPTCHOICE (MADM modeling and optimal-choice problem solving). It provides a multitude of normalization and solving methods that generate diverse assessments, but a global assessment is always delivered.

Keywords: Software Assessment, Parallel and Distributed Computing, Multiple Attribute Decision Making, Comparisons / Rankings / Optimizations Based on High Performance Computing Merits.

Marin ANDREICA, Cornel RESTEANU, Romică TRANDAFIR, Parallel and Distributed Software Assessment in Multi-Attribute Decision Making Paradigm, Studies in Informatics and Control, ISSN 1220-1766, vol. 23 (2), pp. 133-142, 2014.

1. Introduction

In the beginning, the only method to assess the characteristics of any kind of parallel and distributed software [1, 2, 3] was direct testing. With this method, only a few characteristics could be assessed, so benchmarking had to intervene. At first, even the benchmarking process was mainly a manual process. In order to automate this time-consuming and costly analysis process, many techniques working on performance indicators were developed [4]. Moreover, general-purpose software packages were developed, the most prominent being PARSEC and SPLASH-2. The Princeton Application Repository for Shared-Memory Computers (PARSEC) is a benchmark suite, representative of next-generation shared-memory programs for chip multiprocessors, meant to analyze emerging workloads in all their complexity. The SPLASH-2 benchmark belongs to the Computer Architecture and Parallel Systems Laboratory (CAPSL) of the University of Delaware and implements modern parallel computation models to study future generations of high-performance computing systems. The diversity of benchmarking techniques grows every day [5, 6], but there are still indicators that must be computed by real testing or by expert assessment.

The paper proposes a method to globally assess parallel and distributed software by computing a so-called HPC merit. This merit is computed starting from the elementary characteristics of the software, evaluated by direct testing, benchmarking and experts. The elementary characteristics refer both to the source and the executable formats. The procedure for computing the HPC merit may be considered an integration of elementary characteristics into a synthetic characterization. It shows whether the respective program is well realized as parallel and distributed software and well distributed on the hardware configuration. It is a number in the [0, 1] interval: the closer the HPC merit is to 1, the better realized the software-hardware implementation. In principle, every single program of the parallel and distributed software class may be globally assessed. The main goal, however, is to assess a set of programs with the same functionality. In this case, the merits can form the basis of comparison / ranking / optimization problems.
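A minimal sketch of such an integration, assuming a simple min-max normalization followed by a weighted sum (the function name, the example attributes and the aggregation rule are illustrative assumptions, not the actual OPTCHOICE methods):

```python
def hpc_merit(values, weights, value_ranges):
    """Aggregate elementary characteristics into a single merit in [0, 1].

    values       -- raw scores of one program on each attribute
    weights      -- attribute importance weights, summing to 1
    value_ranges -- (min, max) per attribute, used for normalization
    """
    merit = 0.0
    for v, w, (lo, hi) in zip(values, weights, value_ranges):
        normalized = (v - lo) / (hi - lo) if hi > lo else 0.0
        merit += w * normalized
    return merit

# Illustrative attributes: speedup, parallel efficiency, portability score.
merit = hpc_merit(
    values=[7.2, 0.85, 4.0],
    weights=[0.5, 0.3, 0.2],
    value_ranges=[(1.0, 8.0), (0.0, 1.0), (1.0, 5.0)],
)
```

The resulting number stays in [0, 1] by construction, so merits of programs with the same functionality can be compared or ranked directly.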

In order to make a good assessment of parallel and distributed software, it is necessary:

  • To consider a set of parallel and distributed programs and its characteristics. The programs must belong to the same class, meaning that they realize the same user function but through different software solutions;
  • To consider, for the programs set, all hardware configurations capable of running them. It is possible to operate on a collection of computing elements (scalar machines, multiprocessors, or special-purpose computers) interconnected in one or more homogeneous / heterogeneous networks;
  • To involve, in the assessment process, several experts specialized in computer science, mathematics and programming languages.

Thus, the above specifications lead to the conviction that the MADM paradigm [7] is suitable for constructing assessment models and solving the pending problems. Indeed, in this case it is possible to define the following entities: objects (the software set subject to the assessment process), attributes (the software's general and parallelism / distributivity elementary characteristics), states of nature (the running platforms taken into consideration) [8, 9], experts / decision makers [10] (specialists in algorithms, programming and networking), the objects - attributes characteristics matrix, with dimensions determined by the dimensions of the above entities, and, finally, decision makers / states of nature / attributes weights (meaning that the elements of these entities have different importance in the assessment process). Obviously, what is considered in this manner is neither more nor less than a generalized MADM model.
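As a minimal illustration of these entities (all names, the flat dictionary layout and the example values are assumptions made for this sketch, not the OPTCHOICE data model), the generalized model can be represented as:

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class GeneralizedMADMModel:
    """Entities of a generalized MADM assessment model (illustrative)."""
    objects: list      # programs subject to assessment
    attributes: list   # elementary characteristics of the software
    states: list       # states of nature: candidate running platforms
    experts: list      # decision makers
    # characteristics[(expert, state, obj, attr)] -> evaluated value
    characteristics: dict = field(default_factory=dict)

model = GeneralizedMADMModel(
    objects=["prog_A", "prog_B"],
    attributes=["speedup", "scalability"],
    states=["cluster_16", "grid_64"],
    experts=["expert_1"],
)
# One evaluation per (decision maker, state of nature, object, attribute)
# cell, to be filled in by direct testing, benchmarking or expert judgment.
for cell in product(model.experts, model.states, model.objects, model.attributes):
    model.characteristics[cell] = 0.0
```

The characteristics matrix thus has one cell per combination of the four entity sets, which matches the statement that its dimensions are determined by the dimensions of the entities.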

The assessment of parallel and distributed software is made using a tool named OPTCHOICE [11]. It may be characterized as a pervasive Internet optimization service. An Internet service is pervasive if it is available, under conditions of performance and without delay, to anyone, from any place, at any time and free of charge. Being capable of treating generalized MADM models and being pervasive, it was the best solution for defining and solving parallel and distributed software assessment problems.

In the following, the paper theoretically shows how Assessment Models (AMs) can be defined in the MADM paradigm and how the associated Assessment Problems (APs) can be generated, and practically shows how AMs and APs can be handled in the context of a case study. The paper ends with some conclusions.


  1. GRAMA, A., A. GUPTA, G. KARYPIS, V. KUMAR, Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms, Addison Wesley, 2003.
  2. ALAGHBAND, G., H. F. JORDAN, Fundamentals of Parallel Processing, Prentice Hall, 2002.
  3. DONGARRA, J., K. MADSEN, J. WASNIEWSKI (Eds), Applied Parallel Computing: State of the Art in Scientific Computing. In: Lecture Notes in Computer Science, Springer; Volume 3732, 1 edition, April 11, 2006.
  4. BOGETOFT, P., Performance Benchmarking, Springer, Series: Management for Professionals, ISBN 978-1-4614-6042-8, 2012.
  5. KAELI, D. (Ed.), Computer Performance Evaluation and Benchmarking, Series: Lecture Notes in Computer Science, Vol. 5419, Subseries: Programming and Software Engineering, ISBN 978-3-540-93798-2, 2009.
  6. MAHANTI, R., J. R. EVANS, Critical Success Factors for Implementing Statistical Process Control in the Software Industry, Benchmarking: An International Journal, vol. 19, issue 3, 2012, pp. 374-394.
  7. YOON, K., C.-L. HWANG, Multiple Attribute Decision Making: An Introduction, SAGE Publications, Thousand Oaks, London, New Delhi, 1995.
  8. EL-REWINI, H., T. G. LEWIS, Scheduling Parallel Program Tasks onto Arbitrary Target Machines, Journal of Parallel and Distributed Computing, vol. 9, June 1990, pp. 138-153.
  9. KRAUTER, K., R. BUYYA, M. MAHESWARAN, A Taxonomy and Survey of Grid Resource Management Systems for Distributed Computing, in Software: Practice and Experience, vol. 32, issue 2, 2002, pp. 135-164.
  10. HWANG, C.-L., M. J. LIN, Group Decision Making under Multiple Criteria, Springer-Verlag, Berlin Heidelberg New York, 1997.
  11. RESTEANU, C., M. ANDREICA, Distributed and Parallel Computing in MADM Domain Using the OPTCHOICE Software, International Journal of Mathematical Models and Methods in Applied Sciences, NAUN (North Atlantic University Union), ISSN: 1998-0140, Issue 3, Volume 1, 2007, pp. 159-167.
  12. LEOPOLD, C., Parallel and Distributed Computing: A Survey of Models, Paradigms and Approaches, LAVOISIER S.A.S., 2001.
  13. ROS, A. (Ed.), Parallel and Distributed Computing, Publisher: InTech, ISBN 978-953-307-057-5, January 01, 2010.
  14. HUGHES, C., T. HUGHES, Parallel and Distributed Programming Using C++, Published by Addison-Wesley Professional, ISBN-10: 0-13-101376-9, ISBN-13: 978-0-13-101376-6, August 25, 2003.
  15. ZAVADSKAS, E. K., Z. TURSKIS, R. VOLVAČIOVAS, S. KILDIENĖ, Multi-criteria Assessment Model of Technologies, in: Studies in Informatics and Control, ISSN 1220-1766, vol. 22(4), 2013, pp. 249-258.
  16. BOKHARI, S. H., Assignment Problems in Parallel and Distributed Computing, Kluwer Academic Publishers, 1987.
  17. BAKER, M., R. BUYYA, D. LAFORENZA, Grids and Grid Technologies for Wide-area Distributed Computing, in Software: Practice and Experience, vol. 32, issue 15, 2002, pp. 1437-1466.
  18. BUYYA, R., K. BUBENDORFER (Eds), Market-Oriented Grid and Utility Computing, Wiley, ISBN: 978-0-470-28768-2, November 2009.