
Decision Analysis in Management of
Information Systems Incidents

1 Equipe MGSI – Université Paris 8 / LISMMA
140, rue de la nouvelle France, 93100 Montreuil, France,

2 Automation and Computer Science Department,
Valahia University of Targoviste, Romania

Abstract: Distributed systems are characterized by heterogeneity and by multiple, often interdependent applications programmed by separate teams with little or no communication between them. Dysfunctions and problems occurring in these systems have multiple and serious consequences. The difficulty of discovering the origins and causes of these problems grows with the quality of information that users demand. This paper proposes a method to manage incidents in information systems using decision analysis.

Keywords: Decision analysis, incident management, case-based reasoning, influence diagrams.

D. TCHOFFA, L. DUTA, A. El MHAMEDI, Decision Analysis in Management of Information Systems Incidents, Studies in Informatics and Control, ISSN 1220-1766, vol. 22 (2), pp. 123-132, 2013.


In the current economic situation, enterprises must be more effective and innovative in order to create added value for their customers. Pressed by the urgency of results, they are forced to improve the efficiency and productivity of their employees. Some enterprises can no longer afford manual, inefficient processes. It is time to make their processes more reactive toward their customers, to improve their efficiency, to reduce costs and to limit risks.

By drawing on their best practices and experience in information and process management, enterprises can adopt a new approach to their work. Thus, in the treatment of industrial systems' dysfunctions, the incident (or risk) management process is emerging as one that demands particular ingenuity in the design and setup of an efficient method to minimize the maintenance costs of industrial applications.

In all sectors of activity, the impact of cascading incidents has become one of the largest budget items. The continuously growing interest in finding efficient dysfunction management methods is reflected in trained teams, more effective production tools and large investments in decision aid tools aimed at minimizing the occurrence of incidents.

Using a tool for incident management makes it possible to minimize delays between treatment phases, to reduce service unavailability and to cut the costs due to loss of service. However, the methods used for this management vary. In the literature, the main idea is to integrate intelligence into incident management systems, either using a predefined formal set of rules (Rule-Based Reasoning – RBR) or a set of cases that summarizes and diagnoses previous dysfunctions and provides resolutions (Case-Based Reasoning – CBR). An RBR diagnosis system relies on a knowledge base that stores, in a formal language, knowledge specific to the problems encountered and the operators' reasoning about their current operations. The major drawback of an RBR system is that the rules are established only for specific problems and are hard to maintain (Luger, 2005). A CBR system finds, adapts and reuses old solutions to previously encountered problems in order to solve newly occurring problems or to critique new solutions.
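The retrieval step of the CBR approach described above — matching a new incident's symptoms against a base of previously resolved cases — can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation; the case base, symptom names and resolutions are invented:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A past incident: observed symptoms and the resolution that worked."""
    symptoms: set
    resolution: str

def retrieve(case_base, new_symptoms):
    """Return the stored case whose symptoms best overlap the new incident,
    using Jaccard similarity -- the 'retrieve' step of the CBR cycle."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    return max(case_base, key=lambda c: jaccard(c.symptoms, new_symptoms))

# Hypothetical case base of previously diagnosed incidents
cases = [
    Case({"disk_full", "db_timeout"}, "purge archive logs"),
    Case({"cpu_spike", "slow_response"}, "restart application pool"),
    Case({"network_loss", "db_timeout"}, "fail over to secondary site"),
]

best = retrieve(cases, {"db_timeout", "disk_full", "slow_response"})
print(best.resolution)  # -> purge archive logs
```

Jaccard similarity is only one simple matching choice; a full CBR system would also adapt (revise) the retrieved resolution to the new context and retain the new case in the base.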

Taking the previous considerations into account, our aim is to design a tool not only for statistical analysis of incidents, but also to assist operators in finding solutions while procedures are being carried out.

Our approach includes automatic incident treatment and is close to the CBR architecture. This paper presents a decision aid tool that establishes influences and causal links between actions and incidents.
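At the heart of an influence diagram's decision node lies a simple computation: choose, among candidate actions, the one with the highest expected utility given its chances of resolving the incident. The following is a generic decision-analysis sketch, not the authors' tool; the actions, probabilities, costs and benefit figure are all invented for illustration:

```python
def best_action(actions, benefit=100.0):
    """Pick the action with the highest expected utility: the probability
    that the action resolves the incident, weighted by the benefit of
    resolution, minus the action's own cost."""
    def expected_utility(a):
        return a["p_resolve"] * benefit - a["cost"]
    return max(actions, key=expected_utility)

# Hypothetical candidate actions for a service outage
actions = [
    {"name": "restart service",   "p_resolve": 0.6, "cost": 5.0},
    {"name": "roll back release", "p_resolve": 0.9, "cost": 40.0},
    {"name": "scale out cluster", "p_resolve": 0.4, "cost": 15.0},
]

print(best_action(actions)["name"])  # -> restart service
```

Here the cheap restart wins (expected utility 55 versus 50 for the rollback); raising the benefit of resolution would eventually favor the more reliable but costlier rollback, which is exactly the trade-off an influence diagram makes explicit.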


  1. BOY, G. A., The Orchestra: A Conceptual Model for Function Allocation and Scenario based Engineering in Multi-Agent Safety-Critical Systems, Proceedings of the European Conference on Cognitive Ergonomics, Finland, 2009, pp. 187-193
  2. BOY, G. A., Handbook of Human-Machine Interaction: A Human-Centered Design Approach. Ashgate, Eds, 2011, U.K
  3. BOY, G. A., What do we mean by Human-Centered Design of Life-Critical Systems, IOS Press, 2012.
  4. DHILLON, B. S., Safety and Human Error in Engineering Systems. CRC Press, Taylor and Francis Group, Boca Raton, 2012
  5. FACTOR, R., D. MAHALEL, G. YAIR, The social accident: a theoretical model and a research agenda for studying the influence of social and cultural characteristics on motor vehicle accidents, Accident Analysis and Prevention, vol. 39(5), 2007, pp. 914-921.
  6. FILIP, F. G., Decizie asistata de calculator: decizii, decidenti, metode si instrumente de baza. Ed. Expert si Ed. Tehnica, Bucuresti, 2002
  7. FILIP, F. G., Decision Support and Control for Large-Scale Complex Systems. Annual Reviews in Control, vol. 32(1), 2008, pp. 61-70
  8. GODY, A., Human-Machine Interface. PCSR – Sub-chapter 18.1, UKEPR-0002-181 Issue 05, 2011.
  9. GRANGIER, E., Le bug: une esthétique de l’accident. PhD Thesis, Université Paris 1 Panthéon – Sorbonne, 2006.
  10. GROTE, G., C. RYSER, T. WÄFLER, A. WINDISCHER, S. WEIK, KOMPASS: A method for complementary function allocation in automated work systems, International Journal of Human-Computer Studies, vol. 52, 2000, pp. 267-287.
  11. HOLLNAGEL, E., Barriers and Accident Prevention. Ashgate Eds, Hampshire Burlington, England, 2004.
  12. HOLLNAGEL, E., D. D. WOODS, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. CRC Press, Boca Raton, FL, 2005.
  13. HOLLNAGEL, E., The Four Cornerstones of Resilience Engineering. In: Nemeth, C., Hollnagel, E., Dekker, S. (Eds.), Resilience Engineering Perspectives, vol. 2: Preparation and Restoration. Ashgate, Farnham, UK, 2009, pp. 117-133.
  14. ITIL. London, Great Britain: The Stationery Office, 2007.
  15. KOSSIAKOFF, A., W. N. SWEET, S. J. SEYMOUR, S. M. BIEMER, Systems Engineering – Principles and Practice. Wiley, 2011, pp. 11-12.
  16. LANGEVIN, S., B. JOSEPH, J. BRENKLE, S. STRAUSSBERGER, B. GUIOST, J-Y. LANTES, G. BOY, B. N’KAOUA, Human-centered Design Methodology: An Example of Application with UAVS Mission, Proceedings of the First Conference on Humans Operating Unmanned Systems (HUMOUS’08), Brest, France, 2008.
  17. LEVESON, N., A New Accident Model for Engineering Safe Systems, Safety Science, vol. 42, 2004, pp. 237-270.
  18. LOFQUIST, E. A., A. GREVE, U. H. OLSSON, Modeling Attitudes and Perceptions as Predictors for Changing Safety Margins during Organizational Change. Safety Science, vol. 49(3), 2011, pp. 531-541.
  19. MANNING, S. D., C. E. RASH, P. A. LeDUC, R. K. NOBACK, J. McKEON, The Role of Human Causal Factors in U.S. Army Unmanned Aerial Vehicle Accidents (DOT/FAA/AM-04/24). Technical Report. Federal Aviation Administration, Oklahoma City, 2004.
  20. MARGESON, B., The Human Side of Data Loss. Disaster Recovery Journal, vol. 16(2), 2003, p. 48.
  21. MARLING, C. R., G. J. PETOT, L. S. STERLING, Integrating Case-Based and Rule-Based Reasoning to Meet Multiple Design Constraints. Computational Intelligence, vol. 15(3), 1999, pp. 308-332.
  22. MEEDENIYA, I., A. ALETI, B. BUHNOVA, Redundancy Allocation in Automotive Systems using Multi-objective Optimization. In: Symposium on Avionics/Automotive Systems Engineering (SAASE’09), San Diego, CA, 2009.
  23. METZGER, R. C., Debugging by Thinking: A Multidisciplinary Approach. Digital Press, 2003.
  24. MORRILL, H., M. BEARD, D. CLITHEROW, Achieving Continuous Availability of IBM Systems Infrastructures. IBM Systems Journal, vol. 47(4), 2008, pp. 493-503.
  25. NRC (2007 and 2010). Standard Review Plan, Chapter 18 – Human Factors Engineering (NUREG-0800) and Human Factors Engineering Program Management Plan (NP-TR-0610-290-NP). Washington, DC: U.S. Nuclear Regulatory Commission.
  26. SANTOS-REYES, J., A. N. BEARD, A Systemic Analysis of the Edge Hill Railway Accident. Accident Analysis and Prevention, vol. 41(6), 2009, pp. 1133-1144.
  27. O’CALLAGHAN, K., S. MARIAPPANADAR, Incident Manager. Meet PHiL CROSS. IT Service Management Forum Australia, Informed Intelligence, 2010, p. 7.
  28. O’HARA, J., J. HIGGINS, W. BROWN, J. PERSENSKY, P. LEWIS, J. KRAMER, A. SZABO, M. BOGGI, Human Factors Considerations with Respect to Emerging Technology in Nuclear Power Plants (NUREG/CR-6947). Washington, DC: U.S. Nuclear Regulatory Commission, 2008.
  29. Palisade, Software for Risk and Decision Analysis (accessed on 07.12.2012).
  30. REASON, J., Managing the Risks of Organizational Accidents. Proceedings of The 5th Risk Management Conference RMC 2004, Cleveland, October 27, 2004.
  31. RODRIGUES, R., P. DRUSCHEL, Peer-to-Peer Systems, Communications of the ACM, vol. 53(10), 2010, p. 81.
  32. ROLLENHAGEN, C. J. W., J. LUNDBERG, E. HOLLNAGEL, The Context and Habits of Accident Investigation Practices: A Study of 108 Swedish Investigators. Safety Science, vol. 48(7), 2010, pp. 859-867.
  33. TCHOFFA, D., A partir d’une étude ethno-méthodologique du “bug de l’an 2000”, Intégration d’un processus d’amélioration continue de gestion des bugs, incidents, problèmes, changements et configurations pour les systèmes d’informations critiques. Thèse de doctorat, Université Paris 8 St-Denis, 2006.
  34. TCHOFFA, D., L. DUTA, A. El MHAMEDI, Decision Analysis in Management of Industrial Incidents, Proceedings of the 14th IFAC Symposium on Information Control Problems in Manufacturing, Bucharest, Romania, 2012, pp. 951-955.
  35. TCHOFFA, D., A. El MHAMEDI, A Decision Aid Tool in Software Maintenance, Proceedings of the 14th IFAC Symposium on Information Control Problems in Manufacturing, Bucharest, Romania, 2012, pp. 946-950.