Studies in Informatics and Control
Vol. 33, No. 4, 2024

Implementing Federated Learning and Explainability Techniques in Regression Models to Increase Transparency and Reliability*

José RIBEIRO, Ricardo SANTOS, Cesar ANALIDE, Fábio SILVA
Abstract

Interpreting and explaining machine learning models is fundamental for ensuring their transparency and credibility. Explainability techniques such as SHAP and LIME have been developed for this purpose, allowing a deeper understanding of the decisions made by AI models. This extended study explores the implementation of a federated learning system to further improve the training of machine learning models while preserving data privacy. Federated learning enables model training on decentralised data without exposing that data, thereby safeguarding its privacy and security. Methods such as SHAP and LIME are applied with the aim of increasing the transparency of AI systems and users' trust in them. These approaches were first implemented in an initial project focused on predicting production time and supporting project management in industry. The results obtained show that federated learning, combined with explainability methods, not only improves forecast accuracy but also strengthens stakeholder trust. A possible future direction is to apply these strategies to other processes in Industry 4.0 and manufacturing, in order to promote the adoption of more comprehensible and innovative intelligent systems.


*This article is an extended version of the conference paper "Exploring Transparency in Decisions of Artificial Neural Networks for Regression", presented at the WorldCIST'24 Conference.

Keywords

Machine Learning, SHAP, LIME.