Explainable AI (XAI)

Methods from the field of artificial intelligence (AI) are now being used in more and more areas of application, for example to recognise objects in images or to process speech, and in some cases they already surpass human performance. However, the results of current AI systems are often not comprehensible, e.g. why a certain object was recognised in an image. This is a considerable disadvantage, especially in safety-critical areas of application such as autonomous vehicles or medicine, and it correspondingly limits the further spread of AI systems.

Explainable Artificial Intelligence (XAI) is therefore concerned with explaining the results of AI systems in a form that humans can understand. This ability is an important prerequisite for increasing users' trust in an AI system, and it also makes it possible to better assess the system's strengths and weaknesses. (Fraunhofer Institute for Scientific and Technical Trend Analysis INT)

Explainable Artificial Intelligence (XAI) thus refers to the endeavour to make insights and predictions gained through machine learning (ML), including neural networks, explainable and verifiable. In the course of digitalisation, XAI is an important building block for opening the much-described "black box", both to improve quality in manufacturing and to gain the trust of users and customers in AI-based decision-making (1). In addition, the EU's General Data Protection Regulation (GDPR) contains provisions that could make the use of XAI necessary as soon as personal data are processed (2).

Applications of XAI in the industrial context are driven, for example, by the need for trust, where costly decisions are delegated to an ML model, and by the need to better understand certain phenomena, such as predicting machine failures and then using the corresponding data to make improvements (3).

Owing to this field of application, many ML models focus on image or sensor data, since this information is relatively easy to obtain or is already being collected. Recent research shows that it is possible to use sensor data from complex manufacturing facilities as training data; the resulting models predict or recognise i) time-to-failure (TTF), ii) equipment condition, and iii) TTF intervals better than human experts (Jalali et al.).
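
To make this setting more concrete, the following minimal sketch trains a classifier to predict a coarse TTF interval from sensor readings. The feature set, the interval labels, and the synthetic data are illustrative assumptions, not the setup used by Jalali et al.

    # Minimal sketch (illustrative, not the cited study's setup): predicting
    # a coarse time-to-failure (TTF) interval from machine sensor snapshots.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for logged sensor data: each row is one snapshot of
    # a machine (e.g. temperature, vibration, pressure); each label is a
    # coarse TTF interval (0 = less than 24 h, 1 = 1-7 days, 2 = over 7 days).
    X = rng.normal(size=(1000, 3))
    y = rng.integers(0, 3, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

On real data, such a model would be trained on historical sensor logs labelled with the observed time to the next failure; the random data here only keeps the sketch self-contained.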

In the field of image processing, there has been related research for quite some time that aims to identify the image regions an ML model relies on for its classification. (Ribeiro et al. 2016) is one of the fundamental works here; the authors also provide an open-source tool which, according to their own description, is intended for text or tabular data (4).
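
As an illustration of how that tool is typically applied to tabular data, the sketch below uses LIME's LimeTabularExplainer to explain a single prediction of a scikit-learn classifier; the dataset and model are placeholders, not part of the cited work.

    # Hedged sketch: explaining one prediction of a tabular classifier with
    # LIME (pip install lime). Dataset and model are illustrative placeholders.
    import lime.lime_tabular
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = lime.lime_tabular.LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        discretize_continuous=True,
    )

    # LIME perturbs the instance, queries the model, and fits a local linear
    # surrogate; the resulting weights show how each feature pushed the
    # prediction towards or away from class 0 for this one instance.
    exp = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4, labels=(0,))
    for feature, weight in exp.as_list(label=0):
        print(f"{feature}: {weight:+.3f}")

The same local-surrogate idea underlies the image variant mentioned above, where superpixels of the input image take the place of tabular features.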

Explainable AI thus differs from traditional machine learning techniques, where developers often cannot understand why the system arrived at a particular decision (5).

Sources:

(1) Cf. https://www.forbes.com/sites/forbestechcouncil/2021/02/22/why-explainability-is-the-next-step-for-ai-in-manufacturing/?sh=70e798761517, retrieved 31.8.2021, and https://www.pwc.co.uk/services/risk/insights/explainable-ai.html, retrieved 1.9.2021
(2) https://www.dsgvo-portal.de/gdpr_recital_71.php, retrieved 1.9.2021
(3) https://ercim-news.ercim.eu/en116/r-i/understandable-deep-neural-networks-for-predictive-maintenance-in-the-manufacturing-industry, retrieved 1.9.2021
(4) https://github.com/marcotcr/lime, retrieved 1.9.2021
(5) https://www.nextmsc.com/report/explainable-ai-market, retrieved 5.10.2021