Explanations: What does it mean for humans, for machines, for man-machine interactions?

Abstract

The XAI concept was launched by DARPA in 2016 in the context of models learned from data with deep learning methods. Although the machine learning community quickly took up the topic, other communities have also included explanation in their research agendas (e.g., Case-Based Reasoning, Planning, Decision Support, Emerging Systems, Robotics, Internet of Things). The question of explanation, which lies at the center of philosophical research, has been revisited over the last decades. The humanities community insists that explanation is above all a process that unfolds in the context of the search for an explanation and cannot be completely defined a priori. In this contribution, we propose 1) to broaden the question of explanation to any type of situation in which users exploit the possibilities of decision support agents for their own decisions, in the context of their task, and within the framework of their activities and responsibilities, and 2) to consider an instrumentation of digital devices capable of managing dynamic explanation agents associated with the corresponding decision support agents. We denote this evolution “UXAI” (User eXplainable Artificial Intelligence) because we consider that users should be the main actors in the dynamics of any explanation process.

Publication
Explainable Agency in Artificial Intelligence Workshop - 35th AAAI Conference on Artificial Intelligence - A Virtual Conference, February 2-9, 2021