A growing number of AI systems are being deployed from labs into society, with a significant impact on human daily lives, especially with respect to ethical considerations. The field of Machine Ethics has proposed several approaches to embed such considerations within the decision-making mechanisms of artificial agents. In this presentation, I briefly summarize the state of the art, focusing particularly on normative systems, before presenting the AJAR framework, which leverages argumentation to judge reinforcement learning agents. Finally, I present an extension that puts human users in the loop to settle dilemmas.
A talk I gave remotely at a seminar of the Individual and Collective Reasoning group (ICR – University of Luxembourg).