Remy Chaput
machine-ethics
Multi-objective reinforcement learning: an ethical perspective
Reinforcement learning (RL) is becoming more prevalent in practical domains with human implications, raising ethical questions. …
Timon Deschamps
,
Rémy Chaput
,
Laetitia Matignon
Cite
HAL
Learning to identify and settle dilemmas through contextual user preferences
This paper presents a novel Multi-Objective Reinforcement Learning approach to settle dilemmas by putting humans in the loop.
Rémy Chaput
,
Laetitia Matignon
,
Mathieu Guillermin
PDF
Cite
Slides
DOI
HAL
IEEE Xplore
Ethical Smart Grid: a Gym environment for learning ethical behaviours
Paper published in the Journal of Open-Source Software, alongside the source code for our
ethical-smart-grid
simulator. This simulator focuses on ethical behaviours within a Smart Grid, and is based on Gym (Reinforcement Learning).
Clément Scheirlinck
,
Rémy Chaput
,
Salima Hassas
Cite
DOI
HAL
GitHub
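Since the simulator is based on Gym, interaction follows the usual reset/step loop. The sketch below illustrates that interface with a toy stand-in environment; `ToyGridEnv`, its actions, and its reward rule are invented for illustration and are not the actual ethical-smart-grid simulator.

```python
# Minimal sketch of a Gym-style interaction loop, assuming the usual
# reset()/step() contract. ToyGridEnv is a hypothetical stand-in,
# not the actual ethical-smart-grid simulator.

class ToyGridEnv:
    """Toy environment: an agent consumes or stores energy each step."""

    def __init__(self, steps=5):
        self.steps = steps
        self.t = 0
        self.battery = 0.0

    def reset(self):
        self.t = 0
        self.battery = 0.0
        return self.battery  # observation

    def step(self, action):
        # action 0 = consume, action 1 = store (toy dynamics)
        self.battery = max(self.battery + (1.0 if action == 1 else -0.5), 0.0)
        self.t += 1
        reward = 1.0 if action == 1 else 0.2  # toy rule: favour storing
        done = self.t >= self.steps
        return self.battery, reward, done, {}

def run_episode(env, policy):
    """Standard Gym loop: observe, act, accumulate reward until done."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

total = run_episode(ToyGridEnv(), policy=lambda obs: 1)  # always store
```

With the always-store policy, each of the five steps yields a reward of 1.0, so the episode return is 5.0; any RL agent exposing a `policy(obs) -> action` callable can be plugged into the same loop.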
Learning multi-value ethical behaviours by combining symbolic judging agents and learning agents
Journal paper published in the French
Artificial Intelligence Open Journal
(Revue Ouverte d’Intelligence Artificielle). This work extends previous works, especially the conference paper published at JFSMA 2021.
Rémy Chaput
,
Jérémy Duval
,
Olivier Boissier
,
Mathieu Guillermin
,
Salima Hassas
Cite
DOI
HAL
Adaptive reinforcement learning of multi-agent ethically-aligned behaviours: the QSOM and QDSOM algorithms
Preprint describing two Reinforcement Learning algorithms (
Q-SOM
and
Q-DSOM
) I have developed. They handle continuous, multi-dimensional observations and actions, and adapt to changes in the environment.
Rémy Chaput
,
Olivier Boissier
,
Mathieu Guillermin
Cite
HAL
ArXiv
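Self-organizing maps are what let these algorithms handle continuous observations: each map unit holds a prototype vector, inputs are mapped to their best-matching unit, and prototypes are nudged towards observed inputs. The sketch below shows that basic SOM mechanism; the map size, learning rate, and Gaussian neighbourhood are illustrative assumptions, not the hyperparameters of Q-SOM or Q-DSOM.

```python
# Minimal self-organizing map (SOM) sketch: inputs are discretized by
# their best-matching unit (BMU), and the BMU plus its neighbours are
# moved towards each input. Sizes and rates here are illustrative,
# not the actual Q-SOM / Q-DSOM settings.
import math

def bmu(units, x):
    """Index of the unit whose prototype is closest to input x."""
    return min(range(len(units)), key=lambda i: math.dist(units[i], x))

def som_update(units, x, lr=0.5, sigma=1.0):
    """Move the BMU and (more weakly) its neighbours towards x."""
    b = bmu(units, x)
    for i, u in enumerate(units):
        # Gaussian neighbourhood over the 1-D grid of unit indices.
        h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))
        units[i] = [w + lr * h * (xj - w) for w, xj in zip(u, x)]
    return b

# Three units on a 1-D map, with 2-D prototype vectors.
units = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
winner = som_update(units, [0.9, 0.8])  # BMU is the unit at [1.0, 1.0]
```

In the actual algorithms, each unit additionally indexes a row of Q-values, so the discrete BMU index plays the role of a state (or action) identifier for tabular-style learning.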
AJAR: An Argumentation-based Judging Agents Framework for Ethical Reinforcement Learning
Paper presented at the
Autonomous Agents and Multiagent Systems
conference. It presents the
AJAR
framework, which uses argumentation-based judging agents to provide rewards for Reinforcement Learning agents, according to one or several moral values. This “judgment of ethics” is used to nudge the learning agents towards an “ethical behavior”, that is, a behavior aligned with the given moral values.
Benoît Alcaraz
,
Olivier Boissier
,
Rémy Chaput
,
Christopher Leturc
Cite
DOI
HAL
ACM DL
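In spirit, each judging agent scores the learner's behaviour against one moral value, and the per-value judgments are aggregated into the scalar reward the RL agent receives. The sketch below conveys only that reward-shaping idea; the two judges, the action fields, and the averaging rule are hypothetical, whereas AJAR itself derives judgments through argumentation graphs.

```python
# Illustrative sketch of value-based judging: each judge maps an action
# to a score in [0, 1] for one moral value, and scores are averaged
# into a scalar reward. Judges, fields, and aggregation are assumed
# for illustration; AJAR derives judgments via argumentation instead.

def frugality_judge(action):
    # Hypothetical moral value: approve staying within an energy budget.
    return 1.0 if action["consumed"] <= action["budget"] else 0.0

def equity_judge(action):
    # Hypothetical moral value: approve sharing surplus energy.
    return 1.0 if action["shared"] > 0 else 0.5

JUDGES = [frugality_judge, equity_judge]

def ethical_reward(action, judges=JUDGES):
    """Aggregate per-value judgments into one reward (simple mean)."""
    scores = [judge(action) for judge in judges]
    return sum(scores) / len(scores)

reward = ethical_reward({"consumed": 3, "budget": 5, "shared": 1})
```

Feeding this reward back into the learning loop is what nudges agents towards behaviours aligned with the given moral values, without hard-coding those values into the learning algorithm itself.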
ethical-smart-grid
Smart Grid simulator for Reinforcement Learning focusing on ethical behaviours.
Remy Chaput
Code
Documentation
PhD Thesis
PhD thesis on
Learning behaviours aligned with moral values in a multi-agent system: guiding reinforcement learning with symbolic judgments
, carried out at the LIRIS lab under the supervision of Professor Salima Hassas (LIRIS), Professor Olivier Boissier (LaHC), and Dr. Mathieu Guillermin (UCLy).
Rémy Chaput
PDF
Cite
Slides
Source Document
HAL
Approche multi-agent combinant raisonnement et apprentissage pour un comportement éthique (A multi-agent approach combining reasoning and learning for ethical behaviour)
Paper on using a symbolic reasoning approach to judge neural learning agents, rewarding them according to their “ethical” behaviour, thus combining both approaches in a hybrid method.
Rémy Chaput
,
Jérémy Duval
,
Olivier Boissier
,
Mathieu Guillermin
,
Salima Hassas
PDF
Cite
Slides
HAL