Remy Chaput
reinforcement-learning
Multi-objective reinforcement learning: an ethical perspective
Reinforcement learning (RL) is becoming more prevalent in practical domains with human implications, raising ethical questions. …
Timon Deschamps
,
Rémy Chaput
,
Laetitia Matignon
Cite
HAL
Ethical Smart Grid: a Gym environment for learning ethical behaviours
Paper published in the Journal of Open Source Software, alongside the source code for our
ethical-smart-grid
simulator. The simulator focuses on ethical behaviours within a Smart Grid and is based on Gym (Reinforcement Learning).
Clément Scheirlinck
,
Rémy Chaput
,
Salima Hassas
Cite
DOI
HAL
GitHub
Learning multi-value ethical behaviours by combining symbolic judging agents and learning agents
Journal paper published in the French
Artificial Intelligence Open Journal
(Revue Ouverte d’Intelligence Artificielle). This work extends previous work, especially the conference paper published at JFSMA 2021.
Rémy Chaput
,
Jérémy Duval
,
Olivier Boissier
,
Mathieu Guillermin
,
Salima Hassas
Cite
DOI
HAL
Adaptive reinforcement learning of multi-agent ethically-aligned behaviours: the QSOM and QDSOM algorithms
Preprint describing two Reinforcement Learning algorithms (
Q-SOM
and
Q-DSOM
) I have developed. They focus on continuous and multi-dimensional observations and actions, and on adaptation to changes in the environment.
Rémy Chaput
,
Olivier Boissier
,
Mathieu Guillermin
Cite
HAL
ArXiv
AJAR: An Argumentation-based Judging Agents Framework for Ethical Reinforcement Learning
Paper presented at the
Autonomous Agents and Multiagent Systems
conference. It presents the
AJAR
framework, which uses argumentation-based judging agents to provide rewards to Reinforcement Learning agents, according to one or more moral values. This “judgment of ethics” nudges the learning agents towards an “ethical behavior”, that is, a behavior aligned with the given moral values.
Benoît Alcaraz
,
Olivier Boissier
,
Rémy Chaput
,
Christopher Leturc
Cite
DOI
HAL
ACM DL
Artificial Moral Advisors: enhancing human ethical decision-making
This paper presents how Artificial Intelligence could be used to help humans in their ethical decision-making tasks.
Marco Tassella
,
Rémy Chaput
,
Mathieu Guillermin
Cite
DOI
HAL
IEEE Xplore
ethical-smart-grid
Smart Grid simulator for Reinforcement Learning focusing on ethical behaviours.
Remy Chaput
Code
Documentation
Approche multi-agent combinant raisonnement et apprentissage pour un comportement éthique
Paper on using a symbolic reasoning approach to judge neural learning agents, in order to reward them appropriately with respect to their “ethical” behavior, combining both approaches in a hybrid method.
Rémy Chaput
,
Jérémy Duval
,
Olivier Boissier
,
Mathieu Guillermin
,
Salima Hassas
PDF
Cite
Slides
HAL
A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior
Paper presented at the
AI, Ethics, and Society
conference. It presents a novel, hybrid, method to learn “ethical behaviors” by combining symbolic judgments with a Reinforcement Learning algorithm.
Rémy Chaput
,
Jérémy Duval
,
Olivier Boissier
,
Mathieu Guillermin
,
Salima Hassas
Cite
Poster
Slides
DOI
HAL
ACM DL