Reinforcement learning (RL) is becoming more prevalent in practical domains with
human implications, raising ethical questions. Specifically, multi-objective RL
has been argued to be an ideal framework for modeling real-world problems and
developing human-aligned artificial intelligence. However, the ethical dimension
remains underexplored in the field, and no existing survey covers this aspect.
Hence, we review multi-objective RL from an ethical perspective, highlighting
existing works, gaps in the literature, important considerations, and potential
areas for future research.