The need to imbue Artificial Intelligence algorithms with ethical considerations is increasingly pressing. Combining reasoning and learning, this paper proposes a hybrid method in which judging agents evaluate the ethics of learning agents’ behavior, with the aim of improving that behavior in dynamic multi-agent environments. This separation offers several advantages: the possibility of co-construction between agents and humans; judging agents that are more accessible to non-expert humans; and the adoption of several points of view to judge the same agent, producing richer feedback. Experiments on energy distribution in a Smart Grid simulator show the learning agents’ ability to comply with the judging agents’ rules, including when those rules evolve.