Artificial Intelligence (AI) is profoundly changing society, as well as the way we relate to one another and the world around us. Increasingly, AI is being used to predict individuals' attitudes, behaviors, and preferences in a range of commercial and legal applications. This practice, however, is not without risks. As its use and popularity grow, so does the number of scenarios in which human lives are affected by, if not entrusted to, artificial intelligence programs, raising the fundamental need for these situations to be handled ethically. That is why it is impossible to imagine any legal application of AI without a deep discussion of the ethics and fairness of its agents and models.
The importance of this concern even led the European Commission for the Efficiency of Justice to publish the European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment.
Nevertheless, there have been many cases of "artificial intelligence gone wrong". Notable examples include machine learning models that show female users ads for lower-paying jobs than those shown to male users, search engines that struggle to distinguish images of humans from those of gorillas, a chatbot that began posting inflammatory and racist tweets after just one day online, a recidivism risk-assessment tool that exhibited implicit racial bias, Amazon's facial recognition software falsely matching 28 members of Congress with mugshots, and many other cases on an ever-growing list.
This workshop aims to discuss bias, ethics, and fairness in the design of trustworthy AI systems. We invite AI & Law researchers, computer scientists, and legal researchers to join us. Topics include, but are not limited to: bias in machine learning, algorithmic fairness, explainable and interpretable machine learning, ethics in argumentation theories, accountability, and legal design and visual modeling of the Law.