Home

The Individual and Collective Reasoning Group (ICR) is an interdisciplinary research team at the University of Luxembourg which is driven by the insight that intelligent systems (like humans) are characterized not only by their individual reasoning capacity, but also by their social interaction potential. Its overarching goal is to develop and investigate comprehensive formal models and computational realizations of individual and collective reasoning and rationality.

ICR is anchored in the Lab for Intelligent and Adaptive Systems (ILIAS) of the Department of Computer Science (DCS), and is involved in the Interdisciplinary Centre for Security, Reliability and Trust (SnT). The group, which is led by Leon van der Torre, currently comprises more than 15 researchers and is strongly engaged in international cooperation.

Our research areas are normative multi-agent systems, autonomous cognitive agents, computational social choice, and the foundations of logic-based knowledge representation and reasoning.


Upcoming ICR Events

  • Dina Babushkina and Athanasios Votsis (ICR Seminar)
    05.12.2022 - 4 p.m.

    Title: Hybrid decision making: How are we to normatively approach it?

    Abstract: In our talk, we will focus on the phenomenon of hybrid (AI + human agent) decision making. We aim to problematize the integration of machine output into the human reasoning process that supports decisions about action. We approach this problem from the viewpoint of two normative disciplines: ethics (normativity of action, including the action of using machine output) and epistemology (normativity of knowledge). The overarching goal of our inquiry is to understand the proper ways of integrating AI output into the decision process. It is our premise that, in order to answer this ethical question, we must first arrive at a realistic understanding of what type of epistemic product AI output is, as well as its limitations. A significant part of this is accounting for the discrepancy between AI epistemology and human epistemology, and deducing the appropriate role for AI epistemic processes and products in human reasoning (as a basis for a normative science of hybrid reasoning). The interconnection of these two normative domains leads to a conceptualization of epistemo-ethical constraints on the hybrid decision-making process.

    You can join live or via WebEx.

  • Liuwen Yu (ICR Seminar)
    12.11.2022 - 4 p.m.

    Title: Legal Reasoning through factor-based reasoning and argumentation in the context of explainability

    Abstract: The subject of my research is the explainability of AI, in particular how to ensure, from a legal and technical perspective, respect for the right to an explanation. The aim is to collect data from multiple legal sources and study how automated legal decisions and predictions can be explained through legal reasoning. The first half of the project consists of representing part of the criminal law domain in a computable language and experimenting with different techniques to see what kind of information can be given to the user. The second half consists of extracting legally relevant features from judgments of the Supreme Court, both manually and automatically, and then building a successful legal argument. The objective is to create a guideline ensuring that a motivation/justification encompasses both the step-by-step nature of logic rules and the explanation through examples of case-based reasoning.

    You can join live or via WebEx.