Research

ICR is active in several strongly interrelated areas, notably:

  • Normative multi-agent systems and deontic reasoning

  • Autonomous intelligent agents and their cognitive dynamics

  • Agreement technologies and computational social choice

  • Logic-based knowledge representation and nonmonotonic reasoning

The focus is on innovative formal techniques, whether with near-term practical applications or with inspiring visionary potential. The projects are supplemented and supported by cross-topical and preparatory research activities, as well as by diverse forms of international collaboration.

Current Projects

DELIGHT (FNR-Open): Deontic Logic for Epistemic Rights
Practical and social reasoning underpins the foundations of explainable Artificial Intelligence, the design and engineering of legal and ethical reasoners, and the control and governance of intelligent autonomous systems. DELIGHT investigates deontic logics for reasoning about epistemic rights such as the right to know, the freedom of thought and the right to believe, the right not to know, the right not to be misled, and the right to truth. We develop new deontic logics yielding a comprehensive formal analysis of epistemic rights and related legal and ethical concepts, together with new reasoning methods to infer which duties follow from these rights and whether concrete situations in real-life normative systems comply with them. Moreover, we evaluate and validate our formal framework using COVID-19-related legislation and policies from various cultures and legal traditions. Finally, we provide interactive theorem provers to experiment with these new logics, formal models of epistemic rights, and AI applications. These new logics and reasoning systems, together with the applied methodology, will set the stage for future knowledge representation and reasoning projects in the deontic logic community and develop key technology for AI applications using practical and social reasoning.
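
As a purely illustrative sketch of the kind of formalisation at stake (the notation and readings below are ours, not necessarily the project's): in a deontic-epistemic language with an obligation operator O, its dual permission P := ¬O¬, and a knowledge operator K_a for agent a, a right to know can be rendered either as a liberty or as a claim-right with a correlative directed duty:

    \[
      \text{liberty:}\quad P\,K_a\,\varphi
      \qquad\qquad
      \text{claim-right of } a \text{ against } b\text{:}\quad O_{b \to a}\,K_a\,\varphi
    \]

Read, respectively, as "it is permitted that a knows φ" and "b owes it to a that a comes to know φ"; determining which directed duties of the second kind follow from a given right is the sort of inference the reasoning methods described above target.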

INTEGRAUTO (FNR-INTER): What should we do? What can we do? What are we doing!? Studying the limits, problems, and risks associated with autonomous vehicles from an integrative approach
Technologies are being developed at an alarming rate, and citizens are becoming more aware of the social issues related to technological developments. Recently, for example, autonomous vehicles have been under the spotlight, given how rapidly these technologies came onto the market and how quickly advancements are made in the area. Despite these technological achievements, unfortunate incidents have increased awareness of the risks and social issues associated with autonomous technologies. As we saw with Uber and Tesla, for instance, it can be difficult to establish who is responsible in the event of an incident when many parties are indirectly involved. Perspectives from the human and social sciences on technological advances are deeply relevant. However, these perspectives can only have a limited impact on technological developments if ethical and social considerations are not integrated into practice and industry. Besides reflecting on the sociopolitical aspects of emerging technologies (e.g., accessibility or social inequities), it is important to understand how values can be realised through specific aspects of the conception and development of technologies. Hence, although theoretical considerations regarding the principles, values and norms that should guide technological developments are important, there is a pressing need to move beyond theory and find feasible ways to apply it to concrete cases. This can be understood as a shift of perspective from normative ethics to applied ethics. Accordingly, human and social scientists should not simply tell industry which values it should consider; they should also understand how these values can be integrated into technological developments, so as to participate in the coming of an ethical fourth industrial revolution. The human and social sciences should inform technological developments, but they should also be informed by the natural sciences and engineering. As such, human and social scientists need to engage in a reciprocal dialogue in which their concerns are informed by practice and industry. Scholars in the human and social sciences need to understand the constraints and limitations surrounding technological developments in order to integrate these concerns into their ethical and social reflections. Otherwise, the human and social sciences will have a very limited impact on our technological future. The objective of this research project is to address the risks, limits and problems associated with autonomous technologies from an intersectoral approach, by promoting interdisciplinary collaborations through the co-supervision of students and by establishing partnerships between scholars and industry. Specifically, our aim is to assess the risks and ethical issues surrounding autonomous technologies (e.g., ethical behavior, responsibility, sustainable development) in light of their capacities as well as their physical and technical limitations.

EXPECTATION (FNR-INTER): Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge
Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies aiming to explain machine learning (ML) models and to enable humans to understand, trust, and manipulate the outcomes produced by artificial intelligent entities effectively. Although these initiatives have advanced the state of the art, several challenges still need to be addressed before XAI can be applied adequately in real-life scenarios. In particular, two key aspects are the personalization of XAI and the ability to provide explanations in decentralized environments where heterogeneous knowledge is prevalent. Firstly, personalization of XAI is particularly relevant due to the diversity of backgrounds, contexts, and abilities of the subjects receiving the explanations generated by AI systems (e.g., patients and healthcare professionals). Hence, the need for personalization must be reconciled with the imperative of providing trusted, transparent, interpretable, and understandable outcomes from ML processing. Secondly, the emergence of diverse AI systems collaborating on a given set of tasks while relying on heterogeneous datasets raises the question of how explanations can be combined or integrated, considering that they emerge from different knowledge assumptions and processing pipelines. In this project, we address these two challenges by leveraging the multi-agent systems (MAS) paradigm, in which decentralized AI agents extract and inject symbolic knowledge from/into ML predictors; this knowledge is then dynamically shared to compose custom explanations. The proposed approach combines inter-agent, intra-agent, and human-agent interactions to benefit from both the specialization of ML agents and the establishment of agent collaboration mechanisms, which will integrate heterogeneous knowledge and explanations extracted from efficient black-box AI agents. The project includes the validation of the personalization and heterogeneous knowledge integration approach through a prototype application in the domain of food and nutrition monitoring and recommendation, including the evaluation of agent-human explainability and the performance of the employed techniques in a collaborative AI environment.
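
One ingredient of the pipeline sketched above, extracting symbolic knowledge from a black-box ML predictor, can be illustrated with a generic surrogate-model sketch in Python (a textbook XAI technique, not necessarily the project's actual method; the dataset and model choices here are ours):

    # Minimal sketch: approximate a black-box classifier with an
    # interpretable surrogate and read off symbolic rules.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # The "black box": an opaque predictor standing in for any ML agent.
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000,
                              random_state=0).fit(X, y)

    # Fit an interpretable surrogate on the black box's own predictions,
    # then print human-auditable rules approximating its behaviour.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    print(export_text(surrogate, feature_names=data.feature_names))

In a multi-agent setting, rules of this kind are the sort of symbolic artefact an agent could share with its peers and recompose into an explanation tailored to a given explainee.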

EAI (FNR-Core): The Epistemology of AI Systems
Artificial Intelligence is playing an increasingly important role in our lives: from recommending specific products and websites to us, to predicting how we will vote in elections, to driving our vehicles. It is also being used in ethically and socially important domains like healthcare, education and criminal justice. AI has the potential to greatly increase our knowledge by helping us make new scientific discoveries, prove new theorems and spot patterns hidden in data. But it also poses a potential threat to our knowledge and reasoning: by 'nudging' us towards some kinds of information and away from others, creating 'internet bubbles'; by reinforcing biases that are present in 'Big Data'; by helping to spread and target political propaganda; and by creating 'deep-fake' images and videos as well as increasingly sophisticated and human-like texts and conversations. The fundamental aim of this project is to investigate how we can rationally respond to the outputs of artificial intelligence systems and what is required to understand and explain AI systems. This topic requires an interdisciplinary approach, drawing on computer science to investigate the details of recent AI advances and on philosophy to investigate the nature of rationality, understanding and explanation. The issues are especially pressing since many of the most powerful recent advances in AI have been achieved by training 'Deep Neural Networks' on vast amounts of data using machine learning techniques. This creates the unusual situation where even the designers and creators of these AI systems admit that they do not fully understand their internal processes or how the systems will process new data. It is vital, then, that we investigate how we might produce explanations of the behaviour of these systems that humans can actually use and understand. It is also vitally important to investigate when and how it can be rational for human consumers to trust the outputs of systems trained via machine learning, despite the fact that we lack full knowledge of their internal functioning or of the data used to train them. One of our main hopes for this project is to develop new ways of measuring how explainable or how trustworthy an AI system is that could eventually be implemented by computers.
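
To make the final sentence concrete: one computable proxy for explainability discussed in the XAI literature is fidelity, the rate at which an interpretable surrogate reproduces the decisions of the black box it explains. A minimal Python sketch (the function and its reading are ours, offered as an illustration rather than as the project's proposed measure):

    import numpy as np

    def fidelity(black_box_predict, surrogate_predict, X):
        """Fraction of inputs on which an interpretable surrogate agrees
        with the black box it is meant to explain (1.0 = perfect mimicry)."""
        return float(np.mean(black_box_predict(X) == surrogate_predict(X)))

A measure of trustworthiness would plausibly need more than mimicry (e.g., calibration and robustness), which is where the project's analysis of rational trust comes in.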

LoDEx (FNR-INTER): Logical methods for Deontic Explanations
Deontic reasoning, which involves obligation and related notions, is highly important in a variety of fields, from law and ethics to artificial intelligence. The combination of deontic logic and formal argumentation provides a fruitful theoretical basis for modelling this type of reasoning. The three partners of the "LOgical methods for Deontic EXplanations" (LoDEx) project represent important developments in the area: Ciabattoni (Vienna University of Technology) addresses practical concerns in mathematics and logic, Straßer (Ruhr University Bochum) uses formal methods in philosophy, and van der Torre (University of Luxembourg) provides legal and ethical reasoners in computer science and artificial intelligence. They are now joining forces to develop a formal theory of what they call deontic explanations. Deontic explanations provide reasons why some deontic notions hold and others do not. They answer complex questions like "Why should a child be entrusted to its father, rather than its mother, in a specific context?" or "Should someone who follows the faith of Jehovah's Witnesses be forced to undergo a life-saving blood transfusion? Why (not)?". By targeting the understanding and transparent presentation of reasoning processes, deontic explanations are a major concern in many fields. Driven by case studies in (bio)ethics and law, LoDEx develops logical methods with tool support to formalise and reason about deontic explanations. By integrating both preference-based and norm-based explanations, LoDEx takes up the challenge raised by Makinson (1998) and Horty (2014) of formulating a unified logical theory combining several disconnected methods from the field of deontic logic. By means of formal argumentation and dialogues, explanations are tailored to the explainee's preconceptions and expectations, ensuring comprehension through the generation of fine-tuned explanations. LoDEx fills a central gap in formal theories of normative reasoning, which have so far been concerned with justifications rather than personalised explanations. There is an urgent demand for deontic explanations in law and (bio)ethics, two domains where deontic reasoning is particularly involved and which are rich in conflicts. To meet this demand, the formal methods of LoDEx will be applied to and evaluated on key case studies from these areas. Computer-supported tools will be developed to experiment with the methods, the formal legal and bioethical theories, and applications of those theories. We will use the LogiKEy methodology for this purpose. The dissemination of the newly created LogiKEy data sets will enable reusability and facilitate implementations for the deontic logic community. LoDEx methods, tools and applications will contribute to the timely interdisciplinary challenge of providing formal foundations for more human-centred logic and reasoning.
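
As a toy illustration of the formal-argumentation machinery the project builds on, the Python sketch below computes the grounded extension of a Dung-style abstract argumentation framework; the custody encoding is invented here for illustration and is not taken from the project's case studies:

    # Toy Dung-style abstract argumentation framework: arguments plus an
    # attack relation; the grounded extension is the least fixpoint of
    # "accept an argument once all of its attackers are defeated".
    def grounded_extension(args, attacks):
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in args:
                if a in accepted or a in defeated:
                    continue
                attackers = {b for (b, c) in attacks if c == a}
                if attackers <= defeated:     # all attackers out: accept
                    accepted.add(a); changed = True
                elif attackers & accepted:    # attacked by an accepted one: out
                    defeated.add(a); changed = True
        return accepted

    # Caricature of the custody question: f (entrust to father) and
    # m (entrust to mother) attack each other; p is an unattacked priority
    # argument attacking m. The grounded extension is {p, f}, and the
    # defeat of m by p is the skeleton of a "why f?" explanation.
    print(grounded_extension({"f", "m", "p"},
                             {("f", "m"), ("m", "f"), ("p", "m")}))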

Last-JD Projects

The objective of the Joint International Doctoral (Ph.D.) Degree in Law, Science and Technology (Last-JD) is to create an interdisciplinary integrated programme that addresses the new challenges that the information society and emerging technologies will increasingly pose in the legal domain and the socio-ethical field. The Doctorate offers an innovative, up-to-date integrated programme that empowers candidates to carry out cutting-edge research in subjects such as bioethics, law and computer science, on topics that require a genuinely interdisciplinary approach. It includes an internship period in the third year, where candidates can apply their studies and complement their research with professional experience. Upon completion of the programme, graduates will be highly skilled researchers and professionals, meeting the requirements of the public and private sectors. The following projects are currently active at ICR:

  • Cloud Computing Security Issues and Its Regulation
  • The Place of Legal Ontologies in Co-regulatory Compliance
  • Transnational Interactions among Legal Systems and Defeasible Logics
  • Legal Knowledge Framework in ODR – Definition of Legal “Relevant Information” in Consumer Law Disputes: Telecommunications and Air Transport Passengers
  • Privacy Protection Model for Online Social Networks

Finished Projects

  • AuReLeE (FNR-CORE): Automated Reasoning with Legal Entities

  • FMUAT (FNR-INTER): Formal Models for Uncertain Argumentation from Text

  • PRIMAT (FNR-AFR): Probabilistic reliability management and its applications in argumentation theory and tracking objects

  • SOUL (FNR-AFR): Subjective and Objective Uncertainty in Description Logics

  • ProCRob (FNR Proof of Concept (POC) project): Programming Cognitive Robots

  • IELT (AFR): Information Extraction from Legislative Texts

  • NORM (AFR): Norm-based deontic logic

  • MIREL (EU-RISE, EU-Marie Curie): Mining and Reasoning with Legal Texts

  • ProLeMAS (Marie-Curie): Processing legal language for normative Multi-Agent Systems

  • SIEP (FNR-INTER): Specification logics and Inference tools for verification and Enforcement of Policies

  • RATARCH (FNR-CORE): Rationalization of Architecture Related Design Decisions

  • RAT (FNR-MOBILITY): Reasoning about Agreement Technologies (6-month sabbatical of Prof. van der Torre at CSLI Stanford)

  • SGAMES (FNR-CORE): Security Games

  • MARCO (FNR-CORE): Managing Regulatory Compliance: a Business-Centred Approach

  • LAAMI (FNR-CORE): Logical Analysis of Market Irrationality

  • DYNARG (FNR-INTER/CNRS): Dynamics of Argumentation

  • LINMAS (UL-POSTDOC): Logics Integrated for Normative Multi-Agent Systems

  • TRUSTGAMES (AFR-POSTDOC): Trust Games

  • CFAEMM (AFR-POSTDOC): A Computational Framework for Apprehending Evolving Malware and Malware Engineers

  • CDL (ERCIM-POSTDOC): Argument-based Contextual Defeasible Reasoning

  • ICR (UL-PHD): Individual and Collective Reasoning

  • AASTM (UL-PHD): Argumentation Techniques for Trust Management

  • LOSEC (AFR-PHD): Logics for Security

Collaboration