Abstracts and Recordings

Oct 15 Tuesday

Huimin Dong, TU Wien

Permissible Shared Knowledge

Information is exchanged and shared, but an agent may come to gain knowledge that they are not authorized to have -- a situation regulated by the GDPR. How can we represent and reason about permissible shared knowledge in a way that aligns with GDPR requirements? Information exchange has been extensively formalised across various logical frameworks in distributed systems, characterized by diverse information-sharing patterns. These approaches generally adopt an intersection perspective, aggregating all possible information, regardless of whether it is known or unknown to the agents. In contrast, this work takes a distinct approach, emphasising that sharing knowledge means distributing what is known, rather than what remains uncertain. This paper introduces new modal logics for shared knowledge, ranging from a novel language of knowledge sharing to a deontic extension that models permissible shared knowledge. It also presents their sound and strongly complete axiomatizations and explores how responsibility and accountability, as required by the GDPR, can be represented.
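
As a point of reference, the "intersection perspective" mentioned above corresponds to standard distributed knowledge; a minimal sketch, with notation assumed here rather than taken from the talk:

```latex
% Distributed knowledge over a standard multi-agent Kripke model
% M = (W, (R_i)_{i \in A}, V): the group's information is pooled by
% intersecting accessibility relations, regardless of what each agent
% actually knows; this is the perspective the proposed sharing logics depart from.
\[
  M, w \models D_G \varphi
  \quad\text{iff}\quad
  M, v \models \varphi \ \text{ for all } v \in \bigcap_{i \in G} R_i(w)
\]
```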

Josephine Dik, TU Wien

Exploring Nuanced Permissions within AI and Judicial Decision-making - Watch the recording

The notion of permission is of crucial importance in several settings, from law and ethics to artificial intelligence and normative reasoning. While deontic logic approaches often interdefine permission with obligation and prohibition, it is a complex notion with multiple interpretations. In this talk, I analyze permissions through the lens of preference logic, identifying and exploring three nuances: 'rather-so' permissions, advocating specific actions; 'better-not' permissions, allowing actions while expressing a preference against them; and neutral permissions, permitting actions without bias. The permissions are formalized via Hilbert axioms and ceteris paribus preferences and integrated into Standard Deontic Logic. By doing so, we create a richer framework that better reflects real-world decision-making, particularly in the fields of AI and law, where balancing permissions and preferences is crucial. We demonstrate how this framework can be applied to model complex decisions, offering a more nuanced approach to normative reasoning in these domains.
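
One hedged way to read the three nuances, combining the usual SDL definition of permission with a preference relation ≻ (the talk's ceteris paribus formalization may differ):

```latex
% Standard SDL permission plus illustrative preference-based glosses
% (assumed readings, not the talk's official definitions).
\[
  P\varphi := \neg O \neg\varphi
\]
\[
  \text{rather-so: } P\varphi \wedge (\varphi \succ \neg\varphi) \qquad
  \text{better-not: } P\varphi \wedge (\neg\varphi \succ \varphi) \qquad
  \text{neutral: } P\varphi \wedge \neg(\varphi \succ \neg\varphi) \wedge \neg(\neg\varphi \succ \varphi)
\]
```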

David Fuenmayor, Universität Bamberg

The LogiKEy methodology in the era of LLMs

The LogiKEy methodology is an approach to knowledge engineering that combines logic-based knowledge representation with automated reasoning tools to build trustworthy and verifiable AI systems. It focuses on using combinations of higher-order and non-classical logics to model complex domains, particularly where normative and epistemic issues play a pivotal role. This talk will introduce the key principles of LogiKEy, showcasing some preliminary examples in legal and ethical reasoning. Additionally, we will explore the potential relevance and limitations of the LogiKEy approach in the era of large language models (LLMs).
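
For a flavour of the approach, here is a minimal sketch of the kind of shallow semantic embedding LogiKEy builds on, shown for an SDL-style obligation operator in classical HOL (illustrative notation; the actual LogiKEy embeddings cover richer logics):

```latex
% Propositions are encoded as predicates over worlds; obligation becomes a
% guarded universal quantifier over an ideality relation r, and validity is
% truth at all worlds.
\[
  \sigma := i \to o \qquad \text{(worlds of type } i\text{, truth values of type } o\text{)}
\]
\[
  O\,\varphi := \lambda w.\ \forall v.\ (r\,w\,v) \to \varphi\,v
  \qquad\qquad
  \lfloor \varphi \rfloor := \forall w.\ \varphi\,w
\]
```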

Daira Pinto Prieto, Universiteit van Amsterdam

Multi-Layer Belief Model: Justified Belief for Uncertain and Conflicting Evidence - Watch the recording

Autonomous agents typically gather evidence from the environment to make decisions. One way of doing this is to base these decisions on the degrees of belief an agent has for different propositions. In this context, the first challenge is how to compute justified beliefs based on evidence. Moreover, in real-world situations, this evidence may be inconsistent, incomplete, or uncertain, making the problem of combining evidence highly non-trivial. The aim of this talk is to present a new model for measuring degrees of belief based on evidence that may have these challenging properties. To this end, we will start by introducing two different established approaches that address the above problem: Dempster-Shafer theory and topological models of evidence. We will show their main advantages and limitations, and how, by combining some of their tools, we obtain a more general model. This model can reproduce them when appropriate constraints are imposed, and, more notably, it is flexible enough to compute beliefs according to various standards that represent agents’ evidential demands. The latter novelty allows users of the model to compute an agent’s (possibly) distinct degrees of belief, based on the same evidence, in situations when, e.g., the agent prioritizes avoiding false negatives and when it prioritizes avoiding false positives. Moreover, computing belief degrees with this model is #P-complete in general, the same computational complexity as applying Dempster-Shafer theory.
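
For readers unfamiliar with the first of the two ingredients, a small self-contained sketch of Dempster's rule of combination and the resulting belief function (a toy example, not the multi-layer model presented in the talk):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to masses)
    with Dempster's rule, renormalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict == 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hypothesis):
    """Degree of belief in a hypothesis: total mass of its non-empty subsets."""
    return sum(v for s, v in m.items() if s <= hypothesis)

# Two partially informative pieces of evidence about a suspect.
frame = frozenset({"alice", "bob", "carol"})
m1 = {frozenset({"alice"}): 0.6, frame: 0.4}
m2 = {frozenset({"alice", "bob"}): 0.7, frame: 0.3}
print(belief(dempster_combine(m1, m2), frozenset({"alice"})))  # 0.88 (up to rounding)
```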

Matteo Cristani, Università degli Studi di Verona

Legal Coding as a Software Engineering problem

Recent developments in legal-reasoning research have made room for the notion of legal coding or, in a more comprehensive vision of the legal design process, Machine Readability by Design for legal texts. Although many scholars have proposed conceptualizations of this issue, there are still few investigations into effectively measuring the effort required to provide a formal coding of a legal text, or into a meaningful comparison with the alternatives, including automated translation and the use of Generative AI.

In this discussion we illustrate the results of a set of experiments aimed at providing a complete picture of the possible approaches and a clear decision tree for the different choices to be made, based on rational and experimentally valid engineering methods. These efforts have yielded a form of measurement very similar to those applied in the current software engineering literature, especially with regard to agile approaches, and this allows us to envision what we call an agile approach to legal coding, in particular within the by-design paradigm.

Lara Lawniczak, Universität Bamberg

Towards Automated Ethical Reasoning: Representing the AI Act in Higher-Order Logic

This talk presents my master's thesis, which explores the feasibility of representing the European Union’s AI Act in Higher-Order Logic (HOL) to enable automated ethical reasoning. By analyzing the AI Act's modalities, the thesis employs existing embeddings of Standard Deontic Logic (SDL) and Dyadic Deontic Logic (DDL) to model Obligations and Contrary-to-Duty Obligations in Isabelle/HOL. To address more complex concepts such as Agency and Agentive Obligations, embeddings of Temporal Deontic STIT logic and extended variants of DDL are introduced. The strengths and limitations of these logical embeddings are evaluated by formalizing sections of the AI Act and leveraging Isabelle/HOL's automated theorem provers, satisfiability modulo theories solvers, and model finders. The findings offer insights into the challenges of formalizing legal reasoning in HOL, shedding light on the capabilities and current limitations of automated reasoning tools in this complex legal context.
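
To illustrate the kind of structure that motivates moving beyond SDL to DDL, here is a standard contrary-to-duty pattern rephrased in an AI Act-like setting (the propositions are illustrative, not taken from the thesis):

```latex
% A contrary-to-duty set: a primary obligation, a secondary obligation
% conditional on its violation, and the fact that the violation occurred.
\[
  O(\neg \mathit{deploy\_noncompliant}), \qquad
  O(\mathit{notify\_authority} \mid \mathit{deploy\_noncompliant}), \qquad
  \mathit{deploy\_noncompliant}
\]
% Representing the conditional obligation in SDL runs into the familiar
% contrary-to-duty paradoxes; DDL's dyadic operator O(- | -) keeps such a
% set consistent, which is why the thesis employs it for provisions of this shape.
```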

Silvano Colombo Tosatto, CSIRO

Proving Regulatory Compliance: From Theoretical Complexity to Some (maybe applicable) Solutions

In this presentation I go over the problem of proving regulatory compliance of process models. The problem consists of verifying whether the executions of a process model comply with a given set of regulatory requirements. The presentation mainly focuses on proving full compliance, where the aim is to verify that every possible execution of the model satisfies the given requirements.

Starting with an overview of the current understanding of the computational complexity of the problem, the presentation briefly discusses some intuitions about which features of the problem contribute most to its complexity. After this more theoretical discussion, the presentation introduces and discusses two approaches to proving full compliance of process models. One focuses on solving a variant of the problem in P through a structural analysis of the process model; the other tackles a variant of the problem in NP and adopts a brute-force approach to provide a solution.
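
As a rough illustration of what a brute-force check of full compliance can look like (a minimal sketch with an invented toy process and requirement, not the approaches discussed in the talk):

```python
from itertools import permutations

# Toy process model: 'start', then an exclusive choice between two review
# tasks, then 'collect_data' and 'obtain_consent' in either order, then 'end'.
def executions():
    for choice in ("manual_review", "auto_review"):
        for block in permutations(("collect_data", "obtain_consent")):
            yield ("start", choice) + block + ("end",)

# Regulatory requirement: consent must be obtained before data is collected.
def complies(trace):
    return trace.index("obtain_consent") < trace.index("collect_data")

# Full compliance = every execution satisfies the requirement (brute force).
print(all(complies(t) for t in executions()))  # False: collecting before consent violates it
```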

Guido Governatori, Central Queensland University

Weak Permission is not Well Founded - Watch the recording

A weak permission is a permission in force when we fail to obtain the obligation to the contrary. In situations where two obligations conflict, we cannot definitively endorse either one; thus, we can conclude that the content of each obligation is weakly permitted. However, Well-Founded Semantics is unable to derive these weak permissions. Considering the relationships among Well-Founded Semantics, Grounded Semantics, and Stable Semantics, we find that none of these frameworks can adequately capture the concept of weak permission. In the second part, we introduce a variant of Defeasible Deontic Logic that effectively addresses weak permissions resulting from deontic conflicts and allows for various methods of propagating conflicts to different conclusions.
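
A minimal sketch of the pattern described above, in defeasible-deontic notation (illustrative; the talk's formal setting may differ in detail):

```latex
% Two conflicting defeasible obligation rules, neither stronger than the other:
\[
  r_1 :\ \Rightarrow O p \qquad r_2 :\ \Rightarrow O \neg p \qquad r_1 \not> r_2,\ \ r_2 \not> r_1
\]
% Neither O p nor O \neg p is derivable, so the obligation to the contrary fails
% on both sides and each of p and \neg p should count as weakly permitted;
% this is the conclusion that, as the talk argues, Well-Founded Semantics fails to deliver.
\[
  \neg\,\mathsf{derivable}(O \neg p) \ \leadsto\ P_w\, p \qquad
  \neg\,\mathsf{derivable}(O p) \ \leadsto\ P_w\, \neg p
\]
```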


Oct 16 Wednesday

Xavier Parent, TU Wien

Defeasible conditional ought in 2-D

I look at an open problem raised by Horty regarding the ability of the preference-based approach to conditional obligation to model defeasible reasoning. I begin by summarizing his objections and then present a semantic account designed to address them. Nonmonotonicity is obtained by parametrizing the consequence relation with a set of conditionals, referring, e.g., to the normative system under consideration. A two-dimensional (2-D) semantics is employed, distinguishing between ideality and normality in the models, thereby reviving an older proposal in deontic logic. In a second step, the account is extended to support conflict resolution through a partial ordering on the set of obligations. A connection is established with some existing accounts, including so-called constrained input/output (I/O) logic, an existing standard for normative reasoning based on a different methodology.
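
For orientation, the preference-based truth condition that Horty's objections target, in standard notation (the 2-D account presented in the talk refines this by separating ideality from normality):

```latex
% Hansson-style preference semantics for conditional obligation:
% the best A-worlds (w.r.t. a betterness ordering) are B-worlds.
\[
  M, w \models \bigcirc(B/A)
  \quad\text{iff}\quad
  \mathrm{best}_{\succeq}(\|A\|) \subseteq \|B\|
\]
```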

Luca Redondi, Ruhr-Universität Bochum

A Formal Model for Constructing Reasons and its Application to the Study of Accrual - Watch the recording

It is common to frame practical deliberation as the practice of weighing reasons; yet little attention is typically paid to the reasoning processes that allow us to construct these reasons in the first place. These are, however, at the core of practical inference. For instance, suppose I am deciding whether to go jogging or not. I note that the sky is cloudy and conclude that I have a reason not to jog. How did I reach this conclusion? A plausible explanation is that clouds indicate that it might rain (1), rain causes discomfort (2), and discomfort is bad (3) and therefore to be avoided. In order to do justice to the complexity of such inference patterns, I present a formal model of practical reasoning that allows one to reason with defaults (item 1), relate facts to values (item 2), and associate values with a polarity (item 3). These resources are employed to settle an inquiry into which reasons obtain before turning to the evaluation of which of them prevail. Such a division of labour opens an original perspective on some well-known puzzles about the accrual of reasons. I target two problems in particular: 1) in some cases, the accrual seems not to be stronger than its components taken singly; 2) sometimes the accrual apparently switches polarity w.r.t. its components. The formal model expands on and inherits some of the benefits of the DAC-system of sequent-based argumentation (van Berkel, Straßer, Zhou, COMMA 2022, 2024).
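
Schematically, the jogging example can be reconstructed along the following lines (a hedged rendering; the talk's formal language may differ):

```latex
% The three resources at work in constructing the reason against jogging.
\begin{align*}
  &\text{(1) default:}          && \mathit{cloudy} \Rightarrow \mathit{rain}\\
  &\text{(2) fact--value link:} && \mathit{rain} \rightarrow \mathit{discomfort}\\
  &\text{(3) polarity:}         && \mathit{discomfort} \text{ is bad}\\
  &\text{hence:}                && \mathit{cloudy} \text{ is a (defeasible) reason against jogging}
\end{align*}
```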

Eliot Watkins, Ruhr-Universität Bochum

Reasons and Inheritance

According to the standard Kratzerian semantics for deontic modals – the orthodox semantics in linguistics – ought p is true iff (roughly speaking) all the relevantly best worlds are p-worlds. The standard semantics validates the principle of Inheritance: whenever p entails q, ought p entails ought q.
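
In symbols, roughly (f is the modal base and g the ordering source, as is usual in this framework):

```latex
% Kratzer-style truth condition for 'ought', and the Inheritance principle
% it validates.
\[
  \llbracket \mathrm{ought}\ p \rrbracket^{w} = 1
  \quad\text{iff}\quad
  \mathrm{best}_{g(w)}\big(\textstyle\bigcap f(w)\big) \subseteq \|p\|
\]
\[
  \text{Inheritance:}\quad \text{if } p \models q \text{, then } \mathrm{ought}\ p \models \mathrm{ought}\ q
\]
```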

In this talk I produce a case against the standard semantics by looking to the connections between oughts and reasons. For example, if I ought to drive on the left, then there must be a reason for me to drive on the left. If I ought to drive on the left, then my reasons to drive on the left must be stronger than my reasons to drive on the right.

That the standard semantics appears to be silent on the connections between oughts and reasons is old news. But I think that advocates of the standard semantics are in a tighter spot than they might realise: I’m going to argue that the standard semantics not only fails to capture these connections, it’s actually incompatible with them.

The culprit is Inheritance. I’m going to make a case for the idea that reasons do not transmit over entailment: when Φing entails Ψing, reasons to Φ needn’t be reasons to Ψ. Reasons to drive on the left needn’t be reasons to drive on the left or right. Good reasons to drive on the left needn’t be good reasons to drive. My conclusion is that taking the connections between reasons and oughts seriously requires endorsing a semantics for ‘ought’ that doesn’t validate Inheritance.

Blaž Istenič Urh, TU Wien

What is there to explain? Some initial remarks on deontic explanations

When reasoning about normative matters, we often do not just want to know that a certain norm-related fact holds but also why. This, in a nutshell, is what deontic explanations are about. This talk will present a general picture of the different possible senses of deontic explanations, along with some initial remarks on how they could be modeled in a logical setting. In particular, one might want to explain why a norm holds by employing system-internal means, or one might want to know what property of the system is responsible for that fact, possibly in order to be able to compare it to other systems. In addition, explanations can also be system-external: to explain a norm can mean to provide the reasons why it is reasonable, possibly through the eyes of the norm-giver or the norm-receiver, but sometimes also to appeal to the "actual" reasons why that norm holds. The talk will address which of those senses can be most straightforwardly treated by the available logical systems, and provide some initial definitions of (types of) deontic explanations applied to particular cases. Finally, some connections between deontic explanations and work in explainable AI will also be discussed.


Oct 17 Thursday

Clayton Peterson, Univ. du Québec à Trois-Rivières

Ethics and automation: Beyond value-ladenness

Artificial moral agency is usually conceived on the grounds of Moor’s distinction between ethical impact agents (i.e., agents whose actions bear ethical consequences), implicit ethical agents (i.e., agents whose ethical constraints and principles are imposed beforehand in programming and design), explicit ethical agents (i.e., agents able to represent ethics and make choices on the grounds of this knowledge) and full ethical agents (i.e., agents able to make explicit ethical judgments and justify them). Based on the idea that the objectivity and rationality of ethical decisions can be reached through the automation of ethical reasoning, various attempts at defining ethical agents meant to solve ethical dilemmas have been proposed in the literature. On these grounds, and with respect to specific ethical theories, some scholars have considered these automated decision procedures sufficient to produce ethically justified choices. However, recent results show that automated choices can be predetermined by technical considerations that, arguably, are not value-laden, thus casting doubt on the sufficiency of automating ethical reasoning as a solution to ethical dilemmas.

Tianwen Xu, Zhejiang University

Norm Identification through Reflective Equilibrium

Norms are one of the important elements that agents need to consider in decision-making, besides their beliefs, desires and intentions. The problem of norm identification considers open multi-agent systems in which no central authority imposes or proposes norms, and asks how agents can identify norms that are already prevalent therein. Existing techniques for norm identification mostly rely on big data and machine learning, making them vulnerable to normative inconsistency and unable to derive a moral defense for an agent's decisions based on the identified norms. Instead, this work proposes a logical approach to norm identification, drawing inspiration from the philosophical method of 'reflective equilibrium'. Under this approach, norm identification is regarded as identifying a coherent normative theory from potentially inconsistent normative observations. A normative theory is then represented as an argumentation theory, constructed incrementally and revised accordingly to maintain consistency. With such a theory, an agent can generate consistent moral judgments in situations where the theory applies, and derive arguments for its judgments. This talk is based on joint work with Jieting Luo.

Joris Graff, Universiteit Utrecht

Numeric Default Logic: A Reason-Based Framework for Automatised Moral Decision-Making

Increasing use of artificial intelligence in domains where morally salient decisions are frequent has led to calls for AI systems that can simulate moral reasoning. This presentation proposes Numeric Default Logic (NDL), a framework for modelling moral reasoning in a way that lends itself to computational implementation. The framework is based on the idea, prominent in moral philosophy and meta-ethics, that moral decision-making is primarily a matter of weighing different moral reasons. NDL represents these moral reasons as the premises of defaults, i.e. defeasible reasoning steps linking moral reasons to ought-statements representing moral decisions. In contrast to earlier work modelling reasons as the premises of defaults, due to John Horty, NDL assigns numeric values to moral reasons, defaults, and ought-statements. This allows for a more fine-grained representation of the interaction between different moral reasons. As a result, NDL is able to model certain common patterns of moral (or otherwise practical) reasoning in an intuitive way, including cases where moral reasons 'add up' and cases where moral reasons instead weaken each other. Moreover, the representation of NDL in a graph structure allows the strength of reasons to be learned through backpropagation methods, with promising implications for application to ethical AI.
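
A purely illustrative sketch of the weighing idea (invented names and numbers; this is not the NDL calculus itself, which attaches values to reasons, defaults, and ought-statements in a more structured way):

```python
from dataclasses import dataclass

@dataclass
class Default:
    reason: str    # the triggering moral reason
    ought: str     # the ought-statement the default supports
    weight: float  # numeric strength of the defeasible step

def verdict(defaults, pro, con):
    """Compare the accumulated weight behind two competing ought-statements."""
    score = lambda o: sum(d.weight for d in defaults if d.ought == o)
    return pro if score(pro) >= score(con) else con

defaults = [
    Default("promise made", "ought to help", 0.6),
    Default("friend in need", "ought to help", 0.5),              # reasons 'add up'
    Default("prior commitment elsewhere", "ought not to help", 0.8),
]
# The two weaker pro-reasons jointly outweigh the single stronger con-reason.
print(verdict(defaults, "ought to help", "ought not to help"))    # 'ought to help'
```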

Alessandra Palmigiano, Vrije Universiteit Amsterdam

Obligations and permissions, algebraically, selfextensionally, and (inverse) correspondence-theoretically - Watch the recording

I will present a recent line of research which proposes subordination algebras and related structures as the algebraic semantics of normative systems and various types of permission systems in input/output logic. The connection between input/output logic and algebras paves the way to a systematic study of normative and permission systems in settings where reasoning is captured more accurately by nonclassical logics, facilitated by a general, powerful, and systematic availability of mathematical resources, techniques and insights. For instance, this connection gives access to correspondence and inverse-correspondence techniques well known from modal logic, by means of which modal characterizations can be obtained for classes of conditions on normative and permission systems, as well as for their interaction.

Robin Martinot, Universiteit Utrecht

How proof systems model philosophical properties of mathematical proofs: a case study

This talk concerns the relationship between mathematical proofs in practice and how formal systems represent these proofs. The formal systems we look at are natural deduction proof systems. Generally, the existence of a formalized proof is taken as a standard for the correctness of a proof (called the ‘standard view’ in the literature, see e.g. (Avigad, 2021)). But formal proofs are conceptually still far removed from informal proofs (proofs in the way they are actually carried out), and not much is known about their specific relation. For instance, the derivation-indicator view (Azzouni, 2004) says that a mathematical proof ‘indicates’ an underlying formalized derivation, but how exactly this should happen is unclear.

We will take a relatively high-level strategy to relate informal proofs to formalized derivations, by zooming in on a particular philosophical property that an informal proof possesses and seeing how it can be made precise in the formal setting. In particular, we will take the case study of ‘purity’ of proof (see (Martinot, 2024)). Formalized proofs that satisfy a precisified requirement for purity can be taken as corresponding more closely to pure informal proofs of the same theorem.

Purity is an ideal of proof that has a long history with roots in early writings of Archimedes (Detlefsen, 2008). It concerns the idea that a proof should only draw upon notions that belong to the content of the theorem. Impure proofs distinguish themselves from pure proofs by making use of concepts that are ‘extraneous’ to what a theorem is about. We will provide an ‘ontological’ interpretation of the content of a theorem, and we take a certain mathematical theory to be able to capture such an ontology. Natural deduction proofs using the axioms of this theory are then (fully) ontologically pure. Next, using definitional extensions and the interpretation translation of (Visser, 1997), we motivate extensions of ontological purity that render more natural deduction proofs pure. For instance, if we translate a pure natural deduction proof by means of the interpretation translation, we argue that the translated proof possesses a secondary level of ontological purity.

Ali Farjami, University of Luxembourg

LogiKEy Plus: A Toolkit for Kids in 2030 - Watch the recording

In this talk, I will present recent work on the algebraic approach to normative systems, focusing on the LogiKEy methodology. LogiKEy offers a versatile framework for designing logical systems with applications in artificial intelligence and normative systems. Our work extends these ideas, exploring the potential for modular deontic logics that can be applied in increasingly complex domains by 2030. I will highlight recent algebraic and topological advancements that extend the boundaries of structured, customizable input/output logical systems, with a focus on future applications such as automated ethical reasoning. I will use the example of toy design as a tangible way to explore normative systems, rather than ethical and legal theories. These toys could serve as interactive tools, introducing normative concepts in an approachable and engaging manner. The discussion will reflect on how LogiKEy can shape the architecture of adaptive and autonomous systems in the years ahead.

Kees van Berkel, TU Wien

Deontic Explanations Through Dialogue and 5 Key Challenges

Deontic explanations answer why-questions concerning agents’ obligations (permissions, rights, etc.). Normative systems are notoriously conflict-sensitive, making contrastive explanations pressing: “Why am I obliged to do 𝜙, despite my (seemingly) conflicting obligation to do 𝜓?” In this talk, I present a model of contrastive explanatory dialogues for the well-established defeasible reasoning formalism Input/Output logic. The work is a collaboration with Christian Straßer (published at ArgXAI 2024). We harness recent results on the correspondence between I/O reasoning and formal argumentation to leverage existing dialogue methodology for modelling contrastive deontic explanations. Our model distinguishes between successful, semi-successful, and unsuccessful deontic dialogues by identifying four subdialogues: i) explaining the claim, ii) rejecting the foil, iii) justifying the contrastive link, and iv) demonstrating the explainer’s coherence. This work is the first of its kind, and for this reason I will highlight and discuss five key future challenges for deontic explanations through dialogue.

Marija Slavkovik, University of Bergen

Is the governance of machine agents a social choice problem? - Watch the recording

To be effective, artificial agents need to reason with norms and align with values. Norms are not constraints, in the sense that an agent may choose to disobey them. Values are underspecified, context-dependent, and possibly mutually contradictory in a given choice an agent is considering. Both of these tasks are hard to carry out computationally, even when it is clear what the desires and goals of the agent are. Artificial agents are not individuals. They are not fully autonomous, because people determine their desires. They are also not unique: a type of agent serves or operates on behalf of many people. Thus we face the problem: who decides the morals and norms of an artificial agent? This talk reflects on the possibility of using social choice, elections if you will, to decide the desires, values, and goals of an artificial agent. What can be accomplished by social choice, and what can possibly go wrong in using it?
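
A minimal illustration of the social-choice angle (invented example, not from the talk): stakeholders rank candidate values, and a Borda count decides which value the agent should prioritise.

```python
from collections import Counter

def borda(rankings):
    """Aggregate strict rankings (best first) with the Borda rule."""
    scores = Counter()
    for ranking in rankings:
        for position, value in enumerate(ranking):
            scores[value] += len(ranking) - 1 - position
    return scores.most_common()

stakeholders = [
    ["privacy", "transparency", "efficiency"],
    ["efficiency", "privacy", "transparency"],
    ["privacy", "efficiency", "transparency"],
]
print(borda(stakeholders))  # privacy comes out on top for this profile
```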