The First Luxembourg Workshop on Epistemology & AI




June 28-29, 2022, University of Luxembourg


General Information

Artificial Intelligence technologies are becoming more and more entrenched in our lives, affecting us at both societal and individual levels. This gives rise to a number of epistemological questions, for instance: How do AI technologies connected to the internet and social media change the way we form beliefs and gain or lose knowledge? Is reliance on AI support tools any different from the more familiar forms of relying on testimony? How do AI technologies help propagate fake news? Can they be used to combat fake news?

Other epistemological questions, or questions that have a substantial epistemological component, concern contemporary AI technologies themselves, for instance: In what sense do machine learning algorithms learn, and what is the rational way to learn from evidence? What is a black-box algorithm, and how does it differ from one that is not? What does it mean to make a black-box algorithm's decisions transparent? How are trade-offs between transparency and accuracy to be resolved?

The workshop aims to provide an interdisciplinary forum to facilitate progress on these and related questions, as well as other questions where AI and epistemology intersect (e.g., how ideas from AI might be used to tackle foundational questions in epistemology).


Location


University of Luxembourg, Campus Belval
Maison du Savoir, Room 4.520



Schedule

Abstracts

KONSTANTIN GENIN (Tübingen)
Randomization, Causal Discovery and Individualized Treatment

Increasingly, algorithmic interventions are being held to the same standards of justification as traditional clinical or policy interventions. Accordingly, there is a call to evaluate algorithmic interventions with the “gold standard” methodology: randomized (clinical) trials. At the same time, alternative approaches to causal discovery are emerging from machine learning, economics, epidemiology and allied fields. In this talk, I revisit the methodological and ethical justification of randomized trials in light of recent developments. What I call the “tragic view” of clinical research ethics holds that randomization is both (1) necessary to secure some crucial epistemic good and (2) incompatible with the ethical requirement of individualized medical treatment. However, existing methodological justifications of randomization primarily argue that randomization is sufficient for securing some crucial epistemic good. Moreover, new methodologies for causal discovery are also sufficient for securing the same good and are less hostile to individualized treatment. I argue that, in light of these results, the “tragic view” must be reevaluated. Either randomization must be shown to be necessary for securing some epistemic good that alternatives fail to secure, or it must be abandoned.



SARA MANN (Dortmund)

Understanding via Exemplification and XAI

Artificial intelligence (AI) systems have proven to be efficient tools in numerous contexts, including high-stakes scenarios such as autonomous driving or medical diagnosis. Several of these application contexts involve image classification. Since many AI systems used for this task are considered to be opaque, research in explainable artificial intelligence (XAI) develops approaches that aim at rendering their inner workings understandable, not only to computer scientists, but also to domain experts and other AI novices interacting with the system. In this talk, I will focus on this group of AI novices. I show that Catherine Elgin’s work on exemplification offers a useful framework in this context. An effective example provides epistemic access to contextually relevant facts by exemplifying features it shares with its target. By looking through the lens of exemplification, we can evaluate the extent to which XAI approaches aiming at explaining image classification are able to induce understanding of why an image is classified as belonging to a particular class. Based on these insights, I suggest drawing a conceptual distinction between samples, which are any images instantiating the exemplified feature(s), and exemplars, which are idealized visualizations intentionally designed to emphasize only those features we want to exemplify in a given context. I argue that current XAI methods usually provide us with samples. In those rare cases where exemplar-like visualizations are provided, these are either difficult to interpret or fail to convey potentially relevant information. I thus propose placing more research emphasis on XAI approaches that generate exemplars, which rectify these shortcomings by being specifically tailored to convey understanding in a given context.



TYLER MILLHOUSE (Santa Fe Institute)

Structure, Not Content

Over the last decade, deep neural networks have accumulated a dramatic list of achievements and have come to dominate a diverse set of machine learning domains, from computer vision and natural language processing to recommendation systems and game playing (Goodfellow et al., 2016). At the same time, deep neural networks have presented AI researchers with unprecedented challenges arising from their notorious opacity and inscrutability. Such problems have motivated a rapidly growing literature on explainability in deep neural networks and new tools for discovering what deep networks have learned and how they apply this knowledge. While explainability is an extremely important goal, we must be realistic about the near-term prospects for achieving it, especially where the content of neural network representations is concerned. As I will argue, we can learn a great deal through the careful analysis of neural network architectures. Understanding the subtleties of these architectures can shed considerable light on the nature of important cognitive tasks and the strategies neural networks use to solve them. I will further argue that these lessons are not only robust to a variety of discoveries we might make about neural network representations but are also much more likely to generalize to other machine learning models and to biological neural networks.



ANNA-MARIA EDER (Cologne)

Compromise and Evidence (TALK CANCELLED)

How do members of a group reach a rational epistemic compromise on a proposition when they have different (rational) credences in the proposition? I answer the question by suggesting the Fine-Grained Method of Aggregation. I show how this method successfully handles challenges that beset the standard method of aggregation, Weighted Straight Averaging. Finally, I discuss how my results are relevant to AI.
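
For orientation, Weighted Straight Averaging, the standard method referred to above, is usually formulated as taking the group credence in a proposition p to be a weighted mean of the members' individual credences (the notation here is illustrative, not taken from the talk):

    c_G(p) = w_1 c_1(p) + w_2 c_2(p) + ... + w_n c_n(p),

where c_i(p) is member i's credence in p and the weights w_i are non-negative and sum to 1.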



CHRISTOPH SCHOMMER (Luxembourg)

Interpreting analysis results

Today, more than ever, AI is understood as the application-related symbiosis of machine learning methods with the field of data science. Data is collected, processed, analysed and finally evaluated. It can be observed that the evaluation and selection of the results, as well as their interpretation, are primarily based on the use of statistical values such as Precision, Recall, F-measure, agreement values, and others. A lack of explainability in connection with deep learning methods is accepted. Against this background, will humans still be needed in the future to carry out these symbiotic and complex processes? Can the entire process of data analysis, including the interpretation of the results and the presentation of the findings, be automated and placed in the hands of artificial systems?
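
As a minimal illustration of the statistical values mentioned above (a generic sketch in Python, not code from the talk), precision, recall, and F-measure are computed from counts of true positives, false positives, and false negatives:

    def precision(tp, fp):
        # Fraction of positive predictions that are actually correct.
        return tp / (tp + fp) if (tp + fp) else 0.0

    def recall(tp, fn):
        # Fraction of actual positives that the system recovered.
        return tp / (tp + fn) if (tp + fn) else 0.0

    def f_measure(tp, fp, fn):
        # Harmonic mean of precision and recall (the F1 score).
        p, r = precision(tp, fp), recall(tp, fn)
        return 2 * p * r / (p + r) if (p + r) else 0.0

    # Example: 80 true positives, 20 false positives, 10 false negatives.
    print(precision(80, 20), recall(80, 10), f_measure(80, 20, 10))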



SILVIA MILANO (Exeter)

Epistemic obstacles to evaluating recommendations (TALK CANCELLED)

Abstract to be added



PETER BRÖSSEL (Ruhr University Bochum)

Confirmation and the Pitfalls of Ignorance

The standard approach to Bayesian confirmation theory (BCT) is built upon the two tenets of Bayesian epistemology: (i) rational epistemic states can be identified with probabilistic degrees of belief, and (ii) rational learning, i.e., the updating of one's epistemic state, is regulated by some form of conditionalization. In line with these tenets, BCT holds that the rational degrees of belief in some hypothesis in the light of the evidence are a function of (i) the a priori degree of belief in the hypothesis and (ii) the degree of confirmation of that hypothesis in the light of the evidence. This approach to understanding confirmation has been tremendously successful in philosophy of science. This is due, in part, to the many a priori arguments for its foundation, i.e., the two tenets of Bayesian epistemology, and, in part, to the many successful applications of BCT in answering various questions in philosophy of science as well as in neighbouring fields such as cognitive science and computer science. Despite the success of standard Bayesian confirmation theory, various lines of criticism have been raised. According to the first line of criticism, BCT does not allow us to adequately represent rational scientists' epistemic states. Rational scientists are permitted not to take a stance on the question of whether a piece of evidence confirms, disconfirms, or is independent of a hypothesis. The second line of criticism concerns the learning dynamics of BCT. According to it, a scientist's a priori degrees of belief should not dominate her epistemic life. However, so the criticism goes, this is exactly what BCT requires. The present paper replaces the standard approach to BCT with a higher-order approach: the resulting theory I call higher-order Bayesian confirmation theory (HOBCT). Obviously, the proposed account is supposed to solve all of the mentioned problems ;).
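
For readers who want the two tenets in symbols (a standard textbook formulation; the difference measure below is only one common choice of confirmation measure): on learning evidence E, conditionalization requires the new degree of belief in a hypothesis H to be

    P_new(H) = P(H | E) = P(E | H) P(H) / P(E),

and on the difference measure of confirmation,

    c(H, E) = P(H | E) - P(H),

E confirms H when c(H, E) > 0, disconfirms H when c(H, E) < 0, and is confirmationally irrelevant to H when c(H, E) = 0.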



DANIELA SCHUSTER (Konstanz)

Suspension in Machine Learning Systems (TALK CANCELLED)

A key question concerning the appropriate attribution of the notion of artificial intelligence is to what extent artificial systems can act autonomously and make decisions by themselves. In this talk, I want to focus on a largely neglected aspect of decision-making competence: the capability of actively refraining from deciding. I will introduce different Machine Learning (ML) models that belong to a research area in computer science called “Abstaining Machine Learning” and categorize them into two different classes. Next, I will relate this debate to the current epistemological debate about suspension of judgment. Most scholars in this field argue that suspending is a more elaborate form of doxastic neutrality that is not to be identified with other forms such as mere non-belief. In exploring whether the different classes of abstaining ML correspond to forms of neutrality in epistemology, I will argue that only one class of models potentially qualifies as meeting the higher standards for suspension of judgment. I argue that these findings show that the performance of ML systems with respect to their independence in refraining from deciding can serve as a useful indicator for evaluating their level of autonomy.
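
To give a concrete, if very simple, picture of what an abstaining system does, here is a generic reject-option sketch in Python (illustrative only, not one of the models discussed in the talk): the classifier withholds its verdict whenever its confidence falls below a threshold.

    import numpy as np

    def predict_or_abstain(class_probs, threshold=0.8):
        # Return the index of the most probable class, or None (abstain)
        # when the top probability does not reach the confidence threshold.
        top = int(np.argmax(class_probs))
        return top if class_probs[top] >= threshold else None

    print(predict_or_abstain(np.array([0.05, 0.90, 0.05])))  # 1 (a confident prediction)
    print(predict_or_abstain(np.array([0.40, 0.35, 0.25])))  # None (the system abstains)

Whether such threshold-based abstention meets the higher standards for suspension of judgment, or amounts only to a weaker form of neutrality, is the kind of question the talk takes up.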



NANCY ABIGAIL NUÑEZ HERNÁNDEZ (Czech Academy of Sciences)

Explaining AI through Justification Logic

One of the main concerns regarding the rapid development of artificial intelligence (AI) is its ability to generate explanations, not only about its functioning but also about its application domain. For example, the widespread use of models developed using machine learning in scientific research has been widely debated because of the opacity inherent to these models, which makes them fall short when it comes to explaining and enhancing our understanding of the researched phenomena. Given the need to find alternative ways to overcome these concerns, in this talk I will propose using justification logic to model and develop explanations based on decision tree algorithms, which are crucial to most machine learning techniques. Thus, this talk aims to show how an AI model (i.e., machine learning), commonly referred to as a “black box”, has explicative features that can be exploited to enhance our understanding of AI and foster our trust in it. In this way, this proposal will allow us to understand AI's potential to offer explanations and not only ad hoc solutions to practical problems. Moreover, using justification logic to model and develop explanations can offer new insights into, and constraints on, the broader philosophical discussion of scientific explanation.
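
For readers unfamiliar with the formalism (a standard illustration, not specific to the proposal above): justification logic replaces the box operator of modal logic with explicit justification terms, so that t : F reads “t is a justification for F”, and the application axiom

    s : (F → G) → (t : F → (s · t) : G)

says that a justification s for an implication can be combined with a justification t for its antecedent into a compound justification s · t for the consequent. It is this explicit term structure that makes the framework a candidate for representing step-by-step explanations, such as the paths of a decision tree.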



GREGORY WHEELER (Frankfurt)

Visualizing high-dimensional loss landscapes

Analyzing geometric properties of high-dimensional loss functions, such as local curvature and the existence of other optima around a certain point in loss space, can help provide a better understanding of the interplay between neural network structure, implementation attributes, and learning performance. But one may ask: how do the curvature properties in low-dimensional visualizations of high-dimensional functions depend on the curvature properties of the original, high-dimensional loss space? One approach currently in use is to take random two- and three-dimensional loss projections, but we show that in general this method fails to meaningfully represent convexity and concavity properties of high-dimensional loss functions. We instead propose to project loss functions along dominant negative and positive Hessian directions to provide a clearer visual understanding of saddle information in high-dimensional spaces, and we lay out a method to efficiently extract Hessian trace information from polynomials that have been fitted to one-dimensional random loss projections. We demonstrate the advantages of this approach through numerical experiments on image classifiers with upwards of 7 million parameters. (Joint work with Lucas Böttcher.)
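
The contrast at issue can be sketched in a few lines of Python (a toy example under simplifying assumptions, not the authors' code): compare one-dimensional slices of a loss function along a random direction and along a dominant Hessian direction obtained by power iteration on Hessian-vector products.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy quadratic loss on R^20 standing in for a network's loss surface;
    # its Hessian is simply the symmetric matrix H. For a real network,
    # Hessian-vector products would come from automatic differentiation.
    H = rng.normal(size=(20, 20))
    H = (H + H.T) / 2

    def loss(theta):
        return 0.5 * theta @ H @ theta

    def dominant_hessian_direction(hess_vec_prod, dim, iters=100):
        # Power iteration using only Hessian-vector products.
        v = rng.normal(size=dim)
        for _ in range(iters):
            v = hess_vec_prod(v)
            v /= np.linalg.norm(v)
        return v

    theta0 = rng.normal(size=20)
    alphas = np.linspace(-1.0, 1.0, 21)

    d_rand = rng.normal(size=20)
    d_rand /= np.linalg.norm(d_rand)
    d_hess = dominant_hessian_direction(lambda v: H @ v, dim=20)

    # One-dimensional loss slices: random vs. Hessian-aligned direction.
    slice_rand = [loss(theta0 + a * d_rand) for a in alphas]
    slice_hess = [loss(theta0 + a * d_hess) for a in alphas]

Plotting the two slices against alphas shows how much curvature information a random projection can hide compared with a Hessian-aligned one.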

Registration and Further Questions

The workshop is open to members of the University of Luxembourg and free of charge.

If you plan to participate, we’d appreciate it if you sent a short email to lux.epistemology.ai@gmail.com.

For any further questions, please contact Aleks Knoks (Computer Science) or Thomas Raleigh (Philosophy).