On 20 October 2022, a team from Algoright gave a contribution as guests at the Digital Workshop "Reflection on intelligent systems: towards a cross-disciplinary definition" of the Interchange Forum for Reflecting on Intelligent Systems (IRIS) in Stuttgart — and on our own behalf: we described and discussed our understanding of our role as an interface between science and society with regard to intelligent systems.

We thoroughly enjoyed the discussion and look forward to exchanging ideas with the people at IRIS again soon. The following extended abstract, with which we applied for our talk, seemed too good to leave in the drawer. After all, it neatly sums up what we do, how we work, and why that serves society.

We look forward to feedback and input, gladly via Twitter or by email to info@algoright.de.


On the Role of the Enablers of Society-Wide Critical Reflection on Intelligent Systems

What digitization means (to us)

Digitization is permeating society in many ways and in a wide variety of dimensions. In a narrow sense, digitization is just the process of making something digital that previously was non-digital. In the broader, more interesting sense it can also be taken as the transformation towards a more digital society, which includes a more prevalent use of artificial systems, an increasing reliance on digital technologies, and a shift towards relying on decisions made by computers in many areas of society. It is this broad sense of digitization that we are talking about.

As so often, opportunities and risks go hand in hand. Continuous critical reflection and constant readjustment are therefore required in order to shape the process in an ethically and socially acceptable manner, sustainably, and in accordance with shared values. In enlightened, free, and democratic societies, this shaping of digitization must be a process involving society as a whole and, consequently, must not be restricted to experts, scientists, and developers. To this end, digitization requires digital skills and fundamental digital literacy across society.

Intelligent systems and society

However, understanding of intelligent systems is more than just a democratically required prerequisite for the acceptable use of intelligent systems as part of our everyday world. It also alleviates fears, enables control, and promotes feelings of security. Knowledge and understanding are thus prerequisites for the sustainable acceptance of such systems as well. Furthermore, they enable us as a society to decide collectively, in an instrumentally reasonable way, about the allocation of resources, of money, and of focus – that is, they put us in the (epistemic) position to decide competently what kind of technological progress best suits our interests. Do we want autonomous vehicles at all? Do we want automated assessments in application processes, whether in the private sector or in higher education? Do we want evaluation of surveillance camera footage that is supported by artificial intelligence (AI)?

As if it were not difficult enough to convey such knowledge and understanding to the public with respect to digitization in general, the subject of intelligent systems is a particularly challenging one in that regard. Due to their additional complexity, due to the learning components that are often involved, due to the sometimes inherently opaque nature of the models, and due to their autonomy (in a purely technical, non-philosophical sense), intelligent systems and their societal implications are even more difficult to understand, assess, and communicate than other aspects of digitization. At the same time, they visibly affect high-risk contexts.

The need for learning processes

Both digitization in general and AI in particular are advancing too quickly to leave the task of conveying the relevant knowledge and understanding to schools and other established educational institutions. The underlying processes cannot simply be frozen for a decade or two – which would be necessary if we wanted to rely on school-based education alone. This raises questions about the implementation of competency-based approaches in formal education (How do we want to learn in “controlled slower systems”?), and about a stronger interweaving with informal education (How do we learn outside educational institutions?), both to meet the demand for lifelong learning and to consider the dynamics of content.

Consequently, a transfer of the corresponding knowledge from universities, as well as from the research and development departments of private companies, into society must be established. This refers not only to individual citizens, but also to political and economic decision-makers, medical associations, health insurance companies, state media authorities, courts, ministries, parliaments, managers, chambers of labor, unions, and many other specific groups. They all decide on the application of intelligent systems and their regulation, either directly or indirectly. In terms of democratic theory, therefore, they all need the knowledge to be able to represent their interests competently. This task of imparting knowledge must be undertaken, to a large extent, by institutions, by certain societal agents. Let us call them the enablers of society-wide critical reflection.

Enablers of society-wide critical reflection

Obviously, such enablers have a number of tasks. They need to gain a holistic understanding of technically, socially, and ethically highly complex topics, and to translate these topics for a range of different target groups so that they can be communicated effectively through a variety of channels.

Another function that such enablers could and should plausibly fulfill is not quite so trivial. Since those enablers will involve experts from diverse fields – how else should the translation and preparation work succeed? – such a social institution can also work in the other direction and carry the questions, fears, expectations, etc. of the diverse social agents back into science. This in turn makes it possible to provide motivating and targeted offers that address these heterogeneous groups.

We at Algoright see ourselves as such an enabler and have come together for this very purpose. Algoright e.V. is a non-profit association under German law that sees itself as an interdisciplinary think tank for good digitization. Issues such as discrimination by unfair models, their ill-conceived application, the danger of pseudo-surveillance, and responsibility gaps are topics we address from different perspectives for various social players. Buzzwords such as “Responsible AI” and “Trustworthy AI” need to be filled with meaning, and the black-box issues need to be broken down into their various facets.

This requires not only a deep technical understanding, but also an understanding of social contexts and relationships, of how intelligent systems are embedded in an ever-growing number of applications and social contexts. Consequently, computer scientists, philosophers, mathematicians, psychologists, sociologists, pedagogues, didacticians, legal scholars, and scientists from other disciplines must work closely together and develop a common understanding of each other, if not a common language.

Our contribution is a brief account not only of our self-image and our organization, but above all of our processes as they have crystallized over the years and continue to do so. In this context, the critical reflection on intelligent systems is the focus of our activities in three respects: first, internally, as part of a process aimed at mutual understanding in which experts from a wide variety of fields share their perspectives with experts from other fields; second, externally, as part of science communication to enable critical debate within society; and third, as a bidirectional interface between science and society and, thus, as an important part of this critical debate itself. We hope to have lively and insightful exchanges with others engaged in the critical reflection on intelligent systems. We offer practical accounts and insight into our direct and indirect network in both the academic and the non-academic communities, from teaching ethics to computer scientists (through the Ethics for Nerds lecture) to making cyber-physical systems (CPEC SFB) and AI (EIS Project) understandable. At the same time, we welcome critical feedback and suggestions for improvement, regarding not only our internal processes and communication activities, but also our bidirectional interface function.