© 2024 by the authors; licensee oekom. This Open Access article is licensed under a Creative Commons Attribution 4.0 International License (CC BY).

TATuP 33/1 (2024), S. 3–3, https://doi.org/10.14512/tatup.33.1.03

published online: 15 March 2024

AI (artificial intelligence) systems steer cars, make medical diagnoses, support legal proceedings, and write texts of astonishing linguistic quality. To do this, AI systems must perceive their environment, distinguish relevant from less relevant elements, and make assessments on the basis of which subsequent actions are initiated.

When humans go through these steps, we speak of decisions. The ability to simulate such decisions and arrive at sound results, e.g., in road traffic, is probably the most striking technical innovation in the field of AI. Depending on one's perspective, it is fascinating, frightening, or both how quickly progress is being made and how decisions are increasingly delegated to AI systems.

Impact analyses and reflections in this field are challenging. Technology assessment (TA) not only has to analyze market potentials in different areas of application, develop scenarios of market penetration and possible consequences, and survey the perceptions of different groups and stakeholders, as is often the case; it is also confronted with ethical problems. These begin with the attribution and distribution of responsibility as soon as decisions are delegated to AI systems. They also include possible discrimination by algorithms and the question of which decisions should not be delegated to technical systems at all, e.g., in the case of military drones.

In 2023, the German Ethics Council took a stand and stated as a criterion for a positive assessment of AI that AI systems should extend human autonomy and freedom, not limit or even replace them. What this means in concrete terms must, of course, be spelled out for each area of application individually. Ethics alone is not enough for this; the respective empirical context must also be adequately considered. This is the inter- and transdisciplinary strength of TA, which understands the consequences of technology in general, and in the field of AI in particular, not simply as effects of the technology itself, but as the interplay of technical features and human behavior. As the Special topic of this TATuP issue shows, TA must therefore not narrow its focus to AI as a technology, but must take into account its interactions with human behavior.

Armin Grunwald

Institute for Technology Assessment and Systems Analysis, Karlsruhe Institute of Technology, Karlsruhe, DE (armin.grunwald@kit.edu)