https://www.tatup.de/index.php/tatup/issue/feed TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 2024-04-26T10:05:31+02:00 Dr. Ulrich Ufer ulrich.ufer@kit.edu Open Journal Systems <p>TATuP - Journal for Technology Assessment in Theory and Practice is peer reviewed and open access. Its scope covers the interdisciplinary scientific field of technology assessment in Europe, with a focus on Germany, Austria and Switzerland, and worldwide. The journal publishes contributions from fields of research related to technology assessment, such as systems analysis, risk assessment, practical ethics, sustainability and innovation studies, or foresight.</p> <p>TATuP is owned, funded, managed and edited by the <a href="https://www.itas.kit.edu/english/tatup.php" target="_blank" rel="noopener">Institute for Technology Assessment and Systems Analysis (ITAS)</a> at Karlsruhe Institute of Technology (KIT), on whose behalf it is published at regular intervals three times a year in printed and electronic form by the publishing house <a href="https://www.oekom.de/zeitschrift/tatup-8" target="_blank" rel="noopener">oekom - Gesellschaft für ökologische Kommunikation mbH, Munich.</a></p> <p><strong>Journal history:</strong> TATuP has been published continuously since 1992. Beginning with issue 2017/1-2, the journal was relaunched in cooperation with oekom verlag in Munich under its new name TATuP – Journal for Technology Assessment in Theory and Practice (ISSN 2568-020X, eISSN 2567-8833) as a peer-reviewed open access journal.</p> <p>TATuP welcomes contributions in English and German; all scientific articles include English-language abstracts.</p> https://www.tatup.de/index.php/tatup/article/view/7094 Editorial 2024-03-12T08:18:30+01:00 Armin Grunwald armin.grunwald@kit.edu 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Armin Grunwald https://www.tatup.de/index.php/tatup/article/view/7105 Artificial intelligence out of control?: Interview with Karl von Wendt 2024-03-15T06:39:18+01:00 Karl von Wendt vonwendt@yahoo.com Reinhard Heil reinhard.heil@kit.edu 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Karl von Wendt, Reinhard Heil https://www.tatup.de/index.php/tatup/article/view/7095 TA Focus 33/1 (2024): News for the TA community 2024-03-12T08:29:04+01:00 Jonas Moosmüller jonas.moosmueller@kit.edu 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Jonas Moosmüller https://www.tatup.de/index.php/tatup/article/view/7103 AI for decision support: What are possible futures, social impacts, regulatory options, ethical conundrums and agency constellations? 2024-03-13T14:50:52+01:00 Diana Schneider diana.schneider@isi.fraunhofer.de Karsten Weber karsten.weber@oth-regensburg.de <p>Although artificial intelligence (AI) and automated decision-making systems have been around for some time, they have only recently gained in importance as they are now actually being used and are no longer just the subject of research. AI to support decision-making is thus affecting ever larger parts of society, creating technical, but above all ethical, legal, and societal challenges, as decisions can now be made by machines that were previously the responsibility of humans. This introduction provides an overview of attempts to regulate AI and addresses key challenges that arise when integrating AI systems into human decision-making.
The Special topic brings together research articles that present societal challenges, ethical issues, stakeholders, and possible futures of AI use for decision support in healthcare, the legal system, and border control.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Diana Schneider, Karsten Weber (Special topic editors) https://www.tatup.de/index.php/tatup/article/view/7111 TATuPDates 33/1 (2024): News from the editorial office 2024-03-13T16:09:44+01:00 TATuP Redaktion redaktion@tatup.de 2024-03-15T00:00:00+01:00 Copyright (c) 2024 TATuP Redaktion https://www.tatup.de/index.php/tatup/article/view/7096 AI‑based decision support systems and society: An opening statement 2024-03-15T06:39:55+01:00 Diana Schneider diana.schneider@isi.fraunhofer.de Karsten Weber karsten.weber@oth-regensburg.de <p>Although artificial intelligence (AI) and automated decision-making systems have been around for some time, they have only recently gained in importance as they are now actually being used and are no longer just the subject of research. AI to support decision-making is thus affecting ever larger parts of society, creating technical, but above all ethical, legal, and societal challenges, as decisions can now be made by machines that were previously the responsibility of humans. This introduction provides an overview of attempts to regulate AI and addresses key challenges that arise when integrating AI systems into human decision-making. The Special topic brings together research articles that present societal challenges, ethical issues, stakeholders, and possible futures of AI use for decision support in healthcare, the legal system, and border control.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Diana Schneider, Karsten Weber https://www.tatup.de/index.php/tatup/article/view/7097 Cui bono? Judicial decision-making in the era of AI: A qualitative study on the expectations of judges in Germany 2024-03-15T06:39:51+01:00 Anna-Katharina Dhungel annakatharina.dhungel@uni-luebeck.de Moreen Heine moreen.heine@uni-luebeck.de <p>Despite substantial artificial intelligence (AI) research in various domains, limited attention has been given to its impact on the judiciary, and studies directly involving judges are rare. We address this gap by using 20 in-depth interviews to investigate German judges’ perspectives on AI. The exploratory study examines (1) the integration of AI in court proceedings by 2040, (2) the impact of increased use of AI on the role and independence of judges, and (3) whether AI decisions should supersede human judgments if they were superior to them. The findings reveal an expected trend toward further court digitalization and various AI use scenarios. Notably, opinions differ on the influence of AI on judicial independence and the precedence of machine decisions over human judgments. Overall, the judges surveyed hold diverse perspectives without a clear trend emerging, although a tendency toward a positive and less critical evaluation of AI in the judiciary is discernible.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Anna-Katharina Dhungel, Moreen Heine https://www.tatup.de/index.php/tatup/article/view/7098 AI and access to justice: How AI legal advisors can reduce economic and shame-based barriers to justice 2024-03-15T06:39:46+01:00 Brandon Long brlong@bgsu.edu Amitabha Palmer philosophami@gmail.com <p>ChatGPT&nbsp;– a large language model&nbsp;– recently passed the U.S. bar exam. 
The startling rise and power of generative artificial intelligence (AI) systems such as ChatGPT lead us to consider whether and how more specialized systems could be used to overcome existing barriers to the legal system. Such systems could be employed in either of the two major stages of the pursuit of justice: preliminary information gathering and formal engagement with the state’s legal institutions and professionals. We focus on the former and argue that developing and deploying publicly funded AI legal advisors can reduce economic and shame-based cultural barriers to the information-gathering stage of pursuing justice.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Brandon Long, Amitabha Palmer https://www.tatup.de/index.php/tatup/article/view/7099 Artificial intelligence and judicial decision-making: Evaluating the role of AI in debiasing 2024-03-15T06:39:42+01:00 Giovana Lopes giovana.figueiredo@unibo.it <p>As arbiters of law and fact, judges are supposed to decide cases impartially, basing their decisions on authoritative legal sources and not being influenced by irrelevant factors. Empirical evidence, however, shows that judges are often influenced by implicit biases, which can affect the impartiality of their judgment and pose a threat to the right to a fair trial. In recent years, artificial intelligence (AI) has been increasingly used for a variety of applications in the public domain, often with the promise of being more accurate and objective than biased human decision-makers. Given this backdrop, this research article identifies how AI is being deployed by courts, mainly as decision-support tools for judges. It assesses the potential and limitations of these tools, focusing on their use for risk assessment. Further, the article shows how AI can be used as a debiasing tool, i. e., to detect patterns of bias in judicial decisions, allowing for corrective measures to be taken. Finally, it assesses the mechanisms and benefits of such use.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Giovana Lopes https://www.tatup.de/index.php/tatup/article/view/7100 Borderline decisions?: Lack of justification for automatic deception detection at EU borders 2024-03-15T06:39:39+01:00 Daniel Minkin hpcdmink@hlrs.de Lou Therese Brandner lou.brandner@uni-tuebingen.de <p>Between 2016 and 2019, the European Union funded the development and testing of a system called “iBorderCtrl”, which aims to help detect illegal migration. Part of iBorderCtrl is an automatic deception detection system (ADDS): Using artificial intelligence, ADDS is designed to calculate the probability of deception by analyzing subtle facial expressions to support the decision-making of border guards. This text explains the operating principle of ADDS and its theoretical foundations. Against this background, possible deficits in the justification of the use of this system are pointed out. 
Finally, based on empirical findings, potential societal ramifications of an unjustified use of ADDS are discussed.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Daniel Minkin, Lou Therese Brandner https://www.tatup.de/index.php/tatup/article/view/7101 Situativity, functionality and trust: Results of a scenario-based interview study on the explainability of AI in medicine 2024-03-15T06:39:35+01:00 Manuela Marquardt manuela.marquardt@charite.de Philipp Graf philipp.graf@hm.edu Eva Jansen eva.jansen@charite.de Stefan Hillmann stefan.hillmann@tu-berlin.de Jan-Niklas Voigt-Antons jan-niklas.voigt-antons@hshl.de <p>A central requirement for the use of artificial intelligence (AI) in medicine is its explainability, i.&nbsp;e., the provision of addressee-oriented information about its functioning. This leads to the question of how socially adequate explainability can be designed. To identify evaluation factors, we interviewed healthcare stakeholders about two scenarios: diagnostics and documentation. The scenarios vary the influence that an AI system has on decision-making through the interaction design and the amount of data processed. We present key evaluation factors for explainability at the interactional and procedural levels. Explainability must not interfere situationally in the doctor-patient conversation or call the professional role into question. At the same time, explainability functionally legitimizes an AI system as a second opinion and is central to building trust. A virtual embodiment of the AI system is advantageous for language-based explanations.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Manuela Marquardt, Philipp Graf, Eva Jansen, Stefan Hillmann, Jan-Niklas Voigt-Antons https://www.tatup.de/index.php/tatup/article/view/7102 Artificial intelligence in melanoma diagnosis: Three scenarios, shifts in competencies, need for regulation, and reconciling dissent between humans and AI 2024-03-15T06:39:32+01:00 Jan C. Zoellick jan.zoellick@charite.de Hans Drexler hans.drexler@fau.de Konstantin Drexler konstantin.drexler@klinik.uni-regensburg.de <p>Tools based on machine learning (so-called artificial intelligence, AI) are increasingly being developed to diagnose malignant melanoma in dermatology. This contribution discusses (1) three scenarios for the use of AI in different medical settings, (2) shifts in competencies from dermatologists to non-specialists and empowered patients, (3) regulatory frameworks to ensure safety and effectiveness and their consequences for AI tools, and (4) cognitive dissonance and potential delegation of human decision-making to AI. We conclude that AI systems should not replace human medical expertise but play a supporting role. We identify needs for regulation and provide recommendations for action to help all (human) actors navigate safely through the choppy waters of this emerging market. Potential dilemmas arise when AI tools provide diagnoses that conflict with human medical expertise. Reconciling these conflicts will be a major challenge.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Jan C. Zoellick, Hans Drexler, Konstantin Drexler https://www.tatup.de/index.php/tatup/article/view/7106 Book review: Coeckelbergh, Mark (2022): The political philosophy of AI 2024-03-15T06:39:09+01:00 Michael W. Schmidt michael.schmidt@kit.edu 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Michael W. Schmidt https://www.tatup.de/index.php/tatup/article/view/7107 Book review: McPhearson, Timon; Kabisch, Nadja; Frantzeskaki, Niki (eds.)
(2023): Nature-based solutions for cities 2024-03-15T06:39:04+01:00 Jaewon Son jae.son@partner.kit.edu Somidh Saha somidh.saha@kit.edu 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Jaewon Son, Somidh Saha https://www.tatup.de/index.php/tatup/article/view/7108 Meeting report: “Religion and technology in an era of rapid digital and climate change”. Conference, 2023, Chennai, IN (hybrid) 2024-03-15T06:39:01+01:00 Axel Siegemund siegemund@kt.rwth-aachen.de John Bosco Lourdusamy jblsamy@iitm.ac.in Johann Fiedler johann.fiedler@kt.rwth-aachen.de Renny Thomas renny@iiserb.ac.in 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Axel Siegemund, John Bosco Lourdusamy, Johann Fiedler, Renny Thomas https://www.tatup.de/index.php/tatup/article/view/7109 Meeting report: “Generative artificial intelligence – Opportunities, risks and policy challenges”. EPTA Conference, 2023, Barcelona, ES 2024-03-13T15:58:07+01:00 Steffen Albrecht steffen.albrecht@kit.edu Reinhard Grünwald gruenwald@tab-beim-bundestag.de 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Steffen Albrecht, Reinhard Grünwald https://www.tatup.de/index.php/tatup/article/view/7110 Meeting report: “PartWiss23”. Conference, 2023, Chemnitz, DE 2024-03-13T16:03:16+01:00 Bettina Brohmann b.brohmann@oeko.de Regina Rhodius r.rhodius@oeko.de Melanie Mbah m.mbah@oeko.de 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Bettina Brohmann, Regina Rhodius, Melanie Mbah https://www.tatup.de/index.php/tatup/article/view/7104 Decarbonization of Berlin’s transport sector: Citizens’ assembly on scientifically developed scenarios 2024-04-26T10:05:31+02:00 Moritz Kreuschner kreuschner@vsp.tu-berlin.de Nora Bonatz nora.bonatz@campus.tu-berlin.de Tilmann Schlenther schlenther@vsp.tu-berlin.de Hamid Mostofi mostofidarbani@tu-berlin.de Hans-Liudger Dienel hans-liudger.dienel@tu-berlin.de Kai Nagel nagel@vsp.tu-berlin.de <p>As part of this work, a citizens’ assembly was set up to obtain feedback on measures to decarbonize Berlin’s transport sector. The general goal of decarbonization was widely supported. Based on existing studies, discussions were held for private passenger transport, commercial passenger transport, freight transport, and all other transport modes. Agreement could be found on the decarbonization paths for all segments, with the exception of private passenger transport. For the latter, pull measures, such as improvements to public transport or bike lanes, are generally accepted, in line with a general desire to switch to non-fossil fuels and to make the transport system in Berlin less car-oriented. Simulations show that pull measures will be far from sufficient to decarbonize private passenger transport. However, more effective push measures, such as higher prices or a ban on fossil-fueled vehicles, did not yield clear majorities for this segment.</p> 2024-03-15T00:00:00+01:00 Copyright (c) 2024 Moritz Kreuschner, Nora Bonatz, Tilmann Schlenther, Hamid Mostofi, Hans-Liudger Dienel, Kai Nagel