“Don’t let me be misunderstood”

Critical AI literacy for the constructive use of AI technology





Keywords: deep automation bias, AI assessment, machine learning, uncertainty, awareness


Research and development, as well as societal debates on the risks of artificial intelligence (AI), often focus on crucial but impractical ethical issues or on technocratic approaches to managing societal and ethical risks with technology. To overcome this, more practical, problem-oriented analytical perspectives on the risks of AI are needed. This article proposes an approach that focuses on a meta-risk inherent in AI systems: deep automation bias. It is assumed that the mismatch between system behavior and user practice in specific application contexts due to AI-based automation is a key trigger for bias and other societal risks. The article presents the main factors of (deep) automation bias and outlines a framework providing indicators for the detection of deep automation bias ultimately triggered by such a mismatch. This approach aims to strengthen problem awareness and critical AI literacy and thereby to be of practical use.




How to Cite

Strauß S. “Don’t let me be misunderstood”: Critical AI literacy for the constructive use of AI technology. TATuP [Internet]. 2021 Dec. 20 [cited 2022 Jan. 28];30(3):44-9. Available from: https://www.tatup.de/index.php/tatup/article/view/6930