Research Article

Borderline decisions?: Lack of justification for automatic deception detection at EU borders

Daniel Minkin*,1, Lou Therese Brandner2

* Corresponding author: hpcdmink@hlrs.de

1 High-Performance Computing Center Stuttgart, University of Stuttgart, Stuttgart, DE

2 International Center for Ethics in the Sciences and Humanities, University of Tübingen, Tübingen, DE

Abstract  Between 2016 and 2019, the European Union funded the development and testing of a system called “iBorderCtrl”, which aims to help detect illegal migration. Part of iBorderCtrl is an automatic deception detection system (ADDS): Using artificial intelligence, ADDS is designed to calculate the probability of deception by analyzing subtle facial expressions to support the decision-making of border guards. This text explains the operating principle of ADDS and its theoretical foundations. Against this background, possible deficits in the justification of the use of this system are pointed out. Finally, based on empirical findings, potential societal ramifications of an unjustified use of ADDS are discussed.

Borderline decisions?: Lack of justification for automatic deception detection at EU borders

Zusammenfassung  From 2016 to 2019, a system called “iBorderCtrl” was developed and tested with funding from the European Union. This system is intended to help detect illegal migration. One component of iBorderCtrl is the so-called automatic deception detection system (ADDS). Using artificial intelligence, ADDS is designed to analyze subtle facial expressions in order to calculate the probability of deception, which border guards can use as an aid to their decision-making. This contribution explains the operating principle of ADDS and its theoretical basis. Against this background, possible deficits in the justification of the use of ADDS are pointed out. Finally, on the basis of empirical studies, possible social consequences of an unjustified use of ADDS are discussed.

Keywords  automatic deception detection, machine learning, emotion recognition, border control, trust

This article is part of the Special topic “AI for decision support: What are possible futures, social impacts, regulatory options, ethical conundrums and agency constellations?,” edited by D. Schneider and K. Weber. https://doi.org/10.14512/tatup.33.1.08

© 2024 by the authors; licensee oekom. This Open Access article is licensed under a Creative Commons Attribution 4.0 International License (CC BY).

TATuP 33/1 (2024), pp. 34–40, https://doi.org/10.14512/tatup.33.1.34

Received: 22. 08. 2023; revised version accepted: 03. 01. 2024; published online: 15. 03. 2024 (peer review)

Introduction

The potential of artificial intelligence (AI) to revolutionize border management has been recognized for some time (Beduschi 2020). In the European Union (EU), different AI-based technologies for border control are either already in use or are being tested for future deployment, such as biometric identification and verification, risk assessment, or emotion detection (Dumbrava 2021). This contribution focuses on a subset of the latter: The so-called automatic deception detection system (ADDS), part of the iBorderCtrl project funded by the EU, was developed to detect cases of illegal border crossing by video-interviewing travelers to analyze their facial microexpressions for indicators of deceit. The technology is intended to support border guards in their decision-making process, providing recommendations in the form of risk assessments regarding individual travelers.

Who gets to cross European borders, an already complex social issue with major implications for migrants and society at large, thus becomes embedded in discourses around AI-based decision support. These discourses are not confined to any one state or scientific discipline. A responsible use of AI-based systems at border crossing points requires responsible policy-making based on an interdisciplinary and transnational perspective. Against this background, we bring together epistemological, technological, and social science arguments in this paper in order to contribute to an informed assessment of the technology.

After giving a more in-depth description of ADDS, its purpose and theoretical basis, we focus on two interlinked questions:

  1. What are the concerns about the use of ADDS, and are they warranted? We examine epistemological and empirical arguments against the deployment of ADDS. The first group of arguments targets the system’s theoretical foundation, contending that ADDS rests on a scientifically unfounded basis. The second group attempts to show that the mechanisms underlying ADDS are not sufficiently accurate.
  2. Having discussed the above concerns, we turn to the second question: Given the concerns outlined, what would be the social implications of using ADDS in terms of public trust? The concept of public trust is related to various dimensions of technology assessment (TA), such as trustworthy technology, the acceptance of new technologies, and the promotion and maintenance of trust in technology-related policy-making (Weydner-Volkmann 2021). We address these aspects at the end of the paper, arguing in favor of more transparency in the implementation of systems such as ADDS.

The automatic deception detection system

Purpose and working principle

ADDS is a machine learning (ML) based system designed to identify deception[1] by travelers crossing a state border (O’Shea et al. 2018; Podoletz 2023). It has been dubbed an “AI Polygraph” or “AI lie detector” (Kaminski 2019, p. 178). ADDS is part of iBorderCtrl, which is designed to facilitate and accelerate the registration and control of travelers coming to the EU, including refugees. It has already been tested in Hungary, Latvia, and Greece, with the test phase ending in 2019. Currently, it is not known whether and in what form iBorderCtrl will be deployed; however, with the EU extensively testing this kind of technology and funding several related border control and surveillance projects (iBorderCtrl 2023), critically examining these systems remains relevant.

iBorderCtrl works in the two stages of pre-registration and border crossing. It includes procedures such as biometric identification, document matching, risk analysis, and ADDS, on which this contribution focuses. During pre-registration, travelers undergo an online video interview with a police avatar. ADDS analyzes the recorded interviews, more specifically combinations of the travelers’ microexpressions (very subtle facial expressions that are normally invisible to the naked eye), to quantify the probability of deceit. O’Shea et al. (2018, p. 3) point out a difference between microgestures and microexpressions: The former are “more fine-grained and require no functional psychological model of why the behaviour has taken place”. However, since the original description by Rothwell et al. (2006, p. 759) uses the term “microexpression”, we follow this publication. Based on the microexpression analysis and other components of the iBorderCtrl system, a risk estimation regarding the traveler is provided. ADDS is intended as a human-in-the-loop system: When travelers attempt to cross the border, a border guard makes the final decision after performing a security check against the background of the data provided by the system.

The main component of ADDS is a subsystem called “Silent Talker” (ST) (iBorderCtrl 2018, p. 15). Using several artificial neural networks, ST learns, by means of supervised or unsupervised methods, to recognize combinations of microexpressions. The actual classification of these combinations as truthful or deceptive is based on a conceptual model of non-verbal behavior (NVB): “This model assumes that certain mental states associated with deceptive behavior will drive an interviewee’s NVB when deceiving. These include Stress or Anxiety (factors in psychological Arousal), Cognitive Load, Behavioral Control and Duping Delight” (O’Shea et al. 2018, pp. 3–4). It is worth noting that a combination’s classification as deceptive cannot be regarded as proof of deception but rather as a probabilistic result obtained by inductive learning.
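
To make this working principle more concrete, the following minimal sketch illustrates how a decision-support step of this kind could be structured: a statistical classifier maps per-interview microexpression features to a deception probability, which is presented to the border guard as a risk estimate. All feature dimensions, model choices, and thresholds in the sketch are hypothetical; the actual architecture and parameters of ST are not publicly documented at this level of detail.

```python
# Illustrative sketch only: a classifier turns per-interview microexpression
# features into a probabilistic risk estimate for a human decision-maker.
# Data, features, and thresholds are invented; this is not the ST implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Hypothetical training data: one feature vector per recorded interview
# (e.g., aggregated microexpression-channel statistics), label 1 = deceptive.
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 2, size=200)

# A small artificial neural network stands in for ST's ensemble of networks.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

def risk_estimate(interview_features: np.ndarray) -> dict:
    """Return a probabilistic risk score, not a proof of deception."""
    p_deceptive = float(model.predict_proba(interview_features.reshape(1, -1))[0, 1])
    flag = "further check recommended" if p_deceptive >= 0.5 else "no flag"
    return {"p_deceptive": round(p_deceptive, 2), "recommendation": flag}

# Human in the loop: the system only recommends; the border guard decides.
print(risk_estimate(rng.normal(size=16)))
```

The decisive point, reflected in the sketch, is that the system’s output is a probabilistic recommendation rather than a decision: the border guard remains the final decision-maker.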

The assumption that microexpressions are capable of revealing deceptive intentions thus forms the theoretical basis of ST, which will be discussed in more detail in the next section.

Theoretical foundation

Deception consists of or involves mental states, especially intentions, while microexpressions are a kind of behavior open to intersubjective investigation. ST and ADDS analyze the latter and thereby provide a basis for human actors to obtain information about the former. Against this backdrop, the question arises as to which possible combinations of microexpressions can indicate an intention to deceive on the part of the subject. In other words, the development of deception detection systems requires a psychological theory about the connections between microexpressions and mental states. In their description of the ST, the inventors explicitly state that they take some key elements from the psychological work of Paul Ekman, such as Ekman’s definition of microexpression (Rothwell et al. 2006, p. 759).

Ekman became famous for his cross-cultural studies of emotional expression. In the 1960s, for instance, he asked indigenous people in New Guinea, at that time largely isolated from the Western world, to assign terms like ‘sad’ to pictures of Europeans expressing different emotions. Experiments with various cultures as well as later studies (Elfenbein and Ambady 2002) resulted in relatively high accuracy rates, leading to the assumption of universal emotional expression. Regarding universal microexpressions, Ekman analyzed video recordings of proven liars frame by frame, finding that subjects with deceptive intentions could not consciously control all facial muscle movements while experiencing a particular emotion (Ekman 1985, p. 133).

Ekman’s theory indicates, therefore, that some unconscious and uncontrollable facial expressions can provide evidence of deception. However, this idea is controversial; in the following section, we explore criticisms of microexpression analysis and ADDS.

Criticism

Deception detection has been criticized as ‘pseudoscience’ (Whittaker et al. 2018). Against the background of such normative characterizations, we want to assess whether the use of ADDS is justified, focusing on the system’s theoretical foundation and its accuracy; the first aspect is the subject of an epistemological criticism, the second of an empirical one.

Epistemological criticism

The psychological foundation of ST and ADDS has been criticized as flawed, which, according to the systems’ opponents, means their use cannot be justified. A major part of this criticism is an observed lack of scientific consensus on the assumption that deceptive intentions can be derived from microexpressions. In general terms, the epistemological criticism states that the use of deception detection systems is not justified unless there is widespread agreement on their theoretical foundations (Podoletz 2023, p. 1071). Given that Ekman assumes one can derive deceptive intentions from microexpressions, it is reasonable to ask whether there is disagreement among experts about his theory. By the term ‘disagreement’, we mean the incompatibility of other psychological theories with Ekman’s position.

A literature review shows that, indeed, there are disagreements on different aspects and at different levels of abstraction: First, the interpretation of microexpressions as indicators of deceit is not conclusive. Psychologists have made other reasonable assumptions about what microexpressions might indicate (Zhang and Arandjelović 2021). This disagreement touches the very core of ST’s theoretical foundation, for if microexpressions are indicative not of deceit but of suppressed emotions, the system does not measure what it is supposed to measure. Second, at a deeper level, we find psychological disagreement on whether facial analysis can provide a universal reading of emotions as fixed states (Whittaker et al. 2018, p. 14). Although studies report some advancement in the field (Varghese et al. 2015), it has, for instance, been argued that emotional expressions are contingent on cultural and social factors (Feldman Barrett et al. 2019), which fundamentally challenges the basis of Ekman’s theory. Lastly, there is no consensus regarding the appropriate classification of emotions; Ekman himself conducted a survey among experts in the psychology of emotion to identify the preferred classification model in the field. His results suggest that only 16 % of experts favor the model his theory implies (Ekman 2016, p. 32). The theoretical basis of ST thus appears to be endorsed only by a minority.


While this line of thought appears convincing, there are some limitations. To start with, although it is unclear how much disagreement is too much, it seems uncontroversial that perfect consensus is not necessary to justify the use of a system. Conversely, even if there were perfect consensus on the theory underlying a system, the use of this system would not be justified without a sufficiently high level of accuracy. This indicates that a lack of consensus on a system’s theoretical basis does not necessarily affect the justification of its use; it can be argued that it is irrelevant whether ST rests on a controversial theory as long as it is able to distinguish deceptive statements from truthful ones with sufficient accuracy. In the third part of this paper, we will revisit the relevance of the theoretical foundation.

Empirical criticism

The empirical criticism of ADDS considers the accuracy of such systems (Sánchez-Monedero and Dencik 2022). As part of iBorderCtrl, ADDS is intended to support the decision-making of border guards. The use of such subsidiary systems, particularly in high-risk application contexts like border control, is arguably only justified if their accuracy rates exceed those of human experts. Since human actors are expected to make the final decision, the systems would otherwise have no added value; in the worst case, they would direct human experts towards wrong decisions due to the perceived authority of automated outputs (Helm and Hagendorff 2021).[2]

The concern is that in the case of ST this condition is not met: Its accuracy rate has been reported to be 63 to 70 % (Rothwell et al. 2006) and 74.6 % (O’Shea et al. 2018). Empirical findings suggest that while the performance of untrained humans lies below these values, ST does not outperform trained human experts. According to a meta-analysis, subjects without special training perform only slightly better than chance (54 %) when attempting to distinguish deceptive from truthful statements (Bond and DePaulo 2006). The mean accuracy of trained parole officers, however, has been found to be 76.7 % (Porter et al. 2000) and thus above the highest value reported for ST.
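
For orientation, the following snippet merely tabulates the accuracy figures reported in the publications cited above and makes the comparison explicit; the values are taken from those studies, while the grouping and the simple comparison are ours.

```python
# Accuracy figures (percent correct) as reported in the cited studies.
reported_accuracy = {
    "Silent Talker (Rothwell et al. 2006)": (63.0, 70.0),   # reported range
    "Silent Talker (O'Shea et al. 2018)": (74.6, 74.6),
    "Untrained humans (Bond and DePaulo 2006)": (54.0, 54.0),
    "Trained parole officers (Porter et al. 2000)": (76.7, 76.7),
}
for source, (low, high) in reported_accuracy.items():
    value = f"{low:.1f} %" if low == high else f"{low:.1f}-{high:.1f} %"
    print(f"{source}: {value}")

best_st, trained_experts = 74.6, 76.7
print("Best reported ST accuracy exceeds trained experts:", best_st > trained_experts)
```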

A second empirical problem is that ST was trained and tested on a surprisingly small number of subjects under controlled experimental conditions. Rothwell et al. worked with 39 subjects and O’Shea et al. with 30, most of them male Europeans. Since the system is aimed at controlling non-EU citizens, Rothwell et al. (2006, p. 768) openly admit that this could lead to data bias: Underrepresenting certain groups – such as people of color or women – in AI training datasets can lead to unreliable assessments regarding individuals belonging to these groups and therefore to discriminatory outcomes (Brandner and Hirsbrunner 2023; Selbst 2017). The results of the real-life tests in Hungary, Latvia, and Greece have not been publicly disclosed, so it remains unclear whether they exhibit such biases.
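
The following toy simulation illustrates this mechanism. All data and group differences are invented for demonstration purposes and bear no relation to ADDS or its training material; the simulation only shows how a classifier trained on a sample in which one group is heavily underrepresented, and whose baseline behavior differs from the majority group, can produce systematically worse assessments for that group.

```python
# Toy simulation (invented data, not ADDS): a classifier trained mostly on one
# group can misread another group's baseline behavior as a sign of deception.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_group(n, baseline):
    """Synthetic 'microexpression' features: label 1 = deceptive adds a fixed
    signal; `baseline` models a group-specific offset in expressive behavior."""
    y = rng.integers(0, 2, n)
    X = rng.normal(baseline, 1.0, (n, 5)) + y[:, None] * 1.0
    return X, y

# Skewed training sample: group A is heavily overrepresented.
X_a, y_a = sample_group(900, baseline=0.0)
X_b, y_b = sample_group(100, baseline=0.8)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Balanced evaluation: truthful members of group B are flagged far more often.
for name, baseline in [("group A", 0.0), ("group B", 0.8)]:
    X_t, y_t = sample_group(20000, baseline)
    pred = model.predict(X_t)
    accuracy = (pred == y_t).mean()
    false_positive_rate = pred[y_t == 0].mean()  # truthful but classified deceptive
    print(f"{name}: accuracy {accuracy:.2f}, false positive rate {false_positive_rate:.2f}")
```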

These concerns are independent of the epistemological criticism, since they relate solely to the accuracy of the system in question. Thus, even if potential flaws in the theoretical foundations are set aside, there seems to be no sufficient justification for deploying ADDS. Taking this empirical criticism into account, the next section assesses the potential social implications of ADDS. To do so, we build on empirical studies on public attitudes toward the police and AI-based technology.

Social implications

Trust in automated deception detection

AI-based policing and border control evoke divergent public attitudes. For controversial real-time facial recognition technology, acceptance appears to depend on the general trust invested in the police, with a nearly fifty-fifty split on the question of whether the police should be able to use this technology (Bradford et al. 2020). While trust is a much-debated, complex notion, someone trusting in an institution like the police can be described as “having confidence that the institution is reliable, observes rules and regulations, works well, and serves the general interest” (Devos et al. 2002, p. 484). Trust in the police varies greatly between European nations and regions, particularly between Scandinavia (high trust) and Eastern Europe (low trust) (Pfister 2020).

Public attitudes toward AI decision-making and support are far from uniform and depend on multiple interrelated factors, such as application context, geographical and cultural differences, or other (perceived) characteristics of the respective technology, such as fairness and transparency (Starke et al. 2022). It has been found that in the application context of justice, automated decisions are perceived as less risky and fairer than decisions made by human experts (Araujo et al. 2020). Others, however, suggest a preference for decisions made by human actors in a policing context (Hobson et al. 2023). Human decisions to accept or reject AI suggestions are furthermore not only contingent on the confidence vested in the system but also on personal self-confidence (Chong et al. 2022). Both over- and undertrust can thus lead to problematic outcomes in high-risk situations such as border control, where humans are meant to make reliable final decisions based on AI suggestions. Given the lack of social consensus, the coming paragraphs assess the implications of both low and high trust in a system like ADDS.

Implications of low trust

Public trust is fundamental for the societal acceptance of AI-based systems and therefore their sustainable adoption (Gillespie et al. 2023). The deployment of unreliable AI-based technology can actively lead to a loss of trust in public institutions (Starke et al. 2022). While, to our knowledge, trust in automated deception detection in a border control context is yet to be studied empirically, automated emotion recognition appears to predominantly evoke negative attitudes: Interviewees describe it as “invasive” and “scary” (Andalibi and Buss 2020, p. 6). Those who believe emotion detection to be accurate can also perceive this accuracy as concerning, i.e., as a threat to human agency (Grill and Andalibi 2022). Others question whether the technology can work at all, given the complexity of human emotions. This indicates that academic criticisms of the theoretical basis and empirical accuracy of ADDS are also reflected in citizen concerns.

While ADDS is meant to control non-EU nationals and therefore poses no immediate personal risk to EU citizens, based on the qualitative studies just mentioned it seems plausible to assume that many would fundamentally oppose the use of deception detection in a high-risk setting such as border control. Particularly persons with a general “anti-surveillance” viewpoint (Ezzeddine et al. 2023, p. 869) emphasize the importance of personal freedom over security and oppose all police AI that might flag individuals as suspicious, regardless of who the system is used on. Those generally critical and distrustful of AI policing might perceive the use of ADDS at European borders as threatening to human agency rather than as a trustworthy security measure. Given the already uneven trust in the police within the EU, deploying ADDS might further erode this shaky ground, leading to increased dissonance between nations and political unrest.

The perceived transparency of systems impacts trust, with higher transparency entailing more trust (Aysolmaz et al. 2023). As has been shown in TA studies, trust-building communication cannot consist solely of conveying technical aspects such as reliability (Weydner-Volkmann 2021). Meaningful transparency should also include justifying the logic, reasoning, and legality of (semi-)automated decisions (Malgieri 2020). In the case of ADDS, this would inevitably include disclosing and explaining the technology’s contentious theoretical basis and potentially underwhelming accuracy; given the described criticisms, which question both whether and how the system works, sufficiently justifying its use to the public would thus be challenging.

Implications of high trust

Freedom of movement of EU citizens is one of the cornerstones of the European project. Yet, with the Russian invasion of Ukraine, high immigration, and growing support for far-right parties, even internal borders such as the one between Germany and Poland are undergoing more rigorous checks, while the majority of EU citizens support stricter external border protection and up to a third think individual nations should control their own borders (BrusselsReport.eu 2022). As opposed to groups who fundamentally oppose AI policing, parts of the population passively trust any police action on principle (Bradford et al. 2020), which might lead them to be more accepting of technologies they would oppose in other contexts. Given that the iBorderCtrl system is meant to control non-European migrants while EU citizens retain their privacy, a “Not Me group” (Ezzeddine et al. 2023, p. 872) might also be prevalent in this case; these individuals endorse AI policing for their personal safety but not on their own data and might therefore trust in systems like ADDS.


Comprehension is not a necessary prerequisite for trust. Instead, people often trust in things they find too complex to understand (Reinhardt 2023). Considering its justification issues, trust in lie detection technology might be misplaced, since unreliable systems can lead to biased decisions; it has, for instance, been found that emotion detection systems can exhibit racial bias (Rhue 2018). Not only can underrepresenting certain populations lead to discriminatory outcomes, but so can overrepresenting already marginalized groups (Bacchini and Lorusso 2019). If predominantly non-EU citizens’ data are fed into ADDS, the system might, for instance, learn to disproportionately classify the microexpressions of individuals of non-European descent as deceptive. The use of ADDS for border control might not only perpetuate inequalities and discriminatory dynamics but, by automating them, embed them further into the social fabric of an already divided and crisis-ridden Europe.

The question of transparency is again relevant here, but in the context of actively fostering distrust (Ammicht Quinn 2015). Distrust can mobilize citizens to refuse to use certain technologies or to actively protest them (Büscher and Sumpf 2015), which can incentivize governments and industries to more carefully assess the potential harms of deploying systems such as ADDS. In the case of ADDS, EU citizens’ data would not be analyzed; for EU citizens, expressing distrust in the form of refusal is therefore not possible, and those with a “Not Me” perspective might have little interest in protesting the technology. However, deploying similar systems at EU borders has the potential to perpetuate and aggravate harmful social inequalities and therefore to affect all parts of society. EU citizens should thus be comprehensively informed about the described risks and uncertainties in order to facilitate reasonable distrust in – and therefore resistance against – potentially unjustified technology.

Conclusion

Both immigration and AI-based systems are complex and controversial societal issues. With iBorderCtrl, the EU has attempted to find solutions to the former via the latter. This paper has highlighted the importance of justification for the use of such a system, particularly with regard to public trust. At the current state of development, we observe a lack of justification on two fundamental levels: The theoretical basis of deception detection is highly contentious on an epistemological level. From an empirical perspective, the varying accuracy rates achieved in small-sample studies do not seem sufficient to demonstrate the technology’s usefulness compared to human experts. At the same time, public opinions and trust regarding AI policing are already divided. A responsible European policy towards such systems must consider the criticism outlined here and transparently disclose it to the public in order to prevent both a further loss of trust in public institutions and blind trust in AI-based border control.

Footnote

[1] Neither the description of ADDS by iBorderCtrl nor the research articles examining the accuracy of the system characterize the concept of deception in detail (Rothwell et al. 2006; iBorderCtrl 2018; O’Shea et al. 2018). These texts appear to use the term “deception” synonymously with “lying” (O’Shea et al. 2018, p. 4). O’Shea et al. tested the system using simulated scenarios in which the subjects were tasked with smuggling various illegal items, such as drugs or infectious materials.

[2] The studies and texts describing ST do not address such risks (Rothwell et al. 2006; O’Shea et al. 2018; iBorderCtrl 2018) but focus on advantages (e.g., compared to conventional polygraphs) and limitations from a technological perspective.

Funding  This article is based on research in the projects “Trust in Information”, funded by the Ministry of Science, Research and Arts Baden-Württemberg, and “PEGASUS”, funded by the German Federal Ministry of Education and Research.

Competing interests  The authors declare no competing interests.

References

Ammicht Quinn, Regina (2015): Trust generating security generating trust. An ethical perspective on a secularized discourse. In: Behemoth. A Journal on Civilisation 8 (1), pp. 109–125. https://doi.org/10.6094/behemoth.2015.8.1.855

Andalibi, Nazanin; Buss, Justin (2020): The human in emotion recognition on social media. Attitudes, outcomes, risks. In: Regina Bernhaupt et al. (eds.): Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). New York, NY: Association for Computing Machinery, pp. 1–16. https://doi.org/10.1145/3313831.3376680

Araujo, Theo; Helberger, Natali; Kruikemeier, Sanne; de Vreese, Claes (2020): In AI we trust? Perceptions about automated decision-making by artificial intelligence. In: AI & Society 35 (3), pp. 611–623. https://doi.org/10.1007/s00146-019-00931-w

Aysolmaz, Banu; Müller, Rudolf; Meacham, Darian (2023): The public perceptions of algorithmic decision-making systems. Results from a large-scale survey. In: Telematics and Informatics 79, p. 101954. https://doi.org/10.1016/j.tele.2023.101954

Bacchini, Fabio; Lorusso, Ludovica (2019): Race, again. How face recognition technology reinforces racial discrimination. In: Journal of Information, Communication and Ethics in Society 17 (3), pp. 321–335. https://doi.org/10.1108/jices-05-2018-0050

Beduschi, Ana (2020): International migration management in the age of artificial intelligence. In: Migration Studies 9 (3), pp. 576–596. https://doi.org/10.1093/migration/mnaa003

Bond, Charles; DePaulo, Bella (2006): Accuracy of deception judgments. In: Personality and Social Psychology Review 10 (3), pp. 214–234. https://doi.org/10.1207/s15327957pspr1003_2

Bradford, Ben; Yesberg, Julia; Jackson, Jonathan; Dawson, Paul (2020): Live facial recognition. Trust and legitimacy as predictors of public support for police use of new technology. In: The British Journal of Criminology 60 (6), pp. 1502–1522. https://doi.org/10.1093/bjc/azaa032

Brandner, Lou; Hirsbrunner, Simon (2023): Algorithmic fairness in police investigative work. Ethical analysis of machine learning methods for facial recognition. In: TATuP – Journal for Technology Assessment in Theory and Practice 32 (1), pp. 24–29. https://doi.org/10.14512/tatup.32.1.24

BrusselsReport.eu (2022): Poll reveals great unease among Europeans about migration policy. In: Brussels report, 01. 02. 2022. Available online at https://www.brusselsreport.eu/2022/02/01/poll-reveals-great-unease-among-europeans-about-migration-policy/, last accessed on 26. 01. 2024.

Büscher, Christian; Sumpf, Patrick (2015): “Trust” and “confidence” as socio-technical problems in the transformation of energy systems. In: Energy, Sustainability and Society 5 (34), pp. 1–13. https://doi.org/10.1186/s13705-015-0063-7

Chong, Leah; Zhang, Guanglu; Goucher-Lambert, Kosa; Kotovsky, Kenneth; Cagan, Jonathan (2022): Human confidence in artificial intelligence and in themselves. The evolution and impact of confidence on adoption of AI advice. In: Computers in Human Behavior 127, p. 107018. https://doi.org/10.1016/j.chb.2021.107018

Devos, Thierry; Spini, Dario; Schwartz, Shalom (2002): Conflicts among human values and trust in institutions. In: The British Journal of Social Psychology 41, pp. 481–494. https://doi.org/10.1348/014466602321149849

Dumbrava, Costica (2021): Artificial intelligence at EU borders. Overview of applications and key issues. Brussels: European Parliamentary Research Service. Available online at https://www.europarl.europa.eu/thinktank/en/document/EPRS_IDA(2021)690706, last accessed on 26. 01. 2024.

Ekman, Paul (1985): Telling lies. Clues to deceit in the marketplace, politics and marriage. New York, NY: W. W. Norton and Company.

Ekman, Paul (2016): What scientists who study emotion agree about. In: Perspectives on Psychological Science 11 (1), pp. 31–34. https://doi.org/10.1177/1745691615596992

Elfenbein, Hillary; Ambady, Nalini (2002): On the universality and cultural specificity of emotion recognition. A meta-analysis. In: Psychological Bulletin 128 (2), pp. 203–235. https://doi.org/10.1037/0033-2909.128.2.203

Ezzeddine, Yasmine; Bayerl, Petra; Gibson, Helen (2023): Safety, privacy, or both. Evaluating citizens’ perspectives around artificial intelligence use by police forces. In: Policing and Society 33 (7), pp. 861–876. https://doi.org/10.1080/10439463.2023.2211813

Feldman Barrett, Lisa; Adolphs, Ralph; Marsella, Stacy; Martinez, Aleix; Pollak, Seth (2019): Emotional expressions reconsidered. Challenges to inferring emotion from human facial movements. In: Psychological Science in the Public Interest 20 (1), pp. 1–68. https://doi.org/10.1177/1529100619832930

Gillespie, Nicole; Lockey, Steven; Curtis, Caitlin; Pool, Javad; Akbari, Ali (2023): Trust in artificial intelligence. A global study. Brisbane: University of Queensland and KPMG Australia. https://doi.org/10.14264/00d3c94

Grill, Gabriel; Andalibi, Nazanin (2022): Attitudes and folk theories of data subjects on transparency and accuracy in emotion recognition. In: Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1), pp. 1–35. https://doi.org/10.1145/3512925

Helm, Paula; Hagendorff, Thilo (2021): Beyond the prediction paradigm. Challenges for AI in the struggle against organized crime. In: Law and Contemporary Problems 84 (3), pp. 1–17. Available online at https://scholarship.law.duke.edu/lcp/vol84/iss3/2, last accessed on 26. 01. 2024.

Hobson, Zoë; Yesberg, Julia; Bradford, Ben; Jackson, Jonathan (2023): Artificial fairness? Trust in algorithmic police decision-making. In: Journal of Experimental Criminology 19 (1), pp. 165–189. https://doi.org/10.1007/s11292-021-09484-9

iBorderCtrl (2018): D7.6 Yearly communication report including communication material. Available online at https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5be014692&appId=PPGMS, last accessed on 26. 01. 2024.

iBorderCtrl (2023): Related projects. Available online at https://web.archive.org/web/20211203233051/https://www.iborderctrl.eu/Related-Projects, last accessed on 26. 01. 2024.

Kaminski, Andreas (2019): Begriffe in Modellen. Die Modellierung von Vertrauen in Computersimulation und maschinellem Lernen im Spiegel der Theoriegeschichte des Vertrauens. In: Nicole Saam, Michael Resch and Andreas Kaminski (eds.): Simulieren und Entscheiden. Wiesbaden: Springer VS, pp. 173–197. https://doi.org/10.1007/978-3-658-26042-2

Malgieri, Gianclaudio (2020): “Just” algorithms. AI justification (beyond explanation) in the GDPR. In: Gianclaudio Malgieri Blog, 14. 12. 2020. Available online at www.gianclaudiomalgieri.eu/2020/12/14/just-algorithms/, last accessed on 26. 01. 2024.

O’Shea, James; Crockett, Keeley; Khan, Wasiq; Kindynis, Philippos; Antoniades, Athos; Boultadakis, Georgios (2018): Intelligent deception detection through machine based interviewing. In: Proceedings of the International Joint Conference on Neural Networks 2018. New York, NY: Institute of Electrical and Electronics Engineers, pp. 1–8. https://doi.org/10.1109/IJCNN.2018.8489392

Pfister, Sabrina (2020): Vertrauen in die Polizei. Wiesbaden: Springer VS. https://doi.org/10.1007/978-3-658-35425-1

Podoletz, Lena (2023): We have to talk about emotional AI and crime. In: AI & Society 38 (3), pp. 1067–1082. https://doi.org/10.1007/s00146-022-01435-w

Porter, Stephen; Woodworth, Mike; Birt, Angela (2000): Truth, lies, and videotape. An investigation of the ability of federal parole officers to detect deception. In: Law and Human Behavior 24 (6), pp. 643–658. https://doi.org/10.1023/a:1005500219657

Reinhardt, Karoline (2023): Trust and trustworthiness in AI ethics. In: AI Ethics 3 (3), pp. 735–744. https://doi.org/10.1007/s43681-022-00200-5

Rhue, Lauren (2018): Racial influence on automated perceptions of emotions. In: SSRN Journal. https://dx.doi.org/10.2139/ssrn.3281765

Rothwell, Janet; Bandar, Zuhair; O’Shea, James; McLean, David (2006): Silent Talker. A new computer-based system for the analysis of facial cues to deception. In: Applied Cognitive Psychology 20 (6), pp. 757–777. https://doi.org/10.1002/acp.1204

Sánchez-Monedero, Javier; Dencik, Lina (2022): The politics of deceptive borders. ‘Biomarkers of deceit’ and the case of iBorderCtrl. In: Information, Communication & Society 25 (3), pp. 413–430. https://doi.org/10.1080/1369118X.2020.1792530

Selbst, Andrew (2017): Disparate impact in big data policing. In: Georgia Law Review 52 (1), pp. 109–195. http://dx.doi.org/10.2139/ssrn.2819182

Starke, Christoph; Baleis, Janine; Keller, Birte; Marcinkowski, Frank (2022): Fairness perceptions of algorithmic decision-making. A systematic review of the empirical literature. In: Big Data & Society 9 (2), pp. 1–16. https://doi.org/10.1177/20539517221115189

Varghese, Ashwini; Cherian, Jacob; Kizhakkethottam, Jubilant (2015): Overview on emotion recognition system. In: Proceedings of the 2015 International Conference on Soft-Computing and Networks Security (ICSNS). Coimbatore: IEEE Xplore, pp. 1–5. https://doi.org/10.1109/ICSNS.2015.7292443

Weydner-Volkmann, Sebastian (2021): Technikvertrauen. In: TATuP – Journal for Technology Assessment in Theory and Practice 30 (2), pp. 53–59. https://doi.org/10.14512/tatup.30.2.53

Whittaker, Meredith et al. (2018): AI now report 2018. New York, NY: AI Now Institute. Available online at https://ec.europa.eu/futurium/en/system/files/ged/ai_now_2018_report.pdf, last accessed on 26. 01. 2024.

Zhang, Liangfei; Arandjelović, Ognjen (2021): Review of automatic microexpression recognition in the past decade. In: Machine Learning and Knowledge Extraction 3 (2), pp. 414–434. https://doi.org/10.3390/make3020021

Authors

Dr. Daniel Minkin

is a substitute professor at the University of Wuppertal and a research associate at the High-Performance Computing Center Stuttgart, Germany. His research and areas of expertise include philosophy of artificial intelligence, epistemology, and philosophy of the social sciences.

Dr. Lou Therese Brandner

is a research associate at the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen. She received her PhD in sociology from Sapienza University of Rome. Her research focuses on AI and data ethics, digital capitalism, and spatial issues.