RESEARCH ARTICLE
Samantha Werens*, 1, Jörg von Garrel1 https://orcid.org/0000-0002-3617-1798
* Corresponding author: samantha.werens@h-da.de
1 Faculty of Social Sciences, Darmstadt University of Applied Sciences, Darmstadt, DE
Abstract • The use of artificial intelligence (AI) as an innovation driver is increasingly gaining importance among small and medium-sized manufacturing enterprises. In order to enable a successful AI implementation, both the business requirements and the needs of human resources must be considered. One construct that brings these dimensions together is the concept of work ability. So far, there is little scientific evidence addressing work ability in the context of AI implementation. Therefore, this article aims to create a multidimensional framework using the results of a qualitative study on employee-friendly implementation of AI-based systems. The framework combines central aspects (implementation stage, AI-autonomy level, and work ability) and helps to identify suitable recommendations for companies to increase acceptance and trust in the implementation process. Based on the developed framework, a first version of a socio-technical AI support tool has been created.
Zusammenfassung • Der Einsatz von Künstlicher Intelligenz (KI) als Innovationstreiber gewinnt in kleinen und mittelständischen produzierenden Unternehmen zunehmend an Bedeutung. Um eine erfolgreiche KI-Implementierung zu ermöglichen, müssen sowohl die unternehmerischen Anforderungen als auch die Bedürfnisse der Mitarbeitenden berücksichtigt werden. Ein Konstrukt, das diese Dimensionen zusammenführt, ist das Konzept der Arbeitsfähigkeit. Bislang liegen nur wenige wissenschaftliche Erkenntnisse vor, die sich mit der Arbeitsfähigkeit im Kontext der KI-Implementierung befassen. Daher soll in diesem Beitrag aufbauend auf den Ergebnissen einer qualitativen Studie zur arbeitnehmerfreundlichen Implementierung KI-basierter Systeme ein mehrdimensionales Konstrukt entwickelt werden. Dieses Konstrukt kombiniert zentrale Aspekte (Implementierungsphase, KI-Autonomiegrad und Arbeitsfähigkeit) und hilft dabei, geeignete Empfehlungen für Unternehmen zu identifizieren, um die Akzeptanz und das Vertrauen in den Implementierungsprozess zu erhöhen. Auf Basis des entwickelten Konstruktes wurde eine erste Version eines sozio-technischen KI-Unterstützungstools erstellt.
Artificial intelligence (AI) is becoming a key technology in a digitized working environment (Hartmann et al. 2017). The introduction of AI systems can promote both the efficiency (in the sense of process improvements) and the effectiveness (in the sense of innovative services) of manufacturing companies. However, it would be naive of companies to assume that the mere introduction of an AI system will be self-perpetuating and that success can be generated by this technology alone (Urbach et al. 2021).
Instead of focusing entirely on the technological component, it is important to involve employees and their needs in the implementation of technological innovations (Stowasser et al. 2020). Indeed, insufficient acceptance of and trust in AI technologies among employees can inhibit the application of AI (Lundborg and Märkel 2019). The implementation of AI systems can affect employees’ work ability, as the introduction of this new technology may be followed by a change in previous work activities or values (Jung and von Garrel 2021).
Thus, it becomes apparent that a socio-technical perspective is necessary in this context to give equal weight to technological requirements and human needs (Sartori and Theodorou 2022). This should be considered in order to enable an employee-friendly and consequently successful approach to implementing AI systems at work. The term ‘employee-friendly implementation’ focuses especially on the process of building trust in and acceptance of the AI system through the way it is introduced (Jung and von Garrel 2021). Even before AI is implemented at the workplace, the way such systems are developed can influence the perception of trust. AI developers can increase the perceived trustworthiness of AI by creating transparent systems, e.g., through explainability of the decision-making process or transparency regarding data policies. Various principles for creating trustworthy AI exist, but it becomes apparent that a holistic approach to implementation, involving the main stakeholders, is required (Rossi 2019). Therefore, not only the design of the AI is important, but also the way companies introduce new AI systems at work. Companies have to consider both technical perspectives and the human factors of their employees in order to increase the chance of achieving a higher level of acceptance of and trust in their AI operation.
Against this background, companies need a ‘socio-technical support aid’ that considers both the specifics of artificial intelligence and the human factors in the sense of work ability. The aim of this research is to build a framework for creating such a socio-technical perspective and to show how this framework can be transformed into a practical support tool for manufacturing small and medium-sized enterprises (SMEs).
Artificial intelligence is based on mathematical-statistical models, which are described as algorithms. These algorithms are able to identify alternative solutions, gain new insights, optimize processes, and support decisions independently (Beins et al. 2017). Algorithms can often surpass the cognitive abilities of people in decision-making and problem-solving situations and can therefore be faster and more efficient than human beings. In some areas (e.g., judgment, reasoning, and response), AI systems appear to be superior to humans. Such systems have advantages over humans in work activities and content that (1) are characterized by a high degree of dynamism, uncertainty, and complexity, (2) require a high degree of objectivity, or (3) involve a variety of decision-relevant parameters and data that would exceed human processing capacity. In turn, humans seem to exhibit particular strengths in tasks that require strong intuition, creativity, flexibility, empathy, and tacit knowledge (Keding 2021).
The level of technological or human skills required for a particular scenario at work can differ depending on the AI context. It is therefore important to understand what the interaction between humans and AI is supposed to look like. Depending on the abilities of the chosen AI, the requirements for human skills (e.g., decision-making ability) may change. The (future) interaction between AI and humans in industrial production can be illustrated in particular by the concept of autonomy levels. The categorization of various AI systems shows how the ability to make decisions can change when choosing a particular AI (Ahlborn et al. 2019, p. 12). According to the concept of autonomy levels, AI-related degrees of capability can be categorized into six stages, with stages 0–2 defined as the lowest autonomy level, stages 3–4 as semi-autonomous, and stage 5 as fully autonomous. In the first level (stages 0–2), humans act as largely independent decision-makers and retain a high degree of control over the course of processes. In the second level, beginning with stage 3, the AI system engages more strongly in the work process and humans can hand over certain activities to the AI; an actual interaction between humans and machines is more evident here. The third and highest level (stage 5) is considered the most radical autonomy level, as the fully autonomous system acts completely independently and a human can be absent when this work process is executed (Ahlborn et al. 2019, p. 14).
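To make this categorization concrete, the following minimal sketch (in Python) maps the six stages to the three aggregated autonomy levels used in this article; the identifiers are illustrative assumptions and not taken from Ahlborn et al. (2019).

```python
# Illustrative sketch only: six autonomy stages grouped into three aggregated levels.
AUTONOMY_LEVELS = {
    "low": range(0, 3),              # stages 0-2: humans remain the main decision-makers
    "semi_autonomous": range(3, 5),  # stages 3-4: the AI takes over selected activities
    "fully_autonomous": range(5, 6), # stage 5: the AI acts completely independently
}

def autonomy_level(stage: int) -> str:
    """Return the aggregated autonomy level for a given stage (0-5)."""
    for level, stages in AUTONOMY_LEVELS.items():
        if stage in stages:
            return level
    raise ValueError(f"unknown autonomy stage: {stage}")

print(autonomy_level(4))  # -> 'semi_autonomous'
```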
Considering the various possible forms of interaction between humans and machines, e.g., with respect to the autonomy levels, the question is how employees can successfully manage their work activities in such variously configured workspaces using AI systems. An adequate answer requires a thorough understanding of all resources needed to manage the work tasks effectively. For this purpose, the concept of work ability is used. Work ability can be understood as the “sum of all factors that enable a person in a particular work situation to successfully cope with the work tasks assigned to him or her” (Prümper and Richenhagen 2011, p. 136).
Regarding work ability, the employee’s health condition is one of the most important factors. Indeed, poor health – both mental and physical – can decrease the ability to successfully cope with one’s work tasks. Therefore, many measurement tools addressing work ability focus on health-related data, like the Copenhagen Psychosocial Questionnaire, the Salutogenetic Subjective Work Analysis, or the Short Questionnaire for the Analysis of Work (Werens and von Garrel 2022).
Beyond the consideration of health and ergonomics, however, the understanding of work ability has changed. The importance of understanding and measuring the interaction between the resources of employees and those of the company has increased. Because of changes in working life (e.g., due to digitalization, new technologies, and globalization), researchers need to develop the concept of work ability further to reflect these changes (Ilmarinen 2019).
Back in the 1980s, the Finnish Institute of Occupational Health played an important role in developing a tool to measure work ability: the Work Ability Index (WAI), a questionnaire covering various aspects of work ability (El Fassi et al. 2013). Due to its continuous development, the tool is still considered reliable and effective for measuring work ability and deriving suitable interventions. The WAI is based on Juhani Ilmarinen’s concept of work ability, also called the ‘work ability house’ (Ilmarinen 2019). In his model, he introduces the components of work ability as floors, which together depict the “house of work ability” (Giesert et al. 2017, p. 27). The particular components of his model are shown in Figure 1. In general, this approach aims to create the best possible fit between employees’ individual resources, such as health (1) and competences (2), and the company’s requirements in terms of work content (3) and organization (4), based on shared values (5). Furthermore, it can be seen as a suitable approach for connecting the perspectives of employees and technological systems (Ilmarinen et al. 2002).
Fig. 1: The house of work ability. Source: authors’ own compilation based on Ilmarinen 2019, p. 2
Overall, this approach contributes a holistic view of the various components of work ability to the analysis. Moreover, the WAI is not only a valid instrument but also easy to use when conducting a quantitative study. In order to identify whether and how the use of AI work systems affects employees’ work ability, it remains necessary to identify an approach that represents the implementation of AI as a progressive process.
To illustrate a possible employee-friendly implementation process, one perspective – the employees’ work ability – has already been covered. However, an additional approach is needed that not only represents the implementation process of the innovation itself, but also actively involves the employees affected by the AI-based change. The concept of Human-Technology-Organization (HTO) offers an approach for understanding and developing work structures by taking into account the human, technological, and organizational points of view. Nevertheless, the concept remains quite static and focuses mainly on the relation between these aspects (Karltun et al. 2021).
A dynamic model that presents the introduction of an innovation as a process while taking these three aspects into account is the innovation adoption process according to Rogers. His model is helpful because it considers the potential user of the innovation and focuses on how the implementation should take place in order to increase the likelihood of adoption. Consequently, the model puts an emphasis on building acceptance and trust (Rogers 1983).
Some theoretical approaches, e.g., the Technology Acceptance Model (TAM) and its further developed versions (TAM 2 and TAM 3), list various factors that promote acceptance of and trust in a given technology. However, some of these models reference Rogers, who already covers many of these acceptance factors and integrates them into the adoption process (Jung and von Garrel 2021). Due to Rogers’ combination of acceptance-promoting factors and the process-like structure of his model, the innovation adoption process is used for the further course of the study and is explained below:
The innovation adoption process is structured into five stages. The first stage (knowledge) deals with creating awareness of the innovative technology that is going to be implemented. The potential user gains some understanding of the technology’s function but does not yet receive detailed information. The second stage (persuasion) is considered important because at this point more detailed information becomes known to the potential user, which helps form an individual attitude towards the innovation. In this stage, an evaluation of the received information takes place, which determines whether acceptance or rejection of the innovation follows (Rogers 1983, p. 164). Based on this evaluation (e.g., of the perceived advantage or the level of complexity), the individual decides in the following stage (decision) whether he or she is willing to adopt and use the innovation. In case of rejection, the dynamics of the model nevertheless allow a later adoption, for example by readjusting the innovation to individual needs (Karnowski and Kümpel 2016). In case of adoption, the first application of the innovation takes place in the fourth stage (implementation). The fifth stage (confirmation) occurs when the application is found to be useful. In this stage, the individual seeks information that supports the previous decision to adopt the innovation. Confirmation increases when further application is consistent with the information received about the innovation.
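The sequence of stages can be summarized in a small sketch (illustrative Python; the stage names follow the labels above, while the data structure and function are assumptions made purely for illustration, not part of Rogers’ model).

```python
# Illustrative sketch only: the five stages of the innovation adoption process as an ordered sequence.
ADOPTION_STAGES = (
    "knowledge",       # awareness of the innovation, little detail yet
    "persuasion",      # detailed information shapes an individual attitude
    "decision",        # adoption or rejection based on the evaluation
    "implementation",  # first actual application of the innovation
    "confirmation",    # seeking information that supports the decision
)

def next_stage(current: str) -> str | None:
    """Return the stage that follows `current`, or None after confirmation."""
    i = ADOPTION_STAGES.index(current)
    return ADOPTION_STAGES[i + 1] if i + 1 < len(ADOPTION_STAGES) else None

print(next_stage("persuasion"))  # -> 'decision'
```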
The theoretical and empirical background points out the relevance of these approaches for creating a multidimensional framework. A qualitative study was conducted to combine the approaches described above and to explore the extent to which they interact with each other.
In total, 15 people participated in the study: five AI experts, five managers, and five employees. Since the research project focuses on the implementation of AI in SMEs operating in industry, employees and managers with a background in the manufacturing industry were interviewed. These three groups were chosen to obtain a wide range of expertise and to cover different perspectives. The employees’ perspective in particular allows a deeper understanding of their actual needs and attitudes towards AI-based changes, while the managers’ point of view adds an organizational perspective. With the additional expertise of the AI experts, the sample provides a comprehensive view.
A semi-structured interview guide was created. The interviews were held online and lasted around 35–90 minutes each. The explorative nature of the interviews allowed us to gain new findings and to identify previously unknown patterns and relationships. The interview guide follows the adoption process according to Rogers and takes into account the various dimensions of work ability (Figure 1). The chronology of the interview guide was aligned with the particular stages of the adoption process, so that the participants were able to share their perspective from the very beginning (awareness) to the last stage (trust). In order to address the requirements and specifics of AI mentioned above, the interviews were conducted with reference to various AI-based work scenarios. The interviewees were shown three short video sequences of AI systems in the field of manufacturing. Each of these AI systems relates to a different AI-autonomy level: a moderate scenario (AI-autonomy level 1), a semi-autonomous scenario (AI-autonomy level 2), and a fully autonomous scenario (AI-autonomy level 3). The moderate scenario is characterized by a low-autonomy visual assistance system; here, it is mainly the human being who has the power to make decisions and take action. The semi-autonomous scenario is characterized by a driverless transport system that performs intralogistics activities autonomously. The third scenario represents an autonomously acting AI-based robotic solution in the field of logistics, which is classified as a radical scenario. The participants were asked to answer questions regarding the components of work ability by considering the different AI scenarios for each implementation stage. The intention of considering different scenarios was to find out whether and to what extent the different AI-autonomy levels influence particular components of the employees’ work ability. Finally, the interviews were transcribed and analyzed by means of a qualitative content analysis.
One initial finding of the study is that the process of building acceptance of and trust in AI systems corresponds to the innovation adoption process according to Rogers. Based on the findings from the interviews, a research model has been developed that shows how the implementation process of AI systems can be designed when considering employee-friendly factors (Figure 2).
Fig. 2: Research model for the acceptance and trust building process when implementing AI systems. Source: authors’ own compilation based on Jung and von Garrel 2021, p. 42
The model indicates that transparent communication and openness toward employees by the management (as interpersonal factors), as well as corporate culture and innovation capability (as structural factors), are key success factors intended to build up “confident positive expectations” (Oswald 2010, p. 63) toward the use of AI-based systems. Transparent communication needs to take place from the very beginning, when awareness of the AI implementation is created.
Moreover, to foster a positive attitude towards the AI system, it is helpful to highlight the AI’s relative advantages (e.g., perceived usefulness) while keeping the expected level of risk low. Great importance is also attached to perceived ease of use, since employees prefer a comfortable and intuitive use of the system, which facilitates the implementation process. Given the different variants of AI systems mentioned above, it is important to reveal the limitations of the particular AI system, reduce uncertainty, and gain a better understanding of its function.
Providing employees with this and further information regarding the function of the AI system and the changes it entails helps them decide whether a first application of the system will be accepted or rejected. Using the system is crucial for reducing uncertainties and for evaluating whether the presumed benefits or risks occur as expected. A user experience perceived as positive can lead to more acceptance of and trust in the AI system. Therefore, it is important to ensure that the process runs properly: unexpected errors can destroy trust and must be avoided in order to ensure an employee-friendly implementation. A positive experience with the system occurs when, for example, employees notice an increase in productivity or relief in their work routines as a result of using the AI system. If the positive user experience is repeated and the use meets or even exceeds the employees’ expectations, confirmation and trust are built up. Nevertheless, special attention must be paid to the security of employment. Giving employees the guarantee of keeping their jobs despite the AI introduction is considered one of the most important factors supporting the acceptance- and trust-building process. Of course, this only applies if such a guarantee can actually be given; in any case, honest and transparent communication in this regard is essential and must be taken seriously (Jung and von Garrel 2021). In summary, the research model (Figure 2) addresses key points of an employee-friendly implementation and shows that the central dimensions are linked together.
Based on these findings, a multidimensional framework has been developed that helps to gain a better understanding of the dynamics between all three dimensions. The multidimensional framework (Figure 3) illustrates the connection between the following dimensions: (1) AI-autonomy level, (2) implementation stages, and (3) components of work ability.
By creating this framework, all possible combinations and points of intersection between the dimensions were made visible. This is necessary because the various dimensions do not operate separately. Instead, the dimensions influence each other, and this influence can vary depending on the dimensions’ specification. For example, the way an AI system influences employees’ health at work can depend on the chosen AI-autonomy level and the current implementation stage. While the fear of losing one’s job can increase with the introduction of a fully autonomous AI system, this fear might be less likely to occur when introducing AI systems with a low autonomy level. This means that the effect on mental health can also vary with the choice of the AI system, which in turn may affect how much effort needs to be dedicated to a particular implementation stage.
Fig. 3: Multidimensional framework for employee-friendly implementation of AI systems. Source: authors’ own compilation
Taking the dynamics between all dimensions into account makes it possible to generate individualized recommendations for every implementation stage, based on the selected AI-autonomy level and with respect to the employees’ work ability.
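As a way of reading Figure 3, the framework can be thought of as a lookup over the intersections of the three dimensions. The following Python sketch is purely illustrative: the identifiers and the example recommendation text are assumptions made for this sketch and do not reproduce the content of the actual support tool.

```python
# Illustrative sketch only: recommendations located at the intersection of
# AI-autonomy level, implementation stage, and work ability component.
AUTONOMY_LEVELS = ("low", "semi_autonomous", "fully_autonomous")
IMPLEMENTATION_STAGES = ("knowledge", "persuasion", "decision", "implementation", "confirmation")
WORK_ABILITY_COMPONENTS = ("health", "competences", "values", "work_content", "organization")

# (autonomy_level, implementation_stage, work_ability_component) -> recommendation
RECOMMENDATIONS: dict[tuple[str, str, str], str] = {
    ("fully_autonomous", "knowledge", "health"):
        "Address fears about job security early and explain transparently "
        "which tasks the system will and will not take over.",
    # ... in the actual handbook, every intersection of the three dimensions holds a recommendation
}

def recommend(level: str, stage: str, component: str) -> str:
    """Look up the recommendation at one intersection of the three dimensions."""
    return RECOMMENDATIONS.get(
        (level, stage, component),
        "No recommendation defined in this illustrative sketch.",
    )

print(recommend("fully_autonomous", "knowledge", "health"))
```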
The implementation of AI-based work systems can promote the competitiveness of companies by structuring work processes more efficiently. However, this advantage cannot be achieved through the introduction of AI alone; it requires the consideration of employees in order to generate the best possible fit between technical and social requirements. Employees’ acceptance of and trust in the AI system play a crucial role for a successful application. The research model based on Jung and von Garrel (2021) illustrates what the process of trust and acceptance formation can look like when implementing AI systems (Figure 2). They showed that the process itself does not depend on or differ between various AI systems. However, the qualitative study showed that the different levels of AI autonomy can in fact have an impact on employees’ work ability and on the speed and intensity of the acceptance- and trust-building journey. Consequently, a multidimensional framework (Figure 3) has been developed. It covers all central dimensions (AI-autonomy level, implementation stages, and components of work ability) that need to be considered when companies want to introduce AI while taking their employees into account. By considering all dimensions, the framework offers a structure that helps to reduce complexity and provides companies with detailed recommendations depending on their individual situation.
A first prototype of a socio-technical tool was created in the form of a digital handbook. The tool contains detailed recommendations for all points of intersection of the dimensions and aims to enable a successful implementation of an AI system with a focus on supporting work ability in order to increase acceptance and trust. The recommendations enable companies to empathize with employees and to understand the process and the importance of building acceptance and trust. Potential users of the tool are at management or executive level and are interested in successfully introducing an AI system in their company while taking employees into account. Moreover, the tool contains a self-evaluation part, explanations of key definitions, and references to additional literature. In a further step, the tool will be tested by manufacturing companies with a focus on its practicability and suitability. The evaluations aim to identify optimization potential and to make adjustments. Since the current version is available in German only, a translation into English may be considered after successful evaluation and application.
Ultimately, the novelty of the study results lies in the development of a framework that does not take a one-sided view of AI, but also takes the different types of AI systems into account. This is important because the choice of AI-autonomy level does influence how employees’ work ability can change. By using Ilmarinen’s ‘work ability house’, the study shows that work ability is not limited to factors like health and ergonomics. Nevertheless, for the further improvement of the tool, an enhancement of Rogers’ innovation adoption process will have to be explored. Although his approach provides a valuable concept for representing the implementation of AI as a process, it has some aspects that could be improved in further research. Rogers seems to attribute more authority to the management when it comes to process-related decisions or the choice of a certain type of AI system. However, recent findings show that the participation and co-creation of employees in the change process (e.g., in the process of designing an AI) can be an essential requirement for the development of trust and acceptance (Knappertsbusch and Gondlach 2021). In the era of Industry 4.0, it is becoming increasingly important to link technical resources with the knowledge and experience of employees (Deuse et al. 2018). The further development of the tool aims to integrate those aspects as well as new approaches in which the AI-based change is also characterized by the participation of the affected stakeholders.
Funding • The project upon which this research article is based was funded by the German Federal Ministry of Education and Research under grant number 02L19C157. The responsibility for the content of this publication lies with the authors.
Competing interests • The authors declare no competing interests.
Ahlborn, Klaus et al. (2019): Technologieszenario “Künstliche Intelligenz in der Industrie 4.0”. Plattform Industrie 4.0 Working Paper. Berlin: Bundesministerium für Wirtschaft und Energie. Available online at https://www.plattform-i40.de/IP/Redaktion/DE/Downloads/Publikation/KI-industrie-40.pdf?__blob=publicationFile&v=10, last accessed on 03. 05. 2023.
Beins et al. (2017): Künstliche Intelligenz. Wirtschaftliche Bedeutung, gesellschaftliche Herausforderungen, menschliche Verantwortung. Berlin: Bitkom e. V. Available online at https://www.dfki.de/fileadmin/user_upload/import/9744_171012-KI-Gipfelpapier-online.pdf, last accessed on 03. 05. 2023.
Deuse, Jochen; Weisner, Kirsten; Busch, Felix; Achenbach, Marlies (2018): Gestaltung sozio-technischer Arbeitssysteme für Industrie 4.0. In: Hartmut Hirsch-Kreinsen, Peter Ittermann and Jonathan Niehaus (eds.): Digitalisierung industrieller Arbeit. Die Vision Industrie 4.0 und ihre sozialen Herausforderungen. Baden-Baden: Nomos, pp. 195–214. https://doi.org/10.5771/9783845283340
El Fassi, Mehdi; Bocquet, Valery; Majery, Nicole; Lair, Marie Lise; Couffignal, Sophie; Mairiaux, Philippe (2013): Work ability assessment in a worker population. Comparison and determinants of work ability index and work ability score. In: BMC public health 13 (305), pp. 1–10. https://doi.org/10.1186/1471-2458-13-305
Giesert, Marianne; Reuter, Tobias; Liebrich, Anja (2017): Wege zu einem erfolgreichen Arbeitsfähigkeitsmanagement im Wandel der Zeit. In: Marianne Giesert, Tobias Reuter and Anja Liebrich (eds.): Arbeitsfähigkeit 4.0. Eine gute Balance im Dialog gestalten. Hamburg: VSA-Verlag, pp. 16–31.
Hartmann, Ernst; Hornbostel, Lorenz; Thielicke, Robert; Tillack, Désirée; Wittpahl, Volker (2017): Wie sieht die Zukunft der Arbeit aus? Ergebnisbericht zur Umfrage “Künstliche Intelligenz und die Zukunft der Arbeit”. Available online at https://www.iit-berlin.de/iit-docs/69750d8ab20442ff9452b46fd84dec4f_Ergebnisbericht_Umfrage_Zukunft_der_Arbeit.pdf, last accessed on 03. 05. 2023.
Ilmarinen, Juhani (2019): From work ability research to implementation. In: International Journal of Environmental Research and Public Health 16 (16), p. 2882. https://doi.org/10.3390/ijerph16162882
Ilmarinen, Juhani; Tempel, Jürgen; Giesert, Marianne; Schartau, Harald (2002): Arbeitsfähigkeit 2010. Was können wir tun, damit Sie gesund bleiben? Hamburg: VSA.
Jung, Maria; von Garrel, Jörg (2021): Mitarbeiterfreundliche Implementierung von KI-Systemen in Bezug auf Akzeptanz und Vertrauen. In: TATuP – Journal for Technology Assessment in Theory and Practice 30 (3), pp. 37–43. https://doi.org/10.14512/tatup.30.3.37
Karltun, Johan; Karltun, Anette; Berglund, Martina (2021): Activity. The core of human-technology-organization. In: Nancy Black, Patrick Neumann and Ian Noy (eds.): Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021). Volume I: Systems and Macroergonomics. Cham: Springer, pp. 704–711. https://doi.org/10.1007/978-3-030-74602-5_96
Karnowski, Veronika; Kümpel, Anna Sophie (2016): Diffusion of innovations. In: Matthias Potthoff (ed.): Schlüsselwerke der Medienwirkungsforschung. Wiesbaden: Springer Fachmedien, pp. 97–107. https://doi.org/10.1007/978-3-658-09923-7_9
Keding, Christoph (2021): Understanding the interplay of artificial intelligence and strategic management. Four decades of research in review. In: Management Review Quarterly 71 (1), pp. 91–134. https://doi.org/10.1007/s11301-020-00181-x
Knappertsbusch, Inka; Gondlach, Kai (eds.) (2021): Arbeitswelt und KI 2030. Herausforderungen und Strategien für die Arbeit von morgen. Wiesbaden: Springer Gabler. https://doi.org/10.1007/978-3-658-35779-5
Lundborg, Martin; Märkel, Christian (2019): Künstliche Intelligenz im Mittelstand – Relevanz, Anwendungen, Transfer. Eine Erhebung der Mittelstand-Digital Begleitforschung. Bad Honnef: WIK GmbH.
Oswald, Margit (2010): Vertrauen in Organisationen. In: Martin Schweer (ed.): Vertrauensforschung 2010. A state of the art. Frankfurt a. M.: Peter Lang, pp. 63–85.
Prümper, Jochen; Richenhagen, Gottfried (2011): Von der Arbeitsunfähigkeit zum Haus der Arbeitsfähigkeit. Der Work Ability Index und seine Anwendung. In: Brigitte Seyfried (ed.): Ältere Beschäftigte – zu jung, um alt zu sein. Konzepte, Forschungsergebnisse, Instrumente. Bielefeld: Bertelsmann, pp. 135–146.
Rogers, Everett (1983): Diffusion of innovations. New York, NY: The Free Press.
Rossi, Francesca (2019): Building trust in artificial intelligence. In: Journal of International Affairs 72 (1), pp. 127–134. Available online at https://jia.sipa.columbia.edu/building-trust-artificial-intelligence, last accessed on 03. 05. 2023.
Sartori, Laura; Theodorou, Andreas (2022): A sociotechnical perspective for the future of AI. Narratives, inequalities, and human control. In: Ethics and Information Technology 24 (1), pp. 1–11. https://doi.org/10.1007/s10676-022-09624-3
Stowasser, Sascha et al. (2020): Einführung von KI-Systemen in Unternehmen. Gestaltungsansätze für das Change-Management (Whitepaper). München: Lernende Systeme – Die Plattform für Künstliche Intelligenz. Available online at https://www.plattform-lernende-systeme.de/files/Downloads/Publikationen/AG2_Whitepaper_Change_Management.pdf, last accessed on 03. 05. 2023.
Urbach, Nils et al. (2021): KI-basierte Services intelligent gestalten. Einführung des KI-Service-Canvas. Bayreuth: Fraunhofer-Institut für Angewandte Informationstechnik FIT. Available online at https://www.fim-rc.de/Paperbibliothek/Veroeffentlicht/1313/wi-1313.pdf, last accessed on 03. 05. 2023.
Werens, Samantha; von Garrel, Jörg (2022): Durchführung einer Analyse zu den Auswirkungen von KI-Arbeitssystemen auf die Arbeitsfähigkeit von Mitarbeiter:innen. In: Gesellschaft für Arbeitswissenschaften e. V. (ed.): Technologie und Bildung in hybriden Arbeitswelten. Dokumentation des 68. Arbeitswissenschaftlichen Kongresses. Sankt Augustin: GfA-Press, pp. 1–8.