RESEARCH ARTICLE

Perception of the risks inherent in new AI technologies

Petr Machleidt*¹, Jitka Mráčková², Karel Mráček³

* Corresponding author: machleidt@flu.cas.cz

1 Institute of Philosophy, Czech Academy of Sciences, Prague, CZ

2 Czech University of Life Sciences Prague, Prague, CZ

3 Czech Technical University in Prague, Prague, CZ

Abstract  Artificial intelligence (AI) has undergone rapid development and is becoming a major social issue. The advent of generative AI is associated not only with potential benefits but also with a number of risks, such as increasing malicious misuse. This brings the issue of regulating new AI technologies into focus. In the search for solutions, the article revisits the problem of applying the precautionary principle. We critically evaluate regulatory approaches, particularly with regard to maintaining an innovation-friendly environment. A prudent approach to new AI technologies not only requires regulatory measures but also places new demands on the education system. We also discuss the regulation of AI from the perspective of Czech legislation in a broader international context. Current challenges in the field of AI competence are highlighted. To make the issue more tangible, we provide specific examples from the Czech Republic.

Perception of the risks associated with new AI technologies

Zusammenfassung  Artificial intelligence (AI) is developing rapidly and is increasingly becoming a central social issue. The advent of generative AI is associated not only with potential benefits but also with risks, such as growing malicious misuse. This brings the question of regulating new AI technologies into focus. In the search for solutions to these challenges, the article revisits the problem of applying the precautionary principle. Regulatory approaches are critically evaluated, above all with regard to maintaining an innovation-friendly environment. A prudent approach to new AI technologies requires not only regulatory measures but also places new demands on the education system. The article further discusses the regulation of AI from the perspective of Czech legislation in a broader international framework. Current challenges in the field of AI competence are also highlighted. To make the issue more tangible, specific examples from the Czech Republic are presented.

Keywords  artificial intelligence, regulation, ethics, risks, AI Act, education

This article is part of the Special topic “Malevolent creativity and civil security: The ambivalence of emergent technologies,” edited by A. Gazos, O. Madeira, G. Plattner, T. Röller, and C. Büscher. https://doi.org/10.14512/tatup.33.2.08

© 2024 by the authors; licensee oekom. This Open Access article is licensed under a Creative Commons Attribution 4.0 International License (CC BY).

TATuP 33/2 (2024), pp. 42–48, https://doi.org/10.14512/tatup.33.2.42

Received: 04. 01. 2024; revised version accepted: 24. 04. 2024; published online: 28. 06. 2024 (peer review)

Introduction

The rapid development of artificial intelligence (AI), unprecedented in the history of technology assessment to date, presents the field with a new challenge. AI technologies penetrate various fields of human activity and open up tremendous opportunities, but they also create risks in the form of harmful use or misuse. Consequently, AI involves a variety of security risks, further compounded by its current rapid development.

The malevolent use of new AI technologies is a problem perceived in the Czech Republic as well. How can this new technological and social challenge be addressed? What can we do about it? The application of the precautionary principle offers a framework for regulatory measures aimed at achieving trustworthy AI. However, a precautionary approach must also be supported by educational efforts in AI literacy. These approaches should be critically evaluated with a view to preserving a pro-innovation environment.

Views on AI risks

In debates across the media in the Czech Republic, the biggest civil-security risks of AI associated with its use for malevolent activities are perceived in the areas of personality and privacy protection, the space created for the spread of disinformation, consumer protection, and the healthcare sector (sensitive data).

When updating the national strategy for artificial intelligence, a public consultation was conducted to gather stakeholders’ and the wider public’s views on the opportunities and threats presented by the current dynamic advancement of AI (MIToCR 2023). As regards the perceived threats, most respondents (about 62 %) mentioned manipulation and the proliferation of fake information. Other AI-related threats, by frequency of occurrence, included cyber-attacks, the disappearance of human agency from decision-making, technology dependence, invasion of privacy, the digital divide, lack of regulation and loss of jobs. In the respondents’ opinion, the most important issues in general include education, the use of AI in practice and the creation of a legal framework (which would lay down a clear set of rules and ensure the protection of citizens’ rights).

Certain concerns were also voiced in earlier research on AI in financial services and its implications for the Czech population (Ipsos 2023). Three quarters of the respondents believe that the use of AI in financial services should be curbed by regulation (in the form of principles and rules). As regards the privacy and security of financial data processed with AI, more than one third of the respondents feel safe, while one quarter do not. However, it is precisely financial services that give rise to the interesting phenomenon of AI technology ambivalence, where the threat of malicious use of AI (in the form of phishing and other cyber fraud) is offset by AI’s potential for rapid detection and even elimination of such incidents. One statistic worth mentioning in this respect is the rate of confidence in AI’s role in fraud detection and prevention (36 % of respondents).

The advent of new generative AI (GenAI) technologies in the Czech Republic over the course of 2023 manifested itself in particular in a surge in digital attacks and fraud attempts targeting clients of banks and other financial sector institutions, a chronically weak link in financial security. New GenAI technologies make it possible to create fraudulent emails, fake websites, deepfake videos, fake photos and cloned voices faster, in greater quantities and at higher quality levels than before. While Czech clients were previously protected to some extent by the complexity of the Czech language, this is no longer the case with the arrival of AI’s new generation. In finance, the main concern is the misuse of voice in soliciting non-standard operations. Financial institutions frequently use chatbots to communicate with their clients, so the latter may often fail to recognise that they are being scammed. The Czech Republic has recently seen a surge in deepfake videos – a traditional tool of disinformation campaigns designed to sway public opinion for political gain – used for financial fraud. The videos usually feature a celebrity promoting fake investment opportunities.

According to statistics, cybercrime in the Czech Republic is steadily increasing; in 2023 it accounted for approx. 11 % of the total recorded crime (Moravčík and Vinčálek 2024). It is above all more sophisticated phishing practices and the recruitment of money launderers on social media and other online platforms that have become more prevalent following the emergence of GenAI. According to the National Cyber and Information Security Agency (NÚKIB), the number of cyber incidents involving critical infrastructure doubled year-on-year in 2023, with security experts linking the trend mainly to the arrival of more sophisticated AI technologies (NÚKIB 2024).

According to the AI Trends survey (Randstad HR Solutions s.r.o. 2023), two thirds of companies assume that AI requires a legal framework, and 28 % of them believe that the regulation should be very strict. The corporate sector is generally more reluctant to use GenAI than the general population. Some of the concerns voiced include opening corporate data to a global AI service for the latter to learn from, with the resulting threat of corporate know-how being simply handed over to just about anybody. At the same time, there is a growing awareness of the potential of AI in data protection and of the range of its applications in cyber protection. Overall, the situation can be described as moderate techno-optimism coupled with a certain amount of caution.

The analyses and surveys conducted show a consensus on the need for adequate legal regulation and education in the development and use of AI.

Background of AI regulation

The rapidly evolving AI and the associated, increasingly perceived risk of its malevolent use call for attention to the issue of regulating novel technologies. As a result, the way the precautionary principle – however ambivalently accepted – is applied needs to be revisited. The precautionary principle has its proponents, but it is also subject to criticism. The proponents generally view the principle as an important concept for protecting the environment and human health (Randall 2011). In particular, they argue that the application of the principle contributes to the development of safe technologies and operates as a means of protection against the potential risks of new technologies and products. As such, the precautionary principle is closely related to risk management and is usually applied in situations where potential risks have been demonstrated, yet some uncertainty remains as to their extent or severity.

The critics regard the principle as an obstacle to further innovation, stressing its potential for over-regulation (Sunstein 2002). Excessive caution can indeed lead to disproportionate requirements for evidence demonstrating the safety of new technologies and products, thus stifling the innovation process. The focus then shifts away from the potential benefits of new technologies to the risks inherent in them. When applied, the precautionary principle can lead to indecision due to a lack of sufficient evidence, which can have a negative impact on the economy and society at large. Another aspect criticised in this context is that the principle overloads individuals with excessive responsibility, which can have a demotivating effect on them. In terms of international trade, there are concerns about the misuse of the principle for protectionism (Marchant and Mossman 2004).

The requirement to preserve a pro-innovation environment features prominently in current discussions on AI regulation (OECD 2023). Regulation that is too restrictive and allows little flexibility may stifle innovation. Arguably, in a field as dynamic as AI, the objectives of regulation should primarily be set in the form of fundamental rights, the protection of human health, safety and the environment, in conjunction with accepted rules for AI’s trustworthiness (such as transparency, the protection of personality and privacy, and combating discrimination); individuals and firms should be left to come up with their own technical solutions to meet these requirements. Technical standards that specify a certain level of attainment of an objective, as well as various test facilities, are also useful in this context. Statutory regulation may stifle innovation if it is overly convoluted and ambiguous, allowing for different interpretations. Regulation that leads to a disproportionate growth in initial costs and an increase in administration may hinder innovation just as much. This situation can be particularly burdensome for SMEs and start-ups. The dynamic development of modern technologies – and this applies especially to AI, given its extreme rate of advancement – presents the law with the problematic task of constantly having to catch up with developments and reduces the legislator’s role to merely adapting the legal framework to them. The first version of the AI Act was released in 2021, and only afterwards did generative AI models emerge, creating new requirements for legislators to adapt the upcoming regulation accordingly. This also raises the provocative question of whether we want to regulate something we know little about, given the non-transparency of the inner workings of the newly developing AI models.

Yet what clearly legitimises the regulation of innovations is the need to prevent potential risks inherent in the use or even misuse of such novelties. Where innovations may create negative externalities, appropriate regulation of those externalities is entirely justified. In this context, the precautionary principle indeed seems to be an appropriate instrument for containing potential hazards that may not even be known (information asymmetry). The instances where the general precautionary principle has so far been applied relate mainly to the environmental sector, food safety and pharmaceuticals. Germany played a major role in laying the philosophical and ethical foundations of the precautionary principle and in the way it was eventually embedded in policy and law. The European understanding of the principle in EU sources of law is inspired by German environmental law (Kühn 2006, p. 492). In the Czech Republic, the precautionary principle is included among the principles governing environmental protection policy and is defined in the Environmental Act.

In Czech legal literature, the issue is comprehensively dealt with only from the viewpoint of international commercial law (Grmelová 2022). The EU regards the precautionary principle as a general legal principle applicable to all areas of law.[1] While the AI Act does not explicitly refer to this principle, it does apply a certain precautionary tenet to AI in the form of a risk-based approach. In essence, this reflects the universal recognition of the need to protect valuable goods such as the environment, life and human integrity amidst concerns induced by unregulated technological development on a global scale. GenAI is concerned with co-creation and, by extension, creativity; it is expected to interact with humans in working and creative environments, but, as the knowledge obtained so far suggests, it can also be exploited for malevolent creativity. For this technology, the impacts of positive use or of misuse are much more far-reaching and global, and they are associated with a range of uncertainties concerning future developments. Consequently, the requirement for precautionary measures, particularly as regards the safeguarding of moral rights and the protection of privacy and intellectual property, is entirely justified. However, the principle should only be applied where necessary and in a proportionate manner, where a lack of information and potential risk persist after an evaluation based on the available scientific knowledge and the technology assessment carried out.

Regulatory measures and actions for trustworthy AI

Different regulatory frameworks are currently applied around the world to ensure trustworthy AI and to mitigate the risks associated with the development and deployment of AI systems. In addition to exploring ways to use and adapt current legislation for AI, new regulatory measures and actions to ensure trustworthy AI are coming to the fore in the form of (i) ethical frameworks and principles, (ii) hard law approaches, (iii) the promotion of international standardisation and international law efforts, and (iv) support for the creation of controlled environments for regulatory experimentation (OECD 2023).

Some countries (Australia, Singapore, Switzerland, South Korea and others) have released national ethical frameworks for the development and deployment of AI in line with the OECD AI Principles (OECD 2023). The European Commission released its own ethics guidelines for the trustworthy use of AI in April 2019 (EC 2019). As AI penetrates ever deeper into everyday use, the discussion on the ethics of AI, respecting the possibilities and limits of this approach, is becoming increasingly important. This above all concerns the ethical approaches adopted by firms developing and implementing AI. While ethical principles are not legally enforceable and operate on a voluntary basis, they contribute to the cultivation of the corporate and societal environment.

As regards AI regulation based on legislation, a distinction must be made between efforts to take a cross-sectoral, ‘horizontal’ approach (e.g. the EU, Canada) and a context-based sectoral or ‘vertical’ approach in which legislation focuses on individual sectors or areas (e.g. the US, the UK, China, Israel). It should be borne in mind that specific pieces of legislation differ in their actual definition of AI, risk classification, and the scope and oversight of regulation. The definition of AI itself is a frequent subject of debate, given its importance for the future scope and extent of regulation.

In contrast to the EU, legislators in the US are currently far from considering comprehensive AI legislation. Binding federal technology legislation is mostly sector- or domain-specific, and executive policy is important in that context (The White House 2022; NIST 2023). The Office of the President has also secured voluntary commitments from leading U.S. technology companies to manage the risks associated with AI and to assist in the transition to the safe, transparent and trustworthy development of AI technologies. At the same time, AI risk management is linked to the promotion of responsible innovation (The White House 2023).

The UK’s approach to AI regulation also differs significantly from the EU AI Act. It is characterised by devolution and a focus on actual risks and harms (and on risk illustration rather than classification); it relies on sectoral regulation and favours voluntary measures and guidance. The regulatory option chosen is a vertical approach with an emphasis on a pro-innovation framework (DSIT 2023). Regulators are advised to favour soft law over mandatory regulation and are left to set out rules specific to each sector.

The AI Act is based on a risk-assessment approach and provides a single horizontal legal framework for AI with a view to providing legal certainty and avoiding fragmented regulation across EU Member States (EC 2023). The Act uses a fairly broad definition of AI and rates AI systems on a scale of the risks they pose to society, classifying them as unacceptable, high, limited, and low or minimal risk. AI systems posing an unacceptable level of risk to human safety will be prohibited (see, for instance, the use of ‘social scoring’). However, defining the different risk classes more precisely is also a challenge, given the difficulty of enforcement, certification and the resolution of accountability issues. In addition, as AI permeates various fields of human activity, it is becoming a challenge for a number of legal sectors.
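To make the tiered logic of this risk-based approach more tangible, the following minimal Python sketch encodes the four risk tiers named in the Act together with a purely hypothetical triage table. The example use cases and the simple lookup are our illustration only; the Act’s actual classification depends on a system’s intended purpose and the Act’s annexes, not on keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers distinguished by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory obligations

# Hypothetical mapping for illustration only -- not the Act's procedure.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the illustrative tier for a named use case (default: minimal)."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in EXAMPLE_TIERS:
        print(f"{case}: {triage(case).value} risk")
```

The sketch also hints at why sharper definitions matter: everything hinges on which tier a concrete system falls into, and a borderline case that defaults to the wrong tier escapes the corresponding obligations.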

The dynamic developments in AI present another challenge for the rather complex EU legislative process (‘the trilogue’). The original text of the AI Act had to respond to major changes (see, for instance, GenAI). The tension between the rampant development of AI and the delays in the legislative process is somewhat reminiscent, in retrospect, of the problem of inadequate regulation of social network algorithms (see the so-called Collingridge dilemma).

In the Czech Republic, much attention has been paid in recent years to the ethical and legal aspects of AI (AVex 2023). The Czech legal system is influenced by developments in AI legislation in the EU and in international law. In designing downstream AI regulation, inspiration can be drawn from the precautionary principle as formulated in the ‘Leitlinien zur Umweltvorsorge’ of 1986, which identify three sub-objectives: averting harm to humans, mitigating risk, and acting prudently (Arndt 2009). These objectives may be put to good use in regulating the advancement and use of AI. The precautionary principle is contained in the aforementioned Czech Act No. 17/1992 Coll., on the environment (Section 13), and may be applied per analogiam in order to close a gap in the law. AI raises a number of outstanding questions that will have to be addressed across the entire body of Czech law. As for the adopted AI Act, the Czech Republic is now in the process of familiarising itself with the legislation. The question of the oversight authority will need to be resolved, and the adopted European rules will have to be transposed into national legislation (in particular civil, criminal and administrative law).

In civil law, AI is viewed as a sui generis intangible thing, i.e. an object (as opposed to a subject) of a legal relationship, which removes any doubt as to the possibility of granting legal personality to AI (Zibner 2022). According to the Czech Copyright Act, which is based on the principle of objective authorship, only a natural person can be an author as the originator of a work. AI cannot be an author even in common law countries, because it is not a human being. In civil law systems, an author (a human being) retains their moral and economic rights under copyright during their lifetime; only the right to use the copyrighted work can be transferred (the licensee being a mere user, as opposed to owner, of the work). The same applies to AI, and this may raise concerns when addressing liability for material and non-material damage. Pending the development of appropriate AI legislation, the relevant cases will have to be resolved by way of analogy. Attention needs to be given in particular to the responsible entities in the development and use or misuse of AI (developers, programmers or, where applicable, employers, distributors, operators or users) with regard to the damage caused to individuals, legal persons or the State.

The rights and duties of these entities remain the subject of general discussion rather than of an in-depth analysis of the existing legal concepts of liability. Knapp, for example, tackled the issue of civil liability by asking whether the subject is responsible for fulfilling a duty or only liable for its non-fulfilment or breach. His answer was that “whoever owes a duty is liable at once for the fulfilment of the duty and for a breach thereof”. He went on to say that “they are responsible for the fulfilment of a duty in the sense that from its very onset, the law threatens them with a sanction in case they fail to fulfil the duty, and they are liable for its non-fulfilment in the sense that they will suffer a sanction as a consequence of their liability” (Knapp 1956, both quotations p. 75). This approach connects a prospective and a retrospective concept of responsibility and liability, which could lead to the fulfilment of the precautionary principle. A similar approach can be found in German legal theory (Medicus and Lorenz 2008, p. 9). These theoretical concepts could be of use in regulating AI along its entire value chain (from development to use). It would be appropriate to establish risk mitigation obligations for the different actors in the chain and make them liable for fulfilling those obligations, not only for breaching them. It is precisely such a coupling of prospective and retrospective liability that could prove operative in relation to AI-relevant legal acts. This will require flexible monitoring mechanisms, and statutory regulation must be effective and allow for adequate oversight in order to enhance legal certainty.
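To illustrate how such a coupling of prospective and retrospective liability along the AI value chain might be modelled, here is a minimal Python sketch. The actor roles and the example obligations are hypothetical illustrations of the idea, not categories drawn from any statute.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """An actor in the AI value chain (e.g. developer, operator, user)."""
    name: str
    # Maps obligation -> fulfilled? Prospective liability: the duty binds
    # (and is monitorable) from the moment it is assigned.
    obligations: dict = field(default_factory=dict)

    def assign(self, obligation: str) -> None:
        """Impose a risk-mitigation duty; it is unfulfilled until discharged."""
        self.obligations[obligation] = False

    def fulfil(self, obligation: str) -> None:
        self.obligations[obligation] = True

    def breaches(self) -> list:
        """Retrospective liability: sanctions attach to duties left unfulfilled."""
        return [o for o, done in self.obligations.items() if not done]

# Hypothetical chain from development to deployment
developer = Actor("developer")
operator = Actor("operator")
developer.assign("document training-data provenance")
operator.assign("monitor outputs for malicious misuse")
developer.fulfil("document training-data provenance")

for actor in (developer, operator):
    print(actor.name, "open breaches:", actor.breaches())
```

The design point is simply that each duty exists and can be monitored from the moment it is assigned, before any breach occurs, mirroring the prospective side of Knapp’s concept of responsibility.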

Education activities related to AI

A precautionary approach to new technologies does not have to rely on regulatory measures alone; the education system must also play a role. Education is now gaining extra importance as the rapid development of GenAI leads to significant changes in the labour market. While novel technologies have traditionally had the effect of eliminating routine and physically demanding work, the present changes will also affect a wide range of creative professions. This calls for increased competences of the general public in working with AI. AI literacy therefore poses a tremendous challenge to the entire education system, from school education to lifelong learning. AI literacy must include the skills to harness the opportunities and minimise the negative impacts of AI, and to understand the possibilities and limits of these technologies. In terms of precautionary measures, particular attention must be devoted to ways of identifying and interpreting dangerous uses of AI in the form of disinformation, manipulation and deepfakes, and of defending against such manipulative techniques. Malevolent AI users do not rely on technical innovations alone; recently they have also innovated in the ways they communicate with and socially manipulate their victims. The risk that the pool of competent persons capable of malevolent AI creativity will deepen must also be taken into account. AI literacy can work more effectively as a means of protection against malevolent AI use if combined with media literacy, financial literacy and similar competences, so that synergies among them can be exploited. AI literacy also includes an understanding of the ethical aspects of the development and use of AI. Organisations should foster a culture of ethical AI use, educating employees on the ethical aspects of AI (such as bias, transparency and privacy).

So far, the Czech Republic has predominantly relied on a bottom-up approach to AI skill acquisition. Noteworthy is the upsurge in various educational, awareness-raising and support activities. In 2023, for instance, the AI Weeks event was held; PRG.AI and Brno.AI, two associations established to promote regional research, education, innovation and entrepreneurship, became fully operational; and media attention to AI increased manifold. Businesses and various organisations offer AI-focused educational events for their staff. In response to the ever-increasing threat of deepfake attacks on businesses, Analytics Data Factory, in cooperation with the Czech Association of Artificial Intelligence, has created a free manual entitled ‘DEEPFAKE 2024: Defence Strategy for Czech Companies’ (CAUI 2024). To train their staff, the largest banks on the Czech market use internal versions of ChatGPT, which provide access only to public information.

The rapid spread of ChatGPT brought a wave of interest in AI at all levels of the education system, and related initiatives have already been put in place at many schools. However, further developments will not be possible unless ground rules are set, guidance is provided and certain ethical issues are resolved (e.g., ensuring the privacy and security of students’ data). The use of GenAI in higher education raises challenging questions and divergent views. GenAI is highlighted as a source of inspiration (offering new topics, different perspectives, out-of-the-box solutions). It is also well suited to generating various tutorials, manuals, informative texts and statistics, but these may not necessarily reflect the views, attitudes and personality of the author (Kopecký 2023).

Conclusion

New AI technologies bring a host of opportunities and benefits, but also risks, including the danger of increasing malevolent misuse. Possible approaches to minimising this perceived threat lie mainly in adequate regulatory measures and educational activities. In terms of regulation, AI development and use have so far mostly been governed by ethical rules. It now seems desirable to evaluate these rules in terms of their efficacy and to consider which of them should be given greater force in the form of statutory law. From the viewpoint of AI legislation, important questions need to be addressed concerning the various roles of law (its regulatory and supervisory roles and its role in providing legal certainty). There are as yet unresolved questions as to the rights and duties relating to AI as an intangible thing that pertain to the subjects of legal relations (software originators, users, etc.).

A prudent approach to new technologies does not have to consist of regulatory measures alone; the education system must also play its role. We must always keep in mind that “technology development should not be left to random coincidences and to people who use the technology regardless of its side effects and who do not consider the prudence aspect in the application of completely new means” (Drozenová 2019, p. 65).

A coordinated approach and cooperation between policy makers, state bodies, researchers and businesses (especially big tech companies) is proving indispensable. Investments in further research and greater support for lifelong learning in this area clearly have a role to play in increasing resilience to malevolent AI use. Overall, a proactive approach is essential if we want to maximise the potential of AI while minimising its risks. Given the need to develop socially acceptable approaches to modern disruptive technologies, keeping AI under human control is a major challenge.

Funding  This work received no external funding.

Competing interests  The authors declare no competing interests.

Footnote

[1]   EC, Artegodan v. Commission, Judgment of 26. November 2002, Case T-74/00 DEP.

References

Arndt, Birger (2009): Das Vorsorgeprinzip im EU-Recht. Tübingen: Mohr Siebeck.

AVex – Akademie věd České republiky (2023): Umělá inteligence. Available online at https://www.avcr.cz/export/sites/avcr.cz/cs/veda-a-vyzkum/avex/files/2023-01.pdf, last accessed on 15. 05. 2024.

CAUI – Česká asociace umělé intelligence (2024): Deepfake 2024. Obranná strategie pro české firmy. Available online at https://asociace.ai/wp-content/uploads/2023/12/DEEPFAKE-2024-CAUI.pdf, last accessed on 26. 04. 2024.

Drozenová, Wendy (2019): Jonasova etika techniky a její metafyzické ukotvení. In: Wendy Drozenová and Vojtěch Šimek (eds.): Filosofie Hanse Jonase. Praha: Filosofia, pp. 61–79.

DSIT – Department for Science, Innovation and Technology (2023): A pro-innovation approach to AI regulation. Available online at https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach, last accessed on 26. 04. 2024.

EC – European Commission (2019): Ethics guidelines for trustworthy AI. Available online at https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, last accessed on 26. 04. 2024.

EC (2023): European approach to artificial intelligence. Available online at https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, last accessed on 26. 04. 2024.

Grmelová, Nicole (2022): Zásada předběžné opatrnosti v právu mezinárodního obchodu. Praha: C. H. Beck.

Ipsos (2023): AI ve finančních službách. Available online at https://www.ipsos.com/cs-cz, last accessed on 15. 05. 2024.

Knapp, Viktor (1956): Některé úvahy o odpovědnosti v občanském právu. In: Stát a právo 1, pp. 66–85.

Kopecký, Kamil (2023): Využití AI ve vysokoškolském vzdělávání. In: Kamil Kopecký Blog, 03. 09. 2023. Available online at https://kopeckykamil.cz/index.php/blog/351-vyuziti-umele-inteligence-ve-vysokoskolskem-vzdelavani-podpora-studentu, last accessed on 26. 04. 2024.

Kühn, Werner (2006): Die Entwicklung des Vorsorgeprinzips im Europarecht. In: Zeitschrift für europarechtliche Studien 9 (4), pp. 487–520. https://doi.org/10.5771/1435-439X-2006-4-487

Marchant, Gary; Mossman, Kenneth (2004): Arbitrary and capricious. The Precautionary Principle in the European Courts. Washington, DC: American Enterprise Institute for Public Policy Research. Available online at https://www.aei.org/wp-content/uploads/2011/11/20040917_MarchantNewG.pdf?x85095, last accessed on 16. 05. 2024.

Medicus, Dieter; Lorenz, Stephan (2008): Schuldrecht I. Allgemeiner Teil. München: C. H. Beck.

MIToCR – Ministry of Industry and Trade of the Czech Republic (2023): Vyhodnocení veřejné konzultace. Available online at https://www.mpo.cz/assets/cz/podnikani/digitalni-ekonomika/umela-inteligence/2023/10/Vyhodnoceni-verejne-konzultace-k-aktualizaci-Narodni-strategie-umele-inteligence.pdf, last accessed on 26. 04. 2024.

Moravčík, Ondřej; Vinčálek, Jakub (2024): Vývoj registrované kriminality v roce 2023. In: Policie České republiky, 12. 01. 2024. Available online at https://www.policie.cz/clanek/vyvoj-registrovane-kriminality-v-roce-2023.aspx, last accessed on 24. 04. 2024.

NIST – National Institute of Standards and Technology (2023): Artificial intelligence risk management framework (AI RMF 1.0). Washington, DC: U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1

NÚKIB – Národní úřad pro kybernetickou a informační bezpečnost (2024): NÚKIB v roce 2023 zaznamenal rekordní počet kybernetických incidentů, 31. 01. 2024. Available online at https://nukib.gov.cz/cs/infoservis/aktuality/2073-nukib-v-roce-2023-zaznamenal-rekordni-pocet-kybernetickych-incidentu/, last accessed on 26. 04. 2024.

OECD – Organization for Economic Co-operation and Development (2023): The state of implementation of the OECD AI principles four years on. OECD Artificial Intelligence Papers 3. Paris: OECD Publishing. https://doi.org/10.1787/835641c9-en

Randall, Alan (2011): Risk and precaution. Cambridge, UK: Cambridge University Press.

Randstad HR Solutions s.r.o. (2023): Průzkum AI Trends 2023: Firmy kvůli umělé inteligenci propouštět neplánují. In: Randstad.cz, 26. 10. 2023. Available online at https://www.randstad.cz/o-nas/randstad-employer-brand-research/pruzkum-ai-trends-2023-firmy-kvuli-umele-inteligenci/, last accessed on 26. 04. 2024.

Sunstein, Cass (2002): Risk and reason. Safety, law, and the environment. Cambridge, UK: Cambridge University Press.

The White House (2022): Blueprint for an AI Bill of Rights. Available online at https://www.whitehouse.gov/ostp/ai-bill-of-rights, last accessed on 16. 05. 2024.

The White House (2023): President Biden issues Executive Order on safe, secure, and trustworthy artificial intelligence, 30. 10. 2023. Available online at https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/, last accessed on 26. 04. 2024.

Zibner, Jan (2022): Umělá inteligence jako technologická výzva autorskému právu. Praha: Wolters Kluwer.

Authors

Dr. Petr Machleidt

is an associate researcher at the Centre for Science, Technology, and Society Studies of the Institute of Philosophy of the Czech Academy of Sciences. His research interest is technology assessment, and he is a member of the CULTMEDIA project. From 1995 to 2014 he lectured on the social assessment of technology at the Czech Technical University in Prague.

Dr. Jitka Mráčková

is an assistant professor at the Law Department of the Czech University of Life Sciences in Prague. She lectures and conducts research in civil and commercial law and in the legal aspects of informatics, with a focus on the protection of intellectual property; she previously also taught intellectual property law at the Academy of Performing Arts in Prague.

Dr. Karel Mráček

focuses his research on technology and innovation assessment and on innovation policy and management, and participates in the CULTMEDIA project. Since 2015 he has lectured at the Czech Technical University in Prague, where he is the guarantor of the course on the social assessment of technology. He is a member of the Board of the Association of Research Organizations of the Czech Republic.