<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.rng">
<article xmlns:xlink="http://www.w3.org/1999/xlink"
         article-type="research-article"
         dtd-version="1.2"
         xml:lang="en"><?letex RNG_JATS-journalpublishing1-mathml3 ok?>
   <front>
      <journal-meta>
         <journal-id/>
         <journal-title-group>
            <journal-title>TATuP – Journal for Technology Assessment in Theory and Practice</journal-title>
         </journal-title-group>
         <issn pub-type="ppub">2568-020X</issn>
      </journal-meta>
      <article-meta>
         <article-id>7248</article-id>
         <article-id pub-id-type="doi">10.14512/tatup.7248</article-id>
         <article-categories>
            <subj-group>
               <subject>Research article</subject>
            </subj-group>
            <subj-group>
               <subject>Special topic · Technology assessment and future warfare: The Good, the Bad, and the Ugly</subject>
            </subj-group>
         </article-categories>
         <title-group>
            <article-title xml:lang="en">Artificial intelligence, semiconductors, and the new geopolitics of security</article-title>
            <subtitle xml:lang="en">Why technology assessment must engage in emerging military technologies</subtitle>
            <trans-title-group>
               <trans-title xml:lang="de">Künstliche Intelligenz, Halbleiter und die neue Geopolitik der Sicherheit</trans-title>
               <trans-subtitle xml:lang="de">Warum sich die Technikfolgenabschätzung mit neuen Militärtechnologien befassen muss</trans-subtitle>
            </trans-title-group>
         </title-group>
         <contrib-group>
            <contrib contrib-type="author" corresp="yes" id="Au1" xlink:href="#Aff1">
               <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-4281-7628</contrib-id>
               <name name-style="western">
                  <surname>Reinhold</surname>
                  <given-names>Thomas</given-names>
                  <prefix>Dr.</prefix>
               </name>
               <address>
                  <email>thomas.reinhold@prif.org</email>
               </address>
               <bio>
                  <boxed-text id="FPar1" specific-use="Style1">
                     <caption>
                        <title>DR. THOMAS REINHOLD</title>
                     </caption>
                     <p>is a researcher at the Research Department International Security and the Cluster for Natural and Technical Science Arms Control Research (CNTR) at PRIF, the Peace Research Institute Frankfurt. He conducts research on the militarization of cyberspace, AI and possibilities for arms control and disarmament of these technologies.</p>
                     <fig id="Figa">
                        <label/>
                        <caption>
                           <title/>
                        </caption>
                        <graphic specific-use="Print" xlink:href="910000_2025_7248_Figa_Print.eps"/>
                        <graphic specific-use="HTML" xlink:href="910000_2025_7248_Figa_HTML.png"/>
                     </fig>
                  </boxed-text>
               </bio>
               <aff id="Aff1">
                  <institution>Peace Research Institute Frankfurt (PRIF)</institution>
                  <addr-line>
                     <city>Frankfurt am Main</city>
                     <country>Germany</country>
                  </addr-line>
               </aff>
            </contrib>
         </contrib-group>
         <pub-date date-type="pub">
            <day>23</day>
            <month>03</month>
            <year>2026</year>
         </pub-date>
         <fpage>28</fpage>
         <lpage>34</lpage>
         <permissions>
            <copyright-year>2026</copyright-year>
            <copyright-holder>by the author(s); licensee oekom</copyright-holder>
            <license>
               <license-p>This Open Access article is published under a Creative Commons Attribution 4.0 International Licence (CC BY).</license-p>
            </license>
         </permissions>
         <abstract abstract-type="summary" id="Abs1" xml:lang="en">
            <title>Abstract</title>
            <p>This conceptual research article examines the accelerating militarization of emerging technologies, particularly artificial intelligence (AI) and semiconductors, and its implications for technology assessment (TA). It highlights how quickly commercial innovation, especially in the field of AI, is integrated into military applications, thereby eluding traditional assessment methods. Drawing on examples from Russia’s war against Ukraine and global semiconductor supply chains, the article argues for more agile, technically sound, and interdisciplinary TA approaches. It calls for assessment approaches that account for real-time deployment, dual-use dynamics, and geopolitical competition. Ultimately, it advocates further developing TA to remain relevant amid fast-moving security, technological, and strategic transformations.</p>
         </abstract>
         <abstract abstract-type="summary" id="Abs2" xml:lang="de">
            <title>Zusammenfassung</title>
            <p>Dieser konzeptionelle Forschungsartikel untersucht die zunehmende Militarisierung neuer Technologien, insbesondere von künstlicher Intelligenz (KI) und Halbleitern, und ihre Auswirkungen auf die Technikfolgenabschätzung (TA). Es wird aufgezeigt, wie schnell kommerzielle Innovationen, vor allem im Bereich der KI, in militärische Anwendungen integriert werden und sich dabei traditionellen Bewertungsmethoden entziehen können. Anhand von Beispielen aus dem russischen Angriffskrieg auf die Ukraine und den globalen Halbleiter-Lieferketten plädiert der Artikel für flexiblere, technisch fundierte und interdisziplinäre TA-Ansätze. Er fordert Analyseansätze, die den Einsatz in Echtzeit, die Dynamik des doppelten Verwendungszwecks und den geopolitischen Wettbewerb berücksichtigen. Letztlich empfiehlt er eine Weiterentwicklung der TA, um inmitten der schnelllebigen sicherheitspolitischen, technologischen und strategischen Umwälzungen relevant zu bleiben.</p>
         </abstract>
         <kwd-group>
            <compound-kwd>
               <compound-kwd-part content-type="code">heading</compound-kwd-part>
               <compound-kwd-part content-type="text">Keywords</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">artificial intelligence</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">semiconductors</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">technology assessment</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">emerging technologies</compound-kwd-part>
            </compound-kwd>
         </kwd-group>
      </article-meta>
      <notes>
         <sec sec-type="referencedarticle">
            <title/>
            <p>
               <italic>This article is part of the Special topic</italic> “Technology assessment and future warfare: The Good, the Bad, and the Ugly,” <italic>edited by K. Weber, M. Bresinsky. <ext-link xlink:href="https://doi.org/10.14512/tatup.7286">https://doi.org/10.14512/tatup.7286</ext-link>
               </italic>
            </p>
         </sec>
      </notes>
   </front>
   <body>
      <sec id="Sec1">
         <label>1</label>
         <title>The challenge of the militarization of emerging technologies</title>
<p>The landscape of technological innovation is undergoing a profound transformation. So-called emerging technologies (Rotolo et al. <xref ref-type="bibr" rid="CR28">2015</xref>) are not just advancing incrementally; they are re-shaping the tempo, scope, and unpredictability of change – for example, the advancements in robotic autonomy (Knuhtsen et al. <xref ref-type="bibr" rid="CR99">2025</xref>) – thus calling upon the field of technology assessment (TA) to explore how to analyze the impact of this development. This article approaches the issue not from the perspective of a traditional TA scholar but from that of a practitioner at the intersection of technological foresight and political consulting on military cyber and artificial intelligence (AI) capabilities and the development of arms control measures. This vantage point is based on monitoring defense contractors’ demonstrations, strategic white papers, research and grant proposal tenders, or the technical specifications of chips, drones, and AI systems rather than on an application of formal TA frameworks. And yet, the core question is comparable: What are the social and political implications of specific technologies, and what are policy recommendations for their responsible management?</p>
<p>Nowhere are the challenges posed by emerging technologies more apparent than in military applications, where a qualitative content analysis and comparison of strategic and national security doctrines (Schörnig et al. <xref ref-type="bibr" rid="CR32">2024</xref>) reveal that military realities have in part already overtaken theoretical considerations. Examples include the deployment of small (autonomous) drones and the real-time development and battlefield testing of AI-enabled weapon systems. This dynamic of ‘on-the-fly’ militarization of innovations underscores a critical shift. Emerging technologies are no longer being gradually integrated into military systems after long periods of assessment, prototyping, and strategic debate. Instead, they are often pushed into deployment amid ongoing conflicts (Slusher <xref ref-type="bibr" rid="CR33">2025</xref>), at a pace that outstrips existing methods for evaluating their broader implications. Meanwhile, normative debates like the discussion on meaningful human control of autonomous weapon systems (Sauer <xref ref-type="bibr" rid="CR31">2020</xref>) or the responsible use of military artificial intelligence (Afina and Persi Paoli <xref ref-type="bibr" rid="CR1">2024</xref>) are evolving alongside these innovations, with the idea of keeping a ‘human in the loop’ as a guiding normative principle. Yet, especially as such systems grow more capable, these discussions are increasingly outpaced by developments on the ground, where semi- or fully autonomous systems are already being deployed in ways that blur the line between human judgment and machine execution (Watts and Bode <xref ref-type="bibr" rid="CR39">2023</xref>).</p>
         <p>This pace of contemporary technological change compresses the window of reaction time, which raises fundamental questions for the future of TA: If the goal is to inform decision-making and shape governance before technologies become entrenched, what does this mean when those technologies are already fielded before assessment begins? What kind of methodologies can remain relevant when the frontier of innovation is constantly shifting? And how can TA assess the impact of this development in a way that includes and considers parallel technological progress and inter-technological dependencies and influence?</p>
<p>The following sections use AI and semiconductor technologies as case studies to conceptually highlight the role of emerging technologies in shaping contemporary geopolitics and security. AI raises questions of autonomy, agency, and algorithmic opacity, while semiconductors underpin modern infrastructure and are increasingly central to global power competition. Together, these cases prompt us to consider whether new forms of anticipatory or adaptive assessment are needed to keep pace with technologies that evolve faster than our understanding.</p>
      </sec>
      <sec id="Sec2">
         <label>2</label>
         <title>The impact of commercial artificial intelligence on military transformation</title>
<p>AI has rapidly become one of the central pillars of contemporary military innovation as it plays a foundational role in areas such as surveillance and threat detection (King <xref ref-type="bibr" rid="CR16">2024</xref>), real-time support for battlefield decision-making (Nadibaidze et al. <xref ref-type="bibr" rid="CR25">2024</xref>) and especially autonomous military systems with or without weapons (Mozur and Satariano <xref ref-type="bibr" rid="CR24">2024</xref>). This acceleration of AI into operational relevance is doing more than enhancing military capabilities. It is altering the tempo of warfare itself (Zeff <xref ref-type="bibr" rid="CR41">2025</xref>) and pushing decision-making toward real-time responsiveness, often supported or even initiated by machines. A key factor in this rapid integration is the dominant role of large, global technology companies in AI development. These firms – spanning the U.S., China, and, to some extent, Europe (Merle <xref ref-type="bibr" rid="CR20">2024</xref>) – drive progress in AI research. Although primarily aimed at consumer markets, their funding and release cycles outpace military research and development (Maslej <xref ref-type="bibr" rid="CR19">2025</xref>), making them natural feeders into defense applications. The dual pressure of return on investment and high demand from military and security organizations leads to the rapid embedding of successful AI models into security-relevant systems. Military planners are increasingly turning to commercial off-the-shelf solutions (Reuters <xref ref-type="bibr" rid="CR27">2025</xref>) and adapting them rather than building from scratch.</p>
         <p>The war in Ukraine offers vivid examples of this transformation. Both Ukrainian (Matlack et al. <xref ref-type="bibr" rid="CR18">2025</xref>) and Russian forces (Stepanenko <xref ref-type="bibr" rid="CR35">2025</xref>) have employed swarms of small drones for reconnaissance, targeting, and direct strikes. Many are equipped with autonomous navigation systems and, in some cases, rudimentary target identification or loitering behavior. Ukraine has launched successful drone attacks deep into Russian territory using modified commercial technologies, targeting strategic assets such as long-range bombers, airfields, and logistical hubs. These innovations are not developed through traditional procurement pipelines but are improvised, deployed, and iteratively improved on the battlefield (Bondar <xref ref-type="bibr" rid="CR5">2025</xref>). This cycle is measured in weeks, not years, with companies gathering hands-on experience directly in conflict zones (Loh <xref ref-type="bibr" rid="CR17">2025</xref>).</p>
<p>One of the key drivers of this evolution is autonomous functionality (Suckau <xref ref-type="bibr" rid="CR36">2024</xref>), which – even when limited or semi-supervised – lets machines operate without direct human control. These autonomous vehicles – unmanned/uncrewed aerial, ground, or underwater vehicles – are often introduced for pragmatic reasons, such as reducing response time, resisting electronic warfare, or extending the operational radius beyond the reliable range of traditional radio connections. Such capabilities rely heavily on AI and machine learning (Garikapati and Shetiya <xref ref-type="bibr" rid="CR15">2024</xref>) to provide the necessary degree of flexibility and adaptability in the vehicle’s operation in order to avoid obstacles, react to situational conditions, select and track targets, or engage them. Both Ukrainian and Russian forces have incorporated AI-driven systems in agile and adaptive ways. Ukraine has retrofitted commercial drones with AI (Collett-White et al. <xref ref-type="bibr" rid="CR9">2024</xref>), sometimes based on publicly available tools (Rudra <xref ref-type="bibr" rid="CR30">2025</xref>), whereas Russia is experimenting with AI-enhanced Shahed drones to improve target selection and tracking. A second major driver is the use of AI for decision-making and support (Bovet <xref ref-type="bibr" rid="CR6">2025</xref>). AI systems aggregate and process intelligence data at a scale and speed far beyond human capabilities. In Eastern Ukraine, trench warfare is increasingly complemented by AI-powered surveillance tools that scan enemy movements, simulate courses of action, and manage logistics (Ministry of Defence of Ukraine <xref ref-type="bibr" rid="CR21">2025</xref>). Algorithms coordinate drone patrols, optimize supply chains, and assist with casualty evacuation. AI is now embedded at every level of combat – from tactical ground operations to strategic command centers.</p>
         <p>Similar dynamics can be observed in Western defense initiatives like Germany’s ‘Zeitenwende.’ Start-ups such as Helsing have partnered to integrate AI in real-time combat systems, including drone coordination and jet dogfights (Sprenger <xref ref-type="bibr" rid="CR34">2025</xref>). U.S.-based Anduril builds autonomous drones, surveillance platforms, and intelligent sensors powered by AI to automate threat responses (O’Donnell <xref ref-type="bibr" rid="CR26">2024</xref>). These companies, structured like start-ups with flat hierarchies and agile processes, can rapidly meet military needs and often embrace the concept of ‘software defined defense’ (BMVg <xref ref-type="bibr" rid="CR10">2023</xref>), using off-the-shelf hardware with updatable software, which accelerates innovation.</p>
         <p content-type="eyecatcher" specific-use="Style2">The convergence of commercial innovation and military application presents a challenge for technology assessment.</p>
         <p>This convergence of commercial innovation and military application presents a challenge for TA. Traditional approaches – relying on long-term forecasting and stakeholder dialogue – struggle under the compressed timelines of AI development and deployment. TA must now function amid real-time testing and iteration and must address military AI not as a derivative of civilian tech but as co-developed through civilian market dynamics and military urgency.</p>
      </sec>
      <sec id="Sec3">
         <label>3</label>
         <title>Semiconductors: the backbone of artificial intelligence and subject of geopolitical competition</title>
<p>While much of the public debate around artificial intelligence focuses on software, algorithms, and data, this explosive growth of AI would not have been possible without the underlying hardware: semiconductors. These microelectronic components are the backbone of AI, especially for current-generation AI systems like large language models (LLMs), as both their training and their application require huge quantities of computer chips. ChatGPT 5, for instance, the current state-of-the-art LLM from OpenAI, required – according to estimates – some 200,000 processing chips just for training the system (Moss <xref ref-type="bibr" rid="CR23">2025</xref>) and will use even more for providing the system to end users and business applications over the coming years (Caswell <xref ref-type="bibr" rid="CR7">2025</xref>). Regardless of the exact amount, these numbers point in a clear direction, as nearly every advancement in machine learning, computer vision, autonomous navigation, or military IT applications like battlefield decision-support systems depends on processing vast volumes of data at high speed and low latency – something only possible through cutting-edge semiconductor technologies. In this sense, any discussion about the development of autonomous systems, AI, and its implications is also a discussion about semiconductors. From an analytical standpoint, semiconductors could serve as a prime focus area for TA: a specific technology with measurable and quantifiable parameters, whose technical progress is relatively visible and public due to the intense competition among companies for market dominance in this domain.</p>
<p>Semiconductors are among the most technically complex artifacts ever produced. The latest chips feature billions of transistors etched onto nanometer-scale silicon wafers using extreme ultraviolet (EUV) lithography – a process requiring a level of precision and purity that only a handful of actors worldwide can achieve. These chips enable AI computation, whether in the form of general-purpose graphics processing units designed for parallel computation, specialized tensor processing units optimized for AI workloads, application-specific integrated circuits custom-built for particular tasks, high-bandwidth memory chips, or ultra-fast networking semiconductors. Advances in these types of hardware – alongside breakthroughs in cooling systems, networking, and energy optimization – are occurring on ever-shorter cycles. A powerful illustration of this acceleration can be found in the evolution of AI models themselves. In late 2022, ChatGPT demonstrated the potential of LLMs for general-purpose dialogue and reasoning. Since then, the landscape has advanced dramatically: Google’s Veo 3 brings AI-generated video closer to photorealism; Meta’s Llama 3 and other frontier models now approach multi-modal understanding; and reasoning agents have started to move beyond static responses toward complex situational decision-making (Delovski <xref ref-type="bibr" rid="CR11">2024</xref>). Sam Altman, CEO of OpenAI, and others now openly discuss pathways toward artificial general intelligence – a form of AI capable of performing any intellectual task a human can, reflecting a generalized, human-like understanding and cognitive flexibility (Frazier et al. <xref ref-type="bibr" rid="CR13">2024</xref>). This step, although controversial in terms of technical feasibility, would redefine the boundaries between human and machine cognition. These leaps result in qualitative transformations – ‘capability jumps’ that redefine what is technologically possible.</p>
<p>But these technological capabilities and their underlying complexity come with strategic fragility. The production of advanced semiconductors relies on highly globalized supply chains (Sullivan <xref ref-type="bibr" rid="CR37">2025</xref>), involving the design capabilities of U.S.-based firms like NVIDIA, AMD, and Intel; fabrication facilities concentrated in Taiwan, like the Taiwan Semiconductor Manufacturing Company Limited; EU-based suppliers like ASML (Advanced Semiconductor Materials Lithography) Holding, which produces the world’s only EUV lithography machines; and raw materials sourced from politically unstable regions, including rare earths from Africa and noble gases from Ukraine. This fragmented yet tightly interdependent supply chain means that geopolitical tensions can quickly disrupt the production and distribution of critical hardware. At the forefront of this competition are the United States and China (Frazier <xref ref-type="bibr" rid="CR14">2025</xref>), locked in a rapidly escalating technological rivalry. Both countries view dominance in AI and semiconductors not merely as a pathway to economic growth but as a national security imperative (Allen <xref ref-type="bibr" rid="CR2">2025</xref>). The U.S. has introduced sweeping export controls on high-end chips, chip design software, and semiconductor manufacturing equipment and pressured key equipment suppliers such as the Dutch company ASML to restrict maintenance and support for advanced lithography machines previously sold to China (Allen and Goldston <xref ref-type="bibr" rid="CR3">2025</xref>). These measures aim to limit China’s access to the technologies essential for training large AI models and building advanced military systems. In response, China has poured billions into developing a domestic semiconductor ecosystem and launched numerous state-supported AI labs and companies (Chang et al. <xref ref-type="bibr" rid="CR8">2025</xref>).</p>
         <p content-type="eyecatcher" specific-use="Style2">These technological capabilities and their underlying complexity come with strategic fragility.</p>
<p>An example of this high-stakes contest is DeepSeek, a Chinese AI company that frequently draws attention for publishing models claimed to rival Western competitors (Baptista <xref ref-type="bibr" rid="CR4">2025</xref>). While the technical success behind such models remains debated (Rubstov <xref ref-type="bibr" rid="CR29">2025</xref>), the case underscores the aggressiveness and volatility of the AI race, as models emerge rapidly – often backed by massive state or private investment – only to fragment, pivot, or vanish under geopolitical pressure, hardware constraints, or commercial hurdles. A particularly notable case occurred when DeepSeek announced its R1 model in early 2025, claiming competitive performance at significantly lower hardware costs. Nvidia’s stock price dropped approximately 17–18 % in a single day (Mortimer and Page <xref ref-type="bibr" rid="CR22">2025</xref>), as investors feared DeepSeek’s breakthroughs might reduce demand for Nvidia’s expensive hardware. Even though the loss was quickly offset, it illustrates the financial pressure created by high development costs and the influence of technical disruption in the AI ecosystem.</p>
         <p>Between the U.S. and China, Europe faces a complex challenge. It is deeply dependent on technologies across the global AI and semiconductor stack, from fabrication to cloud infrastructure. However, it also hosts some of the most critical chokepoints in global chip production. Companies like ASML in the Netherlands and Carl Zeiss SMT in Germany (key optical suppliers for ASML’s chipmaking machines) have become geopolitical actors due to their irreplaceable technologies (European Court of Auditors <xref ref-type="bibr" rid="CR12">2025</xref>). As the U.S. restricts exports to China, these European firms are drawn into compliance regimes reflecting U.S. interests over EU sovereignty, while simultaneously gaining leverage to strengthen Europe’s strategic position.</p>
         <p>For Taiwan, the main site of advanced chip manufacturing, these dependencies are not just economic – they are central to national security, forming what is known as the ‘Silicon Shield’ (Wu <xref ref-type="bibr" rid="CR40">2024</xref>). This concept, which highlights Taiwan’s centrality to global chipmaking, illustrates the interplay between technology and security policy, where the island’s semiconductor role acts as a deterrent to aggression, given that disruptions could trigger global crises and geopolitical escalation.</p>
         <p>Layered on top of these dynamics is the issue of rare earths and critical materials, which form the invisible substrate of both AI systems and chip production. China dominates the extraction and processing of many key inputs, including neodymium (used in magnets), gallium, and germanium – vital for semiconductors and sensors (Teer et al. <xref ref-type="bibr" rid="CR38">2024</xref>). The U.S. and Europe have responded with strategies to diversify supply chains, build processing capacity, and invest in recycling technologies. However, these efforts trail behind the scale and integration of China’s supply chains. Even with design and fab capacity in place, a material bottleneck could paralyze production.</p>
<p>In sum, the AI-semiconductor nexus is a geopolitical pressure point where technological evolution, economic rivalry, and military strategy converge. The ‘chip war’ is not only about industrial competition – it is about strategic autonomy, resilience, and first-mover advantage in future security architectures.</p>
      </sec>
      <sec id="Sec4">
         <label>4</label>
         <title>Recommendations for an effective and relevant technology assessment</title>
<p>Not being a classic TA scholar, I approach technologies as a technician and analyst: focusing on what they can do now, where they are headed, and how quickly these developments proceed. My analysis is based on qualitative content analysis of relevant national doctrines (Schörnig et al. <xref ref-type="bibr" rid="CR32">2024</xref>) and grounded in capability tracking, the assessment of technical limits, deployment timelines, and the identification of starting points for regulatory measures rather than the application of formal TA frameworks. That said, my vantage point reinforces the growing importance of TA as we are living through rapid technological shifts. AI, once seen as speculative, is starting to inform military decisions, and chip development outpaces procurement cycles. Conflicts like Ukraine’s are not only exposing these tools – they are actively shaping them. The key issue isn’t just capability, but speed of change and emerging strategic turning points.</p>
         <p>The problem I observe is the current mismatch between this pace and nature of innovation and the methodological tools used to assess them. Many analytical frameworks remain too slow, too linear, and too disconnected from the systems they aim to evaluate. They often rely on models of risk anticipation and societal deliberation that presuppose a certain amount of time, stability, and clarity of trajectory – all of which are increasingly absent in the domains of AI and semiconductors. What kind of assessment, then, might be better suited to this reality?</p>
<p>First, analytical frameworks must become more agile and more technologically grounded. For AI, this can be achieved, for instance, by including the analysis of technological architectures, the source code of systems, hardware constraints, and data dependencies. Taking these aspects into account could help keep track of swift innovations that can shift development directions or change the role of individual components within a technical system as a whole. For example, a currently cutting-edge AI system may quickly become obsolete through changes in chip efficiency, bandwidth limitations, or competitors’ model advancements. These are deeply technical issues, but their consequences can strongly affect geopolitical power and the policy adjustments it requires.</p>
<p>Second, assessments must actively engage with security policy and defense strategy, taking into account the dual-use nature of emerging technologies. As discussed earlier, technologies developed for consumer purposes – such as image recognition or predictive analytics – are being adapted by defense companies, weaponized, or integrated into military decision-making systems. The current AI ecosystem underlines that commercial companies are no longer just one part of a bigger picture but the core innovators, drivers, and providers of security-relevant technologies, and thus the rule-setters of what capabilities and limitations exist.</p>
         <p content-type="eyecatcher" specific-use="Style2">The key issue isn’t just capability, but speed of change and emerging strategic turning points.</p>
<p>Third, analytical assessments need to draw on interdisciplinary expertise. This includes fields that are often underrepresented in such contexts: computer science and semiconductor engineering. Only collaboration between these disciplines can track the full lifecycle of a technology, from initial development to deployment and adaptation in operational scenarios – including their impact analysis. This also enables the identification of critical supply-chain dependencies, such as those discussed in the context of chip manufacturing or rare earth supplies – dependencies that may be invisible to purely social or ethical assessments but decisive in times of crisis. Of course, interdisciplinary work is not a one-way responsibility but requires an equal willingness to cooperate on the part of the engineering and natural sciences. This includes finding a common language, understanding and accepting the concepts and constraints of politics, and the willingness to evaluate one’s own scientific achievements in a social and security-related context. Within this context, TA has a unique opportunity to help classify and prioritize emerging technologies by providing structured analysis that highlights critical chokepoints, infrastructure bottlenecks, or design vulnerabilities – insights that are urgently needed for political decision-making.</p>
         <p>Finally, this means that TA should adapt its principles to a new landscape. Participatory and normative approaches still matter, especially in defining acceptable use conditions and governance frameworks. But these approaches must be integrated with real-time capability tracking, technical scenario modeling, and system-level diagnostics, thus becoming more iterative, more exploratory, and more strategically aware.</p>
         <p>I do not claim to have the methodological blueprint for this transformation, nor do these recommendations derive from an empirical study. What I can offer are insights from my fieldwork and technical analysis, in the hope that these fragments help point out where TA might need to stretch, collaborate, and evolve in a world where technological change is no longer incremental but exponential – and where its consequences are being written in real time on the battlefield, in code, and in silicon.</p>
      </sec>
   </body>
   <back>
      <ack>
         <p>
            <boxed-text id="FPar2" specific-use="Style1">
               <caption>
                  <title>Funding</title>
               </caption>
               <p>This article received no funding.</p>
            </boxed-text>
         </p>
         <p>
            <boxed-text id="FPar3" specific-use="Style1">
               <caption>
                  <title>Competing interests</title>
               </caption>
               <p>The author declares no competing interests.</p>
            </boxed-text>
         </p>
         <p>
            <boxed-text id="FPar4" specific-use="Style1">
               <caption>
                  <title>Ethical oversight</title>
               </caption>
               <p>The author confirms that all procedures were performed in compliance with relevant laws and institutional guidelines.</p>
            </boxed-text>
         </p>
      </ack>
      <ref-list id="Bib1">
         <title>References</title>
         <ref id="CR1">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Afina</surname>
                        <given-names>Y</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Persi Paoli</surname>
                        <given-names>G</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2024</year>
                  </date>
                  <comment>https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Governance of artificial intelligence in the military domain. A multi-stakeholder perspective on priority areas</source>
                  <publisher-name>UNIDIR</publisher-name>
                  <publisher-loc>Geneva</publisher-loc>
               </element-citation>
               <mixed-citation>Afina, Yasmin; Persi Paoli, Giacomo (2024): Governance of artificial intelligence in the military domain. A multi-stakeholder perspective on priority areas. Geneva: UNIDIR. Available online at <ext-link xlink:href="https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/">https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR2">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Allen</surname>
                     <given-names>G</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">DeepSeek, Huawei, export controls, and the future of the U.S.-China AI Race</source>
                  <publisher-name>CSIS</publisher-name>
                  <publisher-loc>Washington, DC</publisher-loc>
               </element-citation>
               <mixed-citation>Allen, Gregory (2025): DeepSeek, Huawei, export controls, and the future of the U.S.-China AI Race. Washington, DC: CSIS. Available online at <ext-link xlink:href="https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race">https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR3">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Allen</surname>
                        <given-names>G</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Goldston</surname>
                        <given-names>I</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://www.csis.org/analysis/understanding-us-allies-current-legal-authority-implement-ai-and-semiconductor-export, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Understanding U.S. allies’ current legal authority to implement AI and semiconductor export controls</source>
                  <publisher-name>CSIS</publisher-name>
                  <publisher-loc>Washington, DC</publisher-loc>
               </element-citation>
               <mixed-citation>Allen, Gregory; Goldston, Isaac (2025): Understanding U.S. allies’ current legal authority to implement AI and semiconductor export controls. Washington, DC: CSIS. Available online at <ext-link xlink:href="https://www.csis.org/analysis/understanding-us-allies-current-legal-authority-implement-ai-and-semiconductor-export">https://www.csis.org/analysis/understanding-us-allies-current-legal-authority-implement-ai-and-semiconductor-export</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR4">
            <mixed-citation>Baptista, Eduardo (2025): Explainer. What is DeepSeek and why is it disrupting the AI sector? In: Reuters, 28.01.2025. Available online at <ext-link xlink:href="https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/">https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR10">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <string-name>BMVg – Bundesministerium der Verteidigung</string-name>
                  <date>
                     <year>2023</year>
                  </date>
                  <comment>https://www.bmvg.de/resource/blob/5711942/6fb70a45412601fdf03f63aeebf72451/cyber-defined-defence-papier-data.pdf, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Software defined defence. Positionspapier des BDSV, BDLI, Bitkom und BMVg</source>
                  <publisher-name>Bundesministerium der Verteidigung</publisher-name>
                  <publisher-loc>Berlin</publisher-loc>
               </element-citation>
               <mixed-citation>BMVg – Bundesministerium der Verteidigung (2023): Software defined defence. Positionspapier des BDSV, BDLI, Bitkom und BMVg. Berlin: Bundesministerium der Verteidigung. Available online at <ext-link xlink:href="https://www.bmvg.de/resource/blob/5711942/6fb70a45412601fdf03f63aeebf72451/cyber-defined-defence-papier-data.pdf">https://www.bmvg.de/resource/blob/5711942/6fb70a45412601fdf03f63aeebf72451/cyber-defined-defence-papier-data.pdf</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR5">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Bondar</surname>
                     <given-names>K</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Ukraine’s future vision and current capabilities for waging AI-enabled autonomous warfare</source>
                  <publisher-name>CSIS</publisher-name>
                  <publisher-loc>Washington, DC</publisher-loc>
               </element-citation>
               <mixed-citation>Bondar, Kateryna (2025): Ukraine’s future vision and current capabilities for waging AI-enabled autonomous warfare. Washington, DC: CSIS. Available online at <ext-link xlink:href="https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare">https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR6">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Bovet</surname>
                     <given-names>P</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <source content-type="BookTitle">Exploring decision advantages. Improving speed, precision and efficiency in military targeting by applying artificial intelligence</source>
                  <publisher-name>Swedish Defence University</publisher-name>
                  <publisher-loc>Stockholm</publisher-loc>
                  <volume-id content-type="bibbookdoi">10.62061/leiy7136</volume-id>
               </element-citation>
               <mixed-citation>Bovet, Peter (2025): Exploring decision advantages. Improving speed, precision and efficiency in military targeting by applying artificial intelligence. Stockholm: Swedish Defence University. <ext-link xlink:href="https://doi.org/10.62061/leiy7136">https://doi.org/10.62061/leiy7136</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR7">
            <mixed-citation>Caswell, Amanda (2025): Why OpenAI wants 100 million GPUs – and how it could supercharge ChatGPT. In: Tom’s Guide, 22.07.2025. Available online at <ext-link xlink:href="https://www.tomsguide.com/ai/sam-altmans-trillion-dollar-ai-vision-starts-with-100-million-gpus-heres-what-that-means-for-the-future-of-chatgpt-and-you">https://www.tomsguide.com/ai/sam-altmans-trillion-dollar-ai-vision-starts-with-100-million-gpus-heres-what-that-means-for-the-future-of-chatgpt-and-you</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR8">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Chang</surname>
                        <given-names>W</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Arcesati</surname>
                        <given-names>R</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Hmaidi</surname>
                        <given-names>A</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://merics.org/en/report/chinas-drive-toward-self-reliance-artificial-intelligence-chips-large-language-models, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">China’s drive toward self-reliance in artificial intelligence</source>
                  <publisher-name>Mercator Institute for China Studies</publisher-name>
                  <publisher-loc>Berlin</publisher-loc>
               </element-citation>
               <mixed-citation>Chang, Wendy; Arcesati, Rebecca; Hmaidi, Antonia (2025): China’s drive toward self-reliance in artificial intelligence. Berlin: Mercator Institute for China Studies. Available online at <ext-link xlink:href="https://merics.org/en/report/chinas-drive-toward-self-reliance-artificial-intelligence-chips-large-language-models">https://merics.org/en/report/chinas-drive-toward-self-reliance-artificial-intelligence-chips-large-language-models</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR9">
            <mixed-citation>Collett-White, Mike; Dutta, Prasanta; Zafra, Mariano (2024): How Ukraine pulled off an audacious drone attack deep inside Russia. Months of planning went into a covert operation that relied on cheap, short-range drones. In: Reuters, 05.06.2025. Available online at <ext-link xlink:href="https://www.reuters.com/graphics/UKRAINE-CRISIS/DRONES-RUSSIA/mypmjzayyvr/">https://www.reuters.com/graphics/UKRAINE-CRISIS/DRONES-RUSSIA/mypmjzayyvr/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR11">
            <mixed-citation>Delovski, Boris (2024): A brief history of GPT models. Available online at <ext-link xlink:href="https://www.edlitera.com/blog/posts/gpt-models-history">https://www.edlitera.com/blog/posts/gpt-models-history</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR12">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <string-name>European Court of Auditors</string-name>
                  <date>
                     <year>2025</year>
                  </date>
                  <source content-type="BookTitle">The EU’s strategy for microchips</source>
                  <publisher-name>Publications Office of the European Union</publisher-name>
                  <publisher-loc>Luxembourg</publisher-loc>
                  <volume-id content-type="bibbookdoi">10.2865/0902427</volume-id>
               </element-citation>
               <mixed-citation>European Court of Auditors (2025): The EU’s strategy for microchips. Luxembourg: Publications Office of the European Union. <ext-link xlink:href="https://doi.org/10.2865/0902427">https://doi.org/10.2865/0902427</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR14">
            <mixed-citation>Frazier, Kevin (2025): What comes next in AI regulation? In: Lawfare, 28.07.2025. Available online at <ext-link xlink:href="https://www.lawfaremedia.org/article/what-comes-next-in-ai-regulation">https://www.lawfaremedia.org/article/what-comes-next-in-ai-regulation</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR13">
             <mixed-citation>Frazier, Kevin; Rozenshtein, Alan; Salib, Peter (2024): OpenAI’s latest model shows AGI is inevitable. Now what? In: Lawfare, 23.12.2024. Available online at <ext-link xlink:href="https://www.lawfaremedia.org/article/openai’s-latest-model-shows-agi-is-inevitable.-now-what">https://www.lawfaremedia.org/article/openai’s-latest-model-shows-agi-is-inevitable.-now-what</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR15">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Garikapati</surname>
                        <given-names>D</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Shetiya</surname>
                        <given-names>S</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2024</year>
                  </date>
                  <source content-type="BookTitle">Autonomous vehicles. Evolution of artificial intelligence and learning algorithms</source>
                  <series>arXiv</series>
                  <volume-id content-type="bibbookdoi">10.48550/arXiv.2402.17690</volume-id>
               </element-citation>
               <mixed-citation>Garikapati, Divya; Shetiya, Sneha (2024): Autonomous vehicles. Evolution of artificial intelligence and learning algorithms. In: arXiv. <ext-link xlink:href="https://doi.org/10.48550/arXiv.2402.17690">https://doi.org/10.48550/arXiv.2402.17690</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR16">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>King</surname>
                     <given-names>A</given-names>
                  </name>
                  <date>
                     <year>2024</year>
                  </date>
                  <article-title>Digital targeting. Artificial intelligence, data, and military intelligence</article-title>
                  <issue>2</issue>
                  <page-range>ogae009</page-range>
                  <volume-id content-type="bibarticledoi">10.1093/jogss/ogae009</volume-id>
                  <source content-type="journal">Journal of Global Security Studies</source>
                  <volume>9</volume>
               </element-citation>
               <mixed-citation>King, Anthony (2024): Digital targeting. Artificial intelligence, data, and military intelligence. In: Journal of Global Security Studies 9 (2), p. ogae009. <ext-link xlink:href="https://doi.org/10.1093/jogss/ogae009">https://doi.org/10.1093/jogss/ogae009</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR99">
             <mixed-citation>Knuhtsen, Reyk; Patel, Dylan; Ciminelli, Niko; Ryu, Joe; Ghilduta, Robert; Ontiveros, Jeremie Eliahou (2025): Robotics levels of autonomy. In: SemiAnalysis, 30.07.2025. Available online at <ext-link xlink:href="https://newsletter.semianalysis.com/p/robotics-levels-of-autonomy">https://newsletter.semianalysis.com/p/robotics-levels-of-autonomy</ext-link>, last accessed on 04.02.2026.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR17">
            <mixed-citation>Loh, Matthew (2025): Western arms makers can now live-test their prototype weapons on the battlefield against Russia’s forces. In: Business Insider, 18.07.2025. Available online at <ext-link xlink:href="https://www.businessinsider.com/ukraine-testing-western-weapon-prototypes-russian-forces-combat-2025-7">https://www.businessinsider.com/ukraine-testing-western-weapon-prototypes-russian-forces-combat-2025-7</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR19">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="editor">
                     <surname>Maslej</surname>
                     <given-names>N</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://hai.stanford.edu/ai-index/2025-ai-index-report, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Artificial intelligence index report 2025</source>
                  <publisher-name>Stanford Institute for Human-Centered Artificial Intelligence (HAI)</publisher-name>
                  <publisher-loc>Stanford</publisher-loc>
               </element-citation>
               <mixed-citation>Maslej, Nestor (ed.) (2025): Artificial intelligence index report 2025. Stanford, CA: Stanford Institute for Human-Centered Artificial Intelligence (HAI). Available online at <ext-link xlink:href="https://hai.stanford.edu/ai-index/2025-ai-index-report">https://hai.stanford.edu/ai-index/2025-ai-index-report</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR18">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Matlack</surname>
                        <given-names>J-W</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Schwartz</surname>
                        <given-names>S</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Gill</surname>
                        <given-names>O</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://www.lse.ac.uk/ideas/publications/Research-Reports/Ukraine’s-Drone-Ecosystem-and-the-Defence-of-Europe-Lessons-Lost-Can’t-be-Learned, last accessed on 24.01.2025</comment>
                  <source content-type="BookTitle">Ukraine’s drone ecosystem and the defence of Europe. Lessons lost can’t be learned</source>
                  <publisher-name>LSE IDEAS</publisher-name>
                  <publisher-loc>London</publisher-loc>
               </element-citation>
                <mixed-citation>Matlack, Jon-Wyatt; Schwartz, Sebastian; Gill, Oliver (2025): Ukraine’s drone ecosystem and the defence of Europe. Lessons lost can’t be learned. London: LSE IDEAS. Available online at <ext-link xlink:href="https://www.lse.ac.uk/ideas/publications/Research-Reports/Ukraine’s-Drone-Ecosystem-and-the-Defence-of-Europe-Lessons-Lost-Can’t-be-Learned">https://www.lse.ac.uk/ideas/publications/Research-Reports/Ukraine’s-Drone-Ecosystem-and-the-Defence-of-Europe-Lessons-Lost-Can’t-be-Learned</ext-link>, last accessed on 24.01.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR20">
            <citation-alternatives>
               <element-citation publication-type="chapter">
                  <name content-type="author">
                     <surname>Merle</surname>
                     <given-names>Q</given-names>
                  </name>
                  <date>
                     <year>2024</year>
                  </date>
                  <chapter-title>Chips supply chain. Bifurcation and localization</chapter-title>
                  <volume-id content-type="bibchapterdoi">10.3929/ETHZ-B-000680146</volume-id>
                  <source content-type="BookTitle">CSS Analyses in Security Policy</source>
               </element-citation>
               <mixed-citation>Merle, Quentin (2024): Chips supply chain. Bifurcation and localization. In: CSS Analyses in Security Policy. <ext-link xlink:href="https://doi.org/10.3929/ETHZ-B-000680146">https://doi.org/10.3929/ETHZ-B-000680146</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR21">
            <mixed-citation>Ministry of Defence of Ukraine (2025): Enemy equipment detected in 2 seconds. The ministry of defence showcased delta and Avengers systems at the London defence conference. Available online at <ext-link xlink:href="https://mod.gov.ua/en/news/enemy-equipment-detected-in-2-seconds-the-ministry-of-defence-showcased-delta-and-avengers-systems-at-the-london-defence-conference">https://mod.gov.ua/en/news/enemy-equipment-detected-in-2-seconds-the-ministry-of-defence-showcased-delta-and-avengers-systems-at-the-london-defence-conference</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR22">
            <mixed-citation>Mortimer, Ian; Page, Matthew (2025): How has DeepSeek affected the AI market for investors? In: Guinness Global Investors, 24.02.2025. Available online at <ext-link xlink:href="https://www.guinnessgi.com/insights/how-has-deepseek-affected-ai-market-investors">https://www.guinnessgi.com/insights/how-has-deepseek-affected-ai-market-investors</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR23">
            <mixed-citation>Moss, Sebastian (2025): OpenAI says its compute increased 15x since 2024, company used 200k GPUs for GPT‑5. As company releases its latest generative AI model. In: Data Center Dynamics, 07.08.2025. Available online at <ext-link xlink:href="https://www.datacenterdynamics.com/en/news/openai-says-its-compute-increased-15x-since-2024-company-used-200k-gpus-for-gpt-5/">https://www.datacenterdynamics.com/en/news/openai-says-its-compute-increased-15x-since-2024-company-used-200k-gpus-for-gpt-5/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR24">
            <mixed-citation>Mozur, Paul; Satariano, Adam (2024): A.I. begins ushering in an age of killer robots. In: The New York Times, 02.07.2024. Available online at <ext-link xlink:href="https://www.nytimes.com/2024/07/02/technology/ukraine-war-ai-weapons.html">https://www.nytimes.com/2024/07/02/technology/ukraine-war-ai-weapons.html</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR25">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Nadibaidze</surname>
                        <given-names>A</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Bode</surname>
                        <given-names>I</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Qiaochu</surname>
                        <given-names>Z</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2024</year>
                  </date>
                  <comment>https://www.sdu.dk/en/forskning/forskningsenheder/samf/cws/cws-activities/2024/ai-gammel, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">AI in military decision support systems. A review of developments and debates</source>
                  <publisher-name>Center for War Studies</publisher-name>
                  <publisher-loc>Odense</publisher-loc>
               </element-citation>
               <mixed-citation>Nadibaidze, Anna; Bode, Ingvild; Zhang, Qiaochu (2024): AI in military decision support systems. A review of developments and debates. Odense: Center for War Studies. Available online at <ext-link xlink:href="https://www.sdu.dk/en/forskning/forskningsenheder/samf/cws/cws-activities/2024/ai-gammel">https://www.sdu.dk/en/forskning/forskningsenheder/samf/cws/cws-activities/2024/ai-gammel</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR26">
             <mixed-citation>O’Donnell, James (2024): We saw a demo of the new AI system powering Anduril’s vision for war. In: MIT Technology Review, 10.12.2024. Available online at <ext-link xlink:href="https://www.technologyreview.com/2024/12/10/1108354/we-saw-a-demo-of-the-new-ai-system-powering-andurils-vision-for-war/">https://www.technologyreview.com/2024/12/10/1108354/we-saw-a-demo-of-the-new-ai-system-powering-andurils-vision-for-war/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR27">
            <mixed-citation>Reuters (2025): US defense department awards contracts to Google, Musk’s xAI. In: Reuters, 14.07.2025. Available online at <ext-link xlink:href="https://www.reuters.com/business/autos-transportation/us-department-defense-awards-contracts-google-xai-2025-07-14/">https://www.reuters.com/business/autos-transportation/us-department-defense-awards-contracts-google-xai-2025-07-14/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR28">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Rotolo</surname>
                        <given-names>D</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Hicks</surname>
                        <given-names>D</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Martin</surname>
                        <given-names>B</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2015</year>
                  </date>
                  <article-title>What is an emerging technology?</article-title>
                  <issue>10</issue>
                  <page-range>1827–1843</page-range>
                  <volume-id content-type="bibarticledoi">10.1016/j.respol.2015.06.006</volume-id>
                  <source content-type="journal">Research Policy</source>
                  <volume>44</volume>
               </element-citation>
                <mixed-citation>Rotolo, Daniele; Hicks, Diana; Martin, Ben (2015): What is an emerging technology? In: Research Policy 44 (10), pp. 1827–1843. <ext-link xlink:href="https://doi.org/10.1016/j.respol.2015.06.006">https://doi.org/10.1016/j.respol.2015.06.006</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR29">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Rubstov</surname>
                     <given-names>A</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://globalriskinstitute.org/publication/ai-supply-chain-shocks-insights-from-deepseek-r1/, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">AI supply chain shocks. Insights from DeepSeek R1</source>
                   <publisher-name>Global Risk Institute</publisher-name>
                  <publisher-loc>Toronto</publisher-loc>
               </element-citation>
               <mixed-citation>Rubtsov, Alexey (2025): AI supply chain shocks. Insights from DeepSeek R1. Toronto: Global Risk Institute. Available online at <ext-link xlink:href="https://globalriskinstitute.org/publication/ai-supply-chain-shocks-insights-from-deepseek-r1/">https://globalriskinstitute.org/publication/ai-supply-chain-shocks-insights-from-deepseek-r1/</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR30">
            <mixed-citation>Rudra, Sourav (2025): This open source software was used in Ukraine’s drone attack on Russia. Is this a turning point for open source software in warfare? In: It’s FOSS, 04.06.2025. Available online at <ext-link xlink:href="https://news.itsfoss.com/open-source-drone-attack/">https://news.itsfoss.com/open-source-drone-attack/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR31">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>Sauer</surname>
                     <given-names>F</given-names>
                  </name>
                  <date>
                     <year>2020</year>
                  </date>
                  <article-title>Stepping back from the brink. Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible</article-title>
                  <issue>913</issue>
                  <page-range>235–259</page-range>
                  <volume-id content-type="bibarticledoi">10.1017/S1816383120000466</volume-id>
                  <source content-type="journal">International Review of the Red Cross</source>
                  <volume>102</volume>
               </element-citation>
               <mixed-citation>Sauer, Frank (2020): Stepping back from the brink. Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible. In: International Review of the Red Cross 102 (913), pp. 235–259. <ext-link xlink:href="https://doi.org/10.1017/S1816383120000466">https://doi.org/10.1017/S1816383120000466</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR32">
            <citation-alternatives>
               <element-citation publication-type="chapter">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Schörnig</surname>
                        <given-names>N</given-names>
                     </name>
                      <name content-type="author">
                         <surname>Suckau</surname>
                         <given-names>L</given-names>
                      </name>
                     <name content-type="author">
                        <surname>Korkusuz</surname>
                        <given-names>A</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Reinhold</surname>
                        <given-names>T</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2024</year>
                  </date>
                  <chapter-title>Emerging disruptive technologies. Neue Militärtechnologien in nationalen Sicherheitsstrategien – eine vergleichende Analyse</chapter-title>
                  <page-range>58–77</page-range>
                  <volume-id content-type="bibchapterdoi">10.48809/cntr2024</volume-id>
                  <source content-type="BookTitle">CNTR Monitor. 2024. Perspektiven auf Dual Use</source>
                  <publisher-name>Peace Research Institute</publisher-name>
                  <publisher-loc>Frankfurt a. M.</publisher-loc>
               </element-citation>
               <mixed-citation>Schörnig, Niklas; Suckau, Liska; Korkusuz, Abdullah; Reinhold, Thomas (2024): Emerging disruptive technologies. Neue Militärtechnologien in nationalen Sicherheitsstrategien – eine vergleichende Analyse. In: CNTR Monitor. 2024. Perspektiven auf Dual Use. Frankfurt a. M.: Peace Research Institute, pp. 58–77. <ext-link xlink:href="https://doi.org/10.48809/cntr2024">https://doi.org/10.48809/cntr2024</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR33">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Slusher</surname>
                     <given-names>M</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://www.csis.org/analysis/lessons-ukraine-conflict-modern-warfare-age-autonomy-information-and-resilience, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Lessons from the Ukraine conflict. Modern warfare in the age of autonomy, information, and resilience</source>
                  <publisher-name>CSIS</publisher-name>
                  <publisher-loc>Washington, DC</publisher-loc>
               </element-citation>
               <mixed-citation>Slusher, Matthew (2025): Lessons from the Ukraine conflict. Modern warfare in the age of autonomy, information, and resilience. Washington, DC: CSIS. Available online at <ext-link xlink:href="https://www.csis.org/analysis/lessons-ukraine-conflict-modern-warfare-age-autonomy-information-and-resilience">https://www.csis.org/analysis/lessons-ukraine-conflict-modern-warfare-age-autonomy-information-and-resilience</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR34">
            <mixed-citation>Sprenger, Sebastian (2025): Saab, Helsing let Gripen fighter fly with AI in charge. In: Defense News, 11.06.2025. Available online at <ext-link xlink:href="https://www.defensenews.com/global/europe/2025/06/11/saab-helsing-let-gripen-fighter-fly-with-ai-in-charge/">https://www.defensenews.com/global/europe/2025/06/11/saab-helsing-let-gripen-fighter-fly-with-ai-in-charge/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
         <ref id="CR35">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Stepanenko</surname>
                     <given-names>K</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <comment>https://understandingwar.org/research/russia-ukraine/the-battlefield-ai-revolution-is-not-here-yet-the-status-of-current-russian-and-ukrainian-ai-drone-efforts/, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">The battlefield AI revolution is not here yet. The status of current Russian and Ukrainian AI drone efforts</source>
                  <publisher-name>Institute for the Study of War</publisher-name>
                  <publisher-loc>Washington, DC</publisher-loc>
               </element-citation>
               <mixed-citation>Stepanenko, Kateryna (2025): The battlefield AI revolution is not here yet. The status of current Russian and Ukrainian AI drone efforts. Washington, DC: Institute for the Study of War. Available online at <ext-link xlink:href="https://understandingwar.org/research/russia-ukraine/the-battlefield-ai-revolution-is-not-here-yet-the-status-of-current-russian-and-ukrainian-ai-drone-efforts/">https://understandingwar.org/research/russia-ukraine/the-battlefield-ai-revolution-is-not-here-yet-the-status-of-current-russian-and-ukrainian-ai-drone-efforts/</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR36">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Suckau</surname>
                     <given-names>L</given-names>
                  </name>
                  <date>
                     <year>2024</year>
                  </date>
                  <source content-type="BookTitle">The limits of autonomy. Critically assessing factors limiting full autonomy of military uncrewed ground vehicles</source>
                  <publisher-name>Peace Research Institute Frankfurt</publisher-name>
                  <publisher-loc>Frankfurt a. M.</publisher-loc>
                  <volume-id content-type="bibbookdoi">10.48809/prifspot2406</volume-id>
               </element-citation>
               <mixed-citation>Suckau, Liska (2024): The limits of autonomy. Critically assessing factors limiting full autonomy of military uncrewed ground vehicles. Frankfurt a. M.: Peace Research Institute Frankfurt. <ext-link xlink:href="https://doi.org/10.48809/prifspot2406">https://doi.org/10.48809/prifspot2406</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR37">
            <mixed-citation>Sullivan, Helen (2025): A journey through the hyper-political world of microchips. In: The Guardian, 28.02.2025. Available online at <ext-link xlink:href="https://www.theguardian.com/technology/2025/feb/28/inside-the-mind-bending-tin-blasting-and-hyper-political-world-of-microchips">https://www.theguardian.com/technology/2025/feb/28/inside-the-mind-bending-tin-blasting-and-hyper-political-world-of-microchips</ext-link>, last accessed on 10.11.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR38">
            <mixed-citation>Teer, Joris; Seaman, John; Caruso, Alessia (2024): EUISS intra-EU workshop outcomes. Starting with the end in mind. De-risked gallium, germanium, and rare earth value chains by 2030. Paris, France, 09.12.2024. Available online at <ext-link xlink:href="https://www.iss.europa.eu/sites/default/files/2025-03/EUISS%20workshop%20outcomes%3B%20De-risked%20Ga%20Ge%20and%20REE%20value%20chains%20-%20FINAL.pdf">https://www.iss.europa.eu/sites/default/files/2025-03/EUISS%20workshop%20outcomes%3B%20De-risked%20Ga%20Ge%20and%20REE%20value%20chains%20-%20FINAL.pdf</ext-link>, last accessed on 24.11.2025.</mixed-citation>
         </ref>
         <ref id="CR39">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Watts</surname>
                        <given-names>T</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Bode</surname>
                        <given-names>I</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2023</year>
                  </date>
                  <comment>https://www.sdu.dk/en/forskning/forskningsenheder/samf/cws/cws-activities/2023/loitering-munitions-unpredictability, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">Loitering munitions and unpredictability. Autonomy in weapon systems and challenges to human control</source>
                  <publisher-name>Center for War Studies</publisher-name>
                  <publisher-loc>Odense</publisher-loc>
               </element-citation>
               <mixed-citation>Watts, Tom; Bode, Ingvild (2023): Loitering munitions and unpredictability. Autonomy in weapon systems and challenges to human control. Odense: Center for War Studies. Available online at <ext-link xlink:href="https://www.sdu.dk/en/forskning/forskningsenheder/samf/cws/cws-activities/2023/loitering-munitions-unpredictability">https://www.sdu.dk/en/forskning/forskningsenheder/samf/cws/cws-activities/2023/loitering-munitions-unpredictability</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR40">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Wu</surname>
                     <given-names>E</given-names>
                  </name>
                  <date>
                     <year>2024</year>
                  </date>
                  <comment>https://www.usip.org/publications/2024/01/silicon-shield-looking-beyond-semiconductors, last accessed on 24.11.2025</comment>
                  <source content-type="BookTitle">‘Silicon Shield’. Looking beyond semiconductors</source>
                  <publisher-name>United States Institute of Peace</publisher-name>
                  <publisher-loc>Washington, DC</publisher-loc>
               </element-citation>
               <mixed-citation>Wu, Emily (2024): ‘Silicon Shield’. Looking beyond semiconductors. Washington, DC: United States Institute of Peace. Available online at <ext-link xlink:href="https://www.usip.org/publications/2024/01/silicon-shield-looking-beyond-semiconductors">https://www.usip.org/publications/2024/01/silicon-shield-looking-beyond-semiconductors</ext-link>, last accessed on 24.11.2025.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR41">
            <mixed-citation>Zeff, Maxwell (2025): The Pentagon says AI is speeding up its ‘kill chain’. In: TechCrunch, 19.01.2025. Available online at <ext-link xlink:href="https://techcrunch.com/2025/01/19/the-pentagon-says-ai-is-speeding-up-its-kill-chain/">https://techcrunch.com/2025/01/19/the-pentagon-says-ai-is-speeding-up-its-kill-chain/</ext-link>, last accessed on 17.10.2025.</mixed-citation>
         </ref>
      </ref-list>
   </back>
</article>
