RESEARCH ARTICLE
Jan C. Zoellick*,1, Hans Drexler2, Konstantin Drexler3
* Corresponding author: jan.zoellick@charite.de
1 Institute of Medical Sociology and Rehabilitation Science, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, DE
2 Institut und Poliklinik für Arbeits-, Sozial- und Umweltmedizin, FAU Erlangen-Nürnberg, Erlangen, DE
3 Department of Dermatology, University Hospital Regensburg, Regensburg, DE
Abstract • Tools based on machine learning (so-called artificial intelligence, AI) are increasingly being developed to diagnose malignant melanoma in dermatology. This contribution discusses (1) three scenarios for the use of AI in different medical settings, (2) shifts in competencies from dermatologists to non-specialists and empowered patients, (3) regulatory frameworks to ensure safety and effectiveness and their consequences for AI tools, and (4) cognitive dissonance and potential delegation of human decision-making to AI. We conclude that AI systems should not replace human medical expertise but play a supporting role. We identify needs for regulation and provide recommendations for action to help all (human) actors navigate safely through the choppy waters of this emerging market. Potential dilemmas arise when AI tools provide diagnoses that conflict with human medical expertise. Reconciling these conflicts will be a major challenge.
Zusammenfassung • Tools based on machine learning (so-called artificial intelligence, AI) are increasingly being developed for the diagnosis of malignant melanoma in dermatology. This contribution discusses (1) three scenarios for the use of AI in different medical settings, (2) shifts in competencies from dermatologists to non-specialists and empowered patients, (3) regulatory frameworks to ensure effectiveness and safety and their consequences for AI tools, and (4) cognitive dissonance and the potential delegation of human decisions to AI. We conclude that AI systems should not replace human medical expertise but should play a supporting role. We identify needs for regulation and provide recommendations for action to help all (human) actors navigate safely through the choppy waters of this emerging market. Potential dilemmas arise when AI tools provide diagnoses that contradict human medical expertise. Resolving these conflicts will be a major challenge.
Malignant melanoma is a skin tumour that accounts for 80 % of deaths from skin cancer (Saginala et al. 2021). Early detection is the most important factor for patients' prognosis. The diagnostic procedure typically involves clinical history, visual inspection, and dermatoscopy (examination of a skin lesion using an epiluminescence microscope). In suspicious cases, the mole is surgically removed and sent to a specialist laboratory for histopathological testing to determine malignancy. This standardized procedure with clear diagnostic outcomes in suspicious cases yields ideal datasets for training and deploying machine learning tools (so-called artificial intelligence, AI). AI in our understanding is a computational approach that uses knowledge gained from training cases to identify patterns and make predictions from input data. Specialized AI for dermatological image recognition has surpassed human dermatologists in identifying skin cancer (Esteva et al. 2017; Pham et al. 2021). In their review of 272 studies, Jones et al. (2022) found solid performance of AI systems (89 % accuracy; 95 % confidence intervals: 60 %–100 %), which suggests that patients and clinicians can expect AI to be an asset for diagnosing melanoma. However, none of the studies included acceptance measures on the part of clinicians or patients, which demonstrates a limitation in the current approach to technology assessment of AI in diagnostics. These controlled experimental findings should additionally be validated in medical practice to arrive at a realistic assessment of performance and practical fit. This is particularly relevant for a stringent technology assessment that evaluates AI in diagnostics in practical settings. While technology assessment studies for AI diagnostics exist (Schreier et al. 2020; Schwendicke et al. 2021), their underlying frameworks oftentimes neglect categories specific to AI technologies such as cybersecurity and explainability (Farah et al. 2023).
With this conceptual contribution, we aim to shed light on four dimensions of AI employment in melanoma diagnosis – possible futures, social impacts, regulatory options, and ethical conundrums and agency constellations – to inform technology assessment researchers about further areas for assessing AI systems in (melanoma) diagnostics along the relevant axes of analysis. We frame our discussion within the German healthcare and regulatory system.
We conducted a narrative review of the literature on AI in melanoma diagnosis focusing on the aspects 1) scenarios, 2) social impacts, 3) regulatory options, and 4) ethical facets. Accordingly, we used the following search term to find articles in the databases PubMed and Google Scholar: [Artificial intelligence OR AI] AND [melanoma] AND [diagnos*] AND [[Scenario OR vision OR future] OR [social] OR [legal OR regulat*] OR [ethic*] OR [ELSI]]. We removed duplicates, screened the remaining articles, and selected the most fitting ones for the four topics based on our expertise. We also screened reference lists of the selected articles to find additional sources. For the scenarios, we conducted an initial brainstorming to develop scenarios as impulses for possible futures and then consulted further literature for details or contrasts.
AI diagnosis can be applied in different settings by different stakeholders. We focus on the following three scenarios, as they cover outpatient and inpatient healthcare provision in Germany as well as self-administered care outside professional structures: (1) a second opinion for a dermatologist in outpatient care, (2) triage and prioritization within a dermatology clinic, and (3) patient self-monitoring.
In this scenario, dermatologists upload pictures of suspicious moles to an AI database to obtain a second opinion. The system extracts relevant image features, compares them to a database of expert annotations, and generates a second-opinion report highlighting potential areas of concern and providing a diagnosis. As such, the AI system complements the initial human assessment, providing an additional layer of confidence and reducing diagnostic errors. This scenario follows the interaction mode between clinicians and their machines described by Braun et al. (2021). Such a second-opinion system might, however, transform over time: In a first step, time-constrained dermatologists realize that the AI tools provide (1) dissenting and potentially more reliable diagnoses (2) more efficiently than they can. These attributions could make the AI-based second opinion the first or only opinion. Consequently, dermatological expertise might be removed from the process and diagnostic tasks delegated to non-dermatologists, e.g., medical technical staff. As a result, dermatological hegemony in diagnosing melanoma is challenged. Companies offering AI tools might target general practitioners, providing AI-based dermatological expertise. A system initially providing a second opinion to specialist doctors might thus ultimately spread dermatological expertise across disciplines whilst removing specialist doctors from this task.
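To make this workflow concrete, the following minimal Python sketch shows what the dermatologist-facing interaction could look like. The `classify_lesion` model call, the file name, and the confidence value are hypothetical placeholders for illustration, not an actual product API.

```python
from pathlib import Path

def classify_lesion(image_path: Path) -> tuple[str, float]:
    """Placeholder for a trained image classifier: returns a diagnosis label
    and a confidence score. A real system would run a validated model here."""
    return "melanoma", 0.91  # hard-coded for illustration only

def second_opinion_report(image_path: Path, initial_diagnosis: str) -> str:
    """Compose a second-opinion report that flags dissent between the AI
    output and the dermatologist's initial assessment."""
    label, confidence = classify_lesion(image_path)
    verdict = "concordant" if label == initial_diagnosis else "DISSENTING"
    return (
        f"AI second opinion for {image_path.name}: {label} "
        f"(confidence {confidence:.0%}), {verdict} with the initial "
        f"diagnosis '{initial_diagnosis}'."
    )

print(second_opinion_report(Path("mole_0042.jpg"), initial_diagnosis="harmless mole"))
```

Even in this toy version, the dissent flag illustrates the pivotal design decision: the report juxtaposes the AI output with the human assessment instead of silently replacing it.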
In this scenario, the AI system extends the process of prioritizing patients in a dermatology clinic. When patients arrive for skin examinations, images of their moles are captured and fed into the AI system. The algorithm compares the images to a database of annotated melanoma cases and provides a risk assessment score for each mole. Based on this score, the system prioritizes patients, flagging those with higher risks for immediate attention by dermatologists. The specialist doctors can then disregard moles deemed benign by the AI and focus on the prioritized cases. This generates efficiency gains needed in a strained system: Already today, waiting times for outpatient dermatological appointments average 4.9 weeks, with urban-rural variations (Krensel et al. 2015). By 2035, the number of German regions underserved by or entirely without dermatological specialists is expected to increase by 129 % and 700 %, respectively, in a forecast assuming moderate demographic changes (Kis et al. 2017).
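A minimal sketch of the prioritization logic follows, assuming a hypothetical risk score in [0, 1] produced by the AI model; the `MoleAssessment` structure, the `FLAG_THRESHOLD` cut-off, and the example scores are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical cut-off above which a mole is flagged for immediate attention;
# a real clinic would derive this from validated sensitivity/specificity data.
FLAG_THRESHOLD = 0.5

@dataclass
class MoleAssessment:
    patient_id: str
    risk_score: float  # stand-in for an AI model's output in [0, 1]

def triage(assessments: list[MoleAssessment]) -> list[MoleAssessment]:
    """Rank patients by descending AI risk score and flag high-risk cases."""
    ranked = sorted(assessments, key=lambda a: a.risk_score, reverse=True)
    for a in ranked:
        flag = "immediate attention" if a.risk_score >= FLAG_THRESHOLD else "routine"
        print(f"{a.patient_id}: risk={a.risk_score:.2f} -> {flag}")
    return ranked

# Invented example scores
triage([
    MoleAssessment("patient-A", 0.12),
    MoleAssessment("patient-B", 0.87),
    MoleAssessment("patient-C", 0.55),
])
```

The sketch makes visible where the clinical risk sits: everything below the threshold drops out of the dermatologist's view, so a miscalibrated cut-off silently converts triage into de facto diagnosis.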
In this scenario, medical laypeople regularly use an AI smartphone app to monitor their moles instead of attending screenings in outpatient dermatological care. Instructed by the app, users capture standardized images of their moles and upload them to a database. The app compares the images to a database of annotated melanoma cases and provides a risk assessment for each mole together with recommendations for further action. Such a scenario follows the interaction mode between patients and machines (Braun et al. 2021). AI-supported self-monitoring empowers users to participate actively in their skin health and facilitates early detection of melanomas, potentially leading to timely medical intervention and improved health outcomes. However, self-empowerment oftentimes comes with personal responsibility, which presupposes patients' willingness and their acceptance of new technologies. With 9.6 annual consultations per capita, Germany has the second-highest number of doctor consultations in the EU (OECD 2023). The German healthcare system thus relies heavily on the trust relationship between patients and doctors. Shifting health responsibility from the patient-doctor dyad towards patient-machine interaction might therefore encounter barriers of acceptance.
The three possible futures respond, in varying constellations, to current debates in healthcare provision characterized by resource scarcity. AI tools potentially provide efficiency gains by automating processes – diagnosing difficult, time-consuming cases (scenario 1), prioritizing patients (scenario 2), or providing an initial appraisal for patients (scenario 3).
Three conditions seem necessary for successful implementation. First, all scenarios assume acceptance of AI tools by patients and doctors alike. Second, AI tools need to perform reliably with sufficient specificity and sensitivity. Third, regulatory assurance must be provided to enable the use and billing of AI tools as medical services. Regulatory guidance would also be the basis for assessing harm caused by the tools' appraisal systems, i. e., false positive or false negative diagnoses or prioritizing the wrong patients. Finally, it is important to acknowledge that imaging represents only one facet of melanoma differential diagnosis, alongside the anamnestic conversation about the progression of the mole's appearance, itching, and bleeding.
Social impacts of AI technology in melanoma diagnosis vary between individual and societal perceptions of the medical profession. Both the competencies of dermatologists and the framework of evidence-based medicine are being scrutinized. Patients might be – or at least feel – empowered and informed about their health. These impacts have consequences for the configuration of the patient-doctor relationship.
Technology has played a crucial role in diagnosing medical conditions. X-rays, magnetic resonance imaging, and electroencephalography all offer literal insights into the human body and aid in diagnostic procedures across medical disciplines such as orthopedics, neurology, or oncology. These technologies are primarily used as tools expanding the diagnostic repertoire of medical professionals. Within this history, AI serves as a continuation of established procedures integrating technology into diagnostics.
However, the distinction between AI systems and other technologies lies in the attribution of expertise. Traditional imaging techniques do not offer diagnoses directly but provide data to be interpreted by a human expert. AI instead offers its own diagnosis, usually accompanied by a reliability or confidence score. Yet, human actors – medical professionals and IT experts alike – are usually unable to retrace the AI analyses, as the weights of the different nodes or the dataset used for training the model are mostly unavailable. This necessitates trust in the outcome and in the accuracy of the reliability score. As in Ellul's self-perpetuating, efficiency-driven technological society, dermatologists will "be confined to the role of a recording device; [they] will note the effects of techniques upon one another, and register the results" (Ellul 1964, p. 93). As in scenario 1, this trajectory does not displace human actors from the process of diagnosis, but it shifts the competencies needed – "[h]uman beings are, indeed, always necessary. But literally anyone can do the job, provided he is trained to it. Henceforth, men will be able to act only in virtue of their commonest and lowest nature, and not in virtue of what they possess of superiority and individuality" (Ellul 1964, pp. 92–93).
The inherent complexity of AI tools contributes to their intrigue, as they evoke the notion of fortune telling, a topos deeply ingrained in human imaginaries. Examples of entities with predictive capabilities include the ancient oracles in Delphi or Cichyrus, the prophecy about the chosen one in the Harry Potter series, or the clairvoyant 'precogs' in The Minority Report. An AI system capable of providing believable predictions resembles a technical version of these transcendent revelations from mythological narratives. Returning to such narratives stands in contrast to evidence-based medicine, which demonstrated its effectiveness by achieving better health outcomes (e.g., taming deadly diseases or raising life expectancy) in an understandable, reproducible way. The medical professions thus face the difficult task of reconciling their success with reproducible, experimental methods with novel technologies that outperform humans in certain confined tasks using inscrutable computational methods. In light of these success rates in image recognition, individual dermatologists understandably begin to question their own expertise.
Where medical professionals might struggle with shifts in competencies, patients might welcome such a transformation. With the introduction of AI tools, the scarce resource of specialist medical expertise becomes omnipresent in their pockets. Scenario 3 mentioned above demonstrates how AI tools might enhance patients’ perceived self-efficacy, health literacy, and health outcomes.
However, the success of such a transformation depends on both the system's accuracy and the users' expectations. Sensitivity and specificity as indicators of accuracy need to reach high thresholds, and patients' performance and effort expectancies must be met for AI tools to unfold their potential (Venkatesh 2022). Technical and user mistakes can create a false impression of safety to the detriment of the patient. In that sense, a wrongly applied AI tool resembles an FFP2 mask covering the mouth but not the nose.
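As a reminder of what these accuracy indicators measure, a minimal sketch computing sensitivity, specificity, and precision from a 2×2 confusion matrix follows; the counts in the example are invented for illustration.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Standard accuracy indicators derived from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # share of melanomas correctly flagged
        "specificity": tn / (tn + fp),   # share of harmless moles correctly cleared
        "precision": tp / (tp + fp),     # share of melanoma calls that are correct
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Invented counts for illustration: 90 true positives, 40 false positives,
# 860 true negatives, 10 false negatives
print(diagnostic_metrics(tp=90, fp=40, tn=860, fn=10))
```

In this invented example, the ten false negatives are the clinically costly errors: a missed melanoma creates exactly the false sense of safety described above, which is why sensitivity thresholds matter most for patient-facing tools.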
With the advent of AI systems, potentially life-threatening diagnoses are presented to patients with little contextualization, generated by methods the patients cannot comprehend. Unsettled patients then consult doctors who are tasked with managing information whose origin they can neither reproduce nor comprehend. Efforts to make AI analyses explainable could lead to more transparency and understanding for human patients and doctors alike (WHO 2021). Currently, however, opaque analytical processes prevail in AI systems. Findings about the patient-doctor relationship illustrate that healthcare provision is more than the communication of facts (Ridd et al. 2009). Rather, reciprocal trust and empathic communication are relevant vehicles to generate better health outcomes for the patient (Chandra et al. 2018). Indeed, analyses of text responses from doctors compared to AI text generation in online forums indicate better quality and more empathy in the AI responses (Ayers et al. 2023). However, human-human relationships characterized by regard, trust, and empathy might be preferable depending on the cultural context.
As a social impact, competencies potentially shift in several directions. The competency of diagnosing melanoma based on images might shift from dermatologists to technical staff equipped with AI tools. This process either frees dermatologists' resources for other aspects of differential diagnosis, where further AI tools might be utilized (see scenarios 1 and 2 above), or renders dermatologists redundant. AI could thus extend or replace human dermatological expertise. On another dimension, patients might enhance their health literacy using AI systems (Au et al. 2023). Literate patients encounter medical professionals at eye level, thereby actualizing the ideal of the informed patient. However, this requires transparency and explainable systems (Bjerring and Busch 2021). Otherwise, the system becomes a threat to the ideal of human-centered, reproducible, and understandable science as well as to patient-centered care (Bjerring and Busch 2021).
In current regulatory practice, the two principles of harmlessness and effectiveness are used to assess the impact of novel interventions, e.g., in the form of devices or medication. Regulations on medical devices (e.g., the EU Regulation 2017/745 or the German Medizinproduktegesetz) generally require producers to demonstrate the harmlessness of their products for patients or, in case of expected harm (e.g., in radiation therapy), a risk mitigation and reduction strategy. In contrast, licensing for medication follows the framework of effectiveness in a series of medical studies determining a safe clinical dose (phase I), assessing side effects and efficacy (phase II), and ultimately demonstrating effectiveness (phase III) (Müllner 2005). Substances not demonstrating effectiveness can still be marketed, but as cosmetics or foods under different legal frameworks, not as medications.
With diverse pathways to choose from, it is not surprising that AI companies pursue different legal strategies. Some AI tools for dermatological diagnosis have already undergone the medical devices path of demonstrating harmlessness (e.g., A.S.S.I.S.T. (OnlineDoctor 2022)). Others market their products as simple non-medical services "not intended to perform diagnosis, but rather to provide users the ability to image, track, and better understand their moles" (AI Dermatologist 2023). Such strategies are rather common in emerging markets. However, regulators should be aware of the different impacts of applying the principle of harmlessness versus that of effectiveness. When assessing AI performance, established parameters such as sensitivity, specificity, and precision should be complemented by a critical appraisal of the biases and risks of the respective learning cycles and databases (Wehkamp et al. 2023). In this crucial moment, regulators should align themselves with these developments and shape the legal landscape concerning safe and effective AI technology.
Besides legal regulation, medical guidelines (Leitlinien) systematically synthesize current knowledge based on clinical evidence. Balancing harmlessness and effectiveness, they provide recommendations for action to medical practitioners without being legally binding. AI tools are currently not part of medical guidelines. However, given the promising experimental results, guideline developers will soon need to adopt a stance on AI tools. Here, a critical assessment of the evidence is an important first step for including or excluding AI tools from recommendations. Successful randomized controlled trials in image recognition should be validated in medical practice to arrive at a realistic assessment of performance and practical fit. With AI tools discussed in medical guidelines, clinicians will have more guidance on whether to include them in their practice or deliberately exclude them. Once AI is recommended in medical guidelines, not using the tools would moreover constitute clinical malpractice unless the patient agrees to below-standard care (Thissen 2021). After all, responsibility for medical interventions lies with the human doctor and the informed patient, as scenarios 1 and 3 show.
Beneficence and non-maleficence, autonomy, fairness, and responsibility are among the guiding ethical principles discussed in healthcare provision (Beauchamp and Childress 2001). Complying with these principles is paramount for AI systems to integrate well into the healthcare system. For instance, enhancing autonomy means that patients should be given a choice to agree or disagree with the use of AI systems in their diagnostic procedure without negative consequences such as higher health insurance premiums, which would shift the burden of responsibility solely onto patients and challenge the solidarity principle in health insurance (Böning et al. 2019). Fairness in this context means equal access to enhanced diagnostic procedures such as AI (WHO 2021). In the following, we focus on responsibility, particularly regarding doctors and their decisions, since diagnosing is primarily a task for medical professionals.
Legally and ethically, doctors are responsible for their medical decisions, and they are held accountable for malpractice and negligence. This strong attribution of responsibility stands in contrast with the opaque and thus fascinating nature of AI systems outlined above. Conflicts arise when an AI system provides an interpretation different from the dermatologist's. With responsibility clearly attributed to the human actor, the dermatologist faces the difficult task of reconciling their own beliefs with discordant input from the AI system. Figure 1 shows a contingency table of the dermatologist's and the AI's diagnoses, expanding the ethical discussion by Tupasela and Di Nucci (2020) with a temporal dimension.
| | Human dermatologist: melanoma | Human dermatologist: harmless mole |
|---|---|---|
| AI system: melanoma | Concordance: melanoma | Dissent. Initially: melanoma; later: melanoma |
| AI system: harmless mole | Dissent. Initially: melanoma; later: harmless mole | Concordance: harmless mole |

Figure 1: Contingency table of the human dermatologist's and the AI system's diagnoses, with the likely dominant diagnosis over time in dissenting cases.
The concordant cases are straightforward. In dissenting cases, the serious diagnosis of melanoma is likely dominant and guiding in the first instance, irrespective of the information source. The responsible dermatologist will likely escalate the diagnostic process 'to be on the safe side' and excise the suspicious skin lesion. This leads to a general increase in operations and a consequent rise in associated healthcare costs in a field where, already today, only 1 in 10 operations identifies a case of disease (Petty et al. 2020). Given a learning curve, and assuming the AI system is even slightly more accurate than the dermatologist, the AI system's appraisal over time becomes the dominant assessment irrespective of the diagnosis. In that case, the dermatologist's competence is depreciated; they are "confined to the role of a recording device" (Ellul 1964, p. 93) whilst still assuming responsibility, liability, and accountability for the decision. Ethical conundrums arise that challenge the agency of dermatologists who do not fully understand the AI 'decision-making' process. Explainability and critical reflection of AI tools are necessary for dermatologists to experience self-efficacy and to interpret the results adequately (Bjerring and Busch 2021). This is particularly the case as many profit-oriented companies might sense an opportunity in this emerging market by conveying the appearance of a functioning and accurate system.
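The temporal dynamic described here can be condensed into a small decision rule. The following Python sketch is our illustrative reading of Figure 1, with `acting_diagnosis` as a hypothetical helper; it is not a prescription for clinical practice.

```python
def acting_diagnosis(ai: str, human: str, ai_trusted: bool) -> str:
    """Illustrative decision rule behind the contingency table in Figure 1.

    ai, human: 'melanoma' or 'harmless mole'.
    ai_trusted: False in the initial phase, True once the AI's track record
    makes its appraisal the dominant assessment.
    """
    if ai == human:
        return ai  # concordance: no conflict to resolve
    if not ai_trusted:
        return "melanoma"  # initially, the serious diagnosis dominates
    return ai  # later, the AI's appraisal dominates either way

# A dissenting case over time: the dermatologist's 'melanoma' call is
# eventually overridden by a trusted AI.
print(acting_diagnosis(ai="harmless mole", human="melanoma", ai_trusted=False))  # melanoma
print(acting_diagnosis(ai="harmless mole", human="melanoma", ai_trusted=True))   # harmless mole
```

The second call is precisely the troubling case discussed above: the dermatologist bears responsibility, liability, and accountability for a 'harmless mole' verdict they did not reach themselves.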
We conclude that AI tools for diagnosing melanoma potentially provide convincing benefits for the healthcare system in terms of efficiency gains, more adequate resource allocation, health literacy and empowerment for patients, or more accurate diagnoses and better health outcomes. Promising experimental results must be validated in clinical practice. Three scenarios demonstrated applications of AI tools for diagnosing melanoma in different settings for different purposes.
However, the following necessary conditions must be fulfilled: AI tools must perform reliably with sufficient specificity and sensitivity; they must be transparent regarding their analytical processes and outcomes; and they must be accepted by patients, doctors, and other stakeholders. Moreover, AI tools need to adhere to the ethical values of beneficence, non-maleficence, autonomy, fairness, and responsibility to protect the dignity of all human actors involved. That includes AI tools being a support for, rather than a replacement of, human actors. Even considering all the aspects above, cognitive dissonance in decision-making and competency shifts for dermatologists have to be expected, particularly when AI systems demonstrate superiority.
Based on these analyses, we suggest that technology assessment studies for AI in diagnostics analyze the application contexts and the consequences for the multiple stakeholders involved. Experimental studies focusing on performance should be complemented by observational studies in realistic settings of clinical practice. Regarding regulation, a nuanced debate about the underlying frameworks and an analysis of their consequences for the acceptance of AI in diagnostics are needed.
Funding • This work received no external funding.
Competing interests • The authors declare no competing interests.
AI Dermatologist (2023): AI Dermatologist Skin Scanner. Available online at https://ai-derm.com/, last accessed on 03. 01. 2024.
Au, Jessica; Falloon, Caitlin; Ravi, Ayngaran; Ha, Phil; Le, Suong (2023): A beta-prototype chatbot for increasing health literacy of patients with decompensated cirrhosis. Usability study. In: JMIR Human Factors 10 (1), p. e42506. https://doi.org/10.2196/42506
Ayers, John et al. (2023): Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. In: JAMA Internal Medicine 183 (6), pp. 589–596. https://doi.org/10.1001/jamainternmed.2023.1838
Beauchamp, Tom; Childress, James (2001): Principles of biomedical ethics. Oxford: Oxford University Press.
Bjerring, Jens; Busch, Jacob (2021): Artificial intelligence and patient-centered decision-making. In: Philosophy & Technology 34, pp. 349–371. https://doi.org/10.1007/s13347-019-00391-6
Böning, Sarah-Lena; Maier-Rigaud, Remi; Micken, Simon (2019): Gefährdet die Nutzung von Gesundheits-Apps und Wearables die solidarische Krankenversicherung? Bonn: Friedrich Ebert Stiftung. Available online at https://library.fes.de/pdf-files/wiso/15883.pdf, last accessed on 03. 01. 2024.
Braun, Matthias; Hummel, Patrik; Beck, Susanne; Dabrock, Peter (2021): Primer on an ethics of AI-based decision support systems in the clinic. In: Journal of Medical Ethics 47 (12), p. e3. https://doi.org/10.1136/medethics-2019-105860
Chandra, Swastika; Mohammadnezhad, Masoud; Ward, Paul (2018): Trust and communication in a doctor-patient relationship. A literature review. In: Journal of Healthcare Communications 3 (3), pp. 1–6. https://doi.org/10.4172/2472-1654.100146
Ellul, Jacques (1964): The technological society. New York, NY: Knopf.
Esteva, Andre et al. (2017): Dermatologist-level classification of skin cancer with deep neural networks. In: Nature 542 (7639), pp. 115–118. https://doi.org/10.1038/nature21056
Farah, Line; Davaze-Schneider, Julie; Martin, Tess; Nguyen, Pierre; Borget, Isabelle; Martelli, Nicolas (2023): Are current clinical studies on artificial intelligence-based medical devices comprehensive enough to support a full health technology assessment? A systematic review. In: Artificial Intelligence in Medicine 140, p. 102547. https://doi.org/10.1016/j.artmed.2023.102547
Jones, Owain et al. (2022): Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings. A systematic review. In: The Lancet Digital Health 4 (6), pp. e466–e476. https://doi.org/10.1016/S2589-7500(22)00023-1
Kis, Anne; Augustin, Matthias; Augustin, Jobst (2017): Regionale fachärztliche Versorgung und demographischer Wandel in Deutschland – Szenarien zur dermatologischen Versorgung im Jahr 2035. In: Journal der Deutschen Dermatologischen Gesellschaft 15 (12), pp. 1199–1210. https://doi.org/10.1111/ddg.13379_g
Krensel, Magdalene; Augustin, Matthias; Rosenbach, Thomas; Reusch, Michael (2015): Wartezeiten und Behandlungsorganisation in der Hautarztpraxis. In: Journal der Deutschen Dermatologischen Gesellschaft 13 (8), pp. 812–814. https://doi.org/10.1111/ddg.80_12625
Müllner, Marcus (2005): Die Zulassung von Medikamenten (und anderen Medizinprodukten). Good Clinical Practice. In: Marcus Müllner: Erfolgreich wissenschaftlich arbeiten in der Klinik. Evidence Based Medicine. Wien: Springer, pp. 243–249. https://doi.org/10.1007/3-211-27476-6_33
OECD – Organisation for Economic Co-operation and Development (2023): Doctors’ consultations (indicator). https://doi.org/10.1787/173dcf26-en
OnlineDoctor (2022): OnlineDoctor übernimmt KI-Startup A.S.S.I.S.T. Available online at https://www.onlinedoctor.de/pressemitteilung/onlinedoctor-uebernahme-ki-startup/, last accessed on 03. 01. 2024.
Petty, Amy et al. (2020): Meta-analysis of number needed to treat for diagnosis of melanoma by clinical setting. In: Journal of the American Academy of Dermatology 82 (5), pp. 1158–1165. https://doi.org/10.1016/j.jaad.2019.12.063
Pham, Tri-Cong; Luong, Chi-Mai; Hoang, Van-Dung; Doucet, Antoine (2021): AI outperformed every dermatologist in dermoscopic melanoma diagnosis, using an optimized deep-CNN architecture with custom mini-batch logic and loss function. In: Scientific Reports 11 (1), p. 17485. https://doi.org/10.1038/s41598-021-96707-8
Ridd, Matthew; Shaw, Alison; Lewis, Glyn; Salisbury, Chris (2009): The patient–doctor relationship. A synthesis of the qualitative literature on patients’ perspectives. In: British Journal of General Practice 59 (561), pp. e116–e133. https://doi.org/10.3399/bjgp09X420248
Saginala, Kalyan; Barsouk, Adam; Aluru, John; Rawla, Prashanth; Barsouk, Alexander (2021): Epidemiology of melanoma. In: Medical Sciences 9 (4), p. 63. https://doi.org/10.3390/medsci9040063
Schreier, Jan; Genghi, Angelo; Laaksonen, Hannu; Morgas, Tomasz; Haas, Benjamin (2020): Clinical evaluation of a full-image deep segmentation algorithm for the male pelvis on cone-beam CT and CT. In: Radiotherapy and Oncology 145, pp. 1–6. https://doi.org/10.1016/j.radonc.2019.11.021
Schwendicke, Falk et al. (2021): Cost-effectiveness of artificial intelligence for proximal caries detection. In: Journal of Dental Research 100 (4), pp. 369–376. https://doi.org/10.1177/0022034520972335
Thissen, Christina (2021): KI auf dem Weg zum Facharztstandard – nicht ohne Haftungsprophylaxe. In: Radiologen WirtschaftsForum 12, pp. 7–8. Available online at https://www.rwf-online.de/system/files/RWF_12_2021.pdf, last accessed on 03. 01. 2024.
Tupasela, Aaro; Di Nucci, Ezio (2020): Concordance as evidence in the Watson for Oncology decision-support system. In: AI & Society 35, pp. 811–818. https://doi.org/10.1007/s00146-020-00945-9
Venkatesh, Viswanath (2022): Adoption and use of AI tools. A research agenda grounded in UTAUT. In: Annals of Operations Research 308 (1), pp. 641–652. https://doi.org/10.1007/s10479-020-03918-9
Wehkamp, Kai; Krawczak, Michael; Schreiber, Stefan (2023): The quality and utility of artificial intelligence in patient care. In: Deutsches Ärzteblatt International 120, pp. 463–469. https://doi.org/10.3238/arztebl.m2023.0124
WHO – World Health Organization (2021): Ethics and governance of artificial intelligence for health. WHO guidance. Geneva: World Health Organization. Available online at https://iris.who.int/bitstream/handle/10665/341996/9789240029200-eng.pdf, last accessed on 03. 01. 2024.