Artificial intelligence in laboratory medicine: lights and shadows
Honorary Professor of Clinical Biochemistry and Clinical Molecular Biology, University of Padova, Italy
Adjunct Professor, Department of Pathology, University of Texas Medical Branch, Galveston, TX, USA
When I was asked to prepare an editorial commenting on the interesting paper by Negrini and Lippi (1), my first reaction was: “Oh my God, there is no day, no scientific meeting, no newspaper and no scientific journal that misses the opportunity to discuss this issue; this is boring, for sure”. However, both a careful reading of the paper and further consideration persuaded me to accept the invitation.
Artificial intelligence (AI) is transforming the world we live in. Recent advances in AI and the use of ChatGPT (OpenAI, San Francisco, CA, USA) are revolutionizing industries ranging from finance to transportation to healthcare, including laboratory medicine. The promise of AI is undeniable, even if the hype and fear surrounding this subject are greater than those that accompanied the discovery of the structure of DNA or the sequencing of the whole human genome (2). As a matter of fact, we survived the discovery of the DNA structure, and even the sequencing of the whole human genome, by using human intelligence and appropriately adopting these advances to improve medical practice. AI tools that recognize images more accurately and consistently than humans are certainly exciting advances in medicine, particularly in some diagnostic specialties such as radiology and, more generally, in digital pathology. For example, radiomics, the automatic extraction of quantitative features from medical images, is a relatively new analysis method in the field of radiology. Radiomics is usually combined with machine learning (ML) to exploit its ability to manage massive amounts of data, usually for the classification of diseases or cancers in terms of histopathology, genomics, prognosis, or response to treatment (3). In particular, the cluster consisting of image segmentation, image reconstruction, and feature extraction has a “motor theme”, corresponding to highly developed and community-relevant topics. On another front, a key factor that has catalyzed interest in AI applications in pathology is the transition to digital pathology, in which whole-slide imaging (WSI) is gradually replacing glass slides and microscopes. In this context, a key value proposition is that WSI provides the substrate for computational data analysis approaches based on ML/AI (4).
Given the large amounts of data required to construct AI models, the development of AI models for digital pathology requires access to large WSI collections, or even the assembly of collections by pooling data from different sources. However, the privacy risks related to WSI sharing have recently been highlighted, and recommendations for releasing WSI data “as de facto anonymous data or as pseudonymized data”, based on the concepts of data protection in the European General Data Protection Regulation (GDPR) framework, have been described (5).
A growing interest has been reported in the application of AI in laboratory medicine: after decades in which the focus was on automation and improvement of the analytical phase, AI offers the opportunity to effectively improve quality, mainly in the pre- and post-analytical phases. In fact, laboratories daily generate not only test results (analytical phase) but also data from the pre-analytical and post-analytical phases, continuously produced at all stages of the “brain-to-brain” loop, data that are only partially captured in the Laboratory Information System (LIS). AI offers the opportunity to improve request appropriateness, avoiding useless and costly requests, as well as the appropriate interpretation and utilization of laboratory results, turning data into more actionable information. However, first and foremost, clinical laboratories must ensure that laboratory data are accurate and reliable, to avoid the risk that sophisticated systems such as ML and AI use inaccurate results, which in turn can lead to inaccurate and potentially harmful information. In addition, the question of whether laboratory specialists are ready for the digital transformation remains open. We have emphasized that the potential application of ML models to laboratory data could be relevant, but that “to manage the change and uncover additional benefits to patient care, there is an urgent need to adapt expertise within laboratories and to improve the cooperation between laboratories and AI experts” (6). Negrini and Lippi have focused their article on the most “famous” generative AI model, ChatGPT, and on the issue of ChatGPT and ChatGPT-like AIs in the scientific literature. First of all, a wide consensus has been achieved on the evidence that generative AI models cannot be considered “authors”, as they lack direct responsibility for the publication (7). AI models, in fact, are not accountable and do not fulfill the widely adopted authorship criteria.
In addition, scientific journals must manage new challenges, such as the need to require authors to acknowledge the use of AI models in preparing articles and figures and in performing the statistical treatment of data. This is mandatory to enable editors and referees to accurately check the quality of the scientific content; therefore, new expertise is needed to better appreciate the quality of algorithms and, finally, to appropriately evaluate the scientific content of papers. On the other hand, AI models offer the opportunity to improve the identification of plagiarism, duplicate submission, and duplicate publication.
In a Letter to the Editor titled “ChatGPT-mania in Medicina di Laboratorio? Non è oro tutto quello che luccica” (“ChatGPT mania in laboratory medicine? All that glitters is not gold”) (8), the same authors comment on a recently published paper by Cadamuro et al. (9) on the accuracy of laboratory result interpretation by ChatGPT. As I have already written an editorial commenting on that paper, I simply invite interested readers to read it (10); in summary, my thinking is that “if you ask a foolish question, you should receive only a foolish answer”. AI models, like humans, can arrive at the right interpretation of laboratory results only if these are integrated with other essential information, such as the right comparators (e.g., decision limits, when available) and clinical data.
Therefore, the intriguing title of the paper, “friend or foe” (1), deserves much more than passing concern; but, first and foremost, the take-home message is to use the human brain to evaluate and validate the data provided by AI tools.
1. Negrini D, Lippi G. Generative artificial intelligence in (laboratory) medicine: friend or foe? Biochim Clin 2023;47:xx-xx.
2. Israni ST, Verghese A. Humanizing Artificial Intelligence. JAMA 2019;321:29-30.
3. Kocak B, Baessler B, Cuocolo R, Mercaldo N, Pinto Dos Santos D. Trends and statistics of artificial intelligence and radiomics research in Radiology, Nuclear Medicine, and Medical Imaging: bibliometric analysis. Eur Radiol 2023. doi: 10.1007/s00330-023-09772-0 (ahead of print).
4. Dehkharghanian T, Mu Y, Tizhoosh HR, Campbell CJV. Applied machine learning in hematopathology. Int J Lab Hematol 2023;45 Suppl 2:87-94.
5. Holub P, Müller H, Bíl T, Pireddu L, Plass M, Prasser F, et al. Privacy risks of whole-slide image sharing in digital pathology. Nat Commun 2023;14:2577.
6. Padoan A, Plebani M. Artificial intelligence: is it the right time for clinical laboratories? Clin Chem Lab Med 2022;60:1859-61.
7. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “Authors” and implications for the integrity of scientific publication and medical knowledge. JAMA 2023;329:637-9.
8. Negrini D, Lippi G. ChatGPT-mania in Medicina di Laboratorio? Non è oro tutto quello che luccica. Biochim Clin 2023;47:xx-xx.
9. Cadamuro J, Cabitza F, Debeljak Z, De Bruyne S, Frans G, Perez SM, et al. Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI). Clin Chem Lab Med 2023;61:1158-66.
10. Plebani M. ChatGPT: Angel or Demon? Critical thinking is still needed. Clin Chem Lab Med 2023;61:1131-2.