Thinking outside the black box: CardioPulse takes a look at some of the issues raised by machine learning and artificial intelligence

Günter Breithardt, MD, former head of cardiovascular medicine at University Hospital Münster, Germany, looks into the viabilities and liabilities of artificial intelligence (AI) from a clinical perspective. Nils Hoppe, professor of ethics and law in the life sciences at Leibniz Universität Hannover in Germany, answers questions about the legal implications of AI.

For Prof. Breithardt (Figure 1), the autonomous car is a good stepping-off point for exploring some of the issues that AI raises across its spectrum of use. ‘An autonomous car is initially programmed by engineers and IT people in a deterministic way. The process of decision making is complex due to the huge amount of information from multiple sensors and imaging modalities, but the behaviour of such cars is still rule-based. If it were “true” AI, there should be one more step integrating a self-learning process which uses new data obtained during driving with good and bad experiences, the latter being accidents. This is not feasible, and of course autonomous driving does not represent “true” AI, but a high and sophisticated level of what has been called “symbolic” AI. If an autonomous car were able to learn based on new data acquired during driving, it would change from a deterministic, rule-based entity into something that makes its own decisions, with an identity of its own.’

Prof. Breithardt believes that ‘true’ AI represents a shift from a knowable and understandable piece of machinery towards a black box scenario in which the evaluation process is more or less hidden and does not use the familiar algorithms of statistics. The decision-making process becomes more remote from the original design by continuously learning from data and making choices (sometimes unforeseen), thereby taking on a responsibility of its own.

Figure 1: Günter Breithardt, MD.

The use of AI in clinical medicine has produced some striking outcomes. Machine learning can be used to diagnose congenital heart disease, to interpret images, or to analyse ventricular function. Prof. Breithardt points to ‘amazing’ work from the Mayo Clinic in the United States, where convolutional neural networks were trained to interpret electrocardiograms recorded during sinus rhythm and were able to identify a propensity to atrial fibrillation. The AI algorithm was able to say whether a patient had experienced atrial fibrillation previously and whether it would occur later. ‘Although one might get an idea of the underlying evaluative process which searches for patterns in the data, it remains obscure for us as clinicians how the machine does this.’
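For readers who want to picture what such a system looks like in code, the sketch below shows a minimal convolutional classifier for ECG signals. It is an illustrative example only, assuming PyTorch, a single-lead ECG sampled as a fixed-length signal, and a binary label standing in for ‘propensity to atrial fibrillation’; it is not the published Mayo Clinic model. It also illustrates the black box point raised above: the learned convolutional filters have no direct clinical meaning, so even a short model like this is opaque once trained.

```python
# Illustrative sketch only (not the Mayo Clinic model): a minimal 1D CNN that
# maps a fixed-length, single-lead ECG segment to two output logits, e.g.
# "propensity to atrial fibrillation" vs. "no propensity". All sizes are
# assumptions chosen for illustration.
import torch
import torch.nn as nn

class EcgCnn(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Stacked 1D convolutions learn waveform features from the raw signal;
        # none of these filters corresponds to a named clinical concept.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value per filter
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 1, n_samples): one ECG lead sampled over time.
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # raw logits; apply softmax for probabilities

if __name__ == "__main__":
    model = EcgCnn()
    dummy_ecg = torch.randn(8, 1, 5000)  # batch of 8 synthetic 10 s traces at 500 Hz
    print(model(dummy_ecg).shape)        # torch.Size([8, 2])
```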
This raises the question of who is responsible for the results of such an AI-based decision process: the designer of the original version, or the user who provides more data to the algorithm over time? Prof. Breithardt says: ‘If I feed a system with data from my hospital and my own interpretations and the system then adapts that over the years, is the original designer still responsible? What about the institution or the physician(s) using it? It’s no longer what it was, and for me there’s a big difference between rule-based systems, where we have a legal liability, and the sort of evaluation done by the system itself. Can the AI system be made responsible for any harm or damage to the patient, like a non-natural entity which can be covered by liability insurance? These are important challenges for the future.’

According to Prof. Breithardt, there are plenty of precedents from the past which show that clinicians have adopted new therapies and technologies without fully understanding how they work, but knowing that they work. He says: ‘I think there are some similarities today with what we were doing in our early training. Even though we studied how to test for diseases in the labs, the tests became so complicated that we had to rely on specialists to do them. So, we have to trust the results without fully understanding the process involved, but we also have to adopt such results into our clinical decision-making process.’

Going forward, he says, it will be the responsibility of everyone, senior and junior, to educate or train themselves to understand the strengths and limitations of AI systems, but the role of the physician, often built on years of experience and research, must remain. ‘We as doctors should not give away our decision making based on our knowledge and understanding of diseases by saying, “well, the computer will tell us”, or “it’s a problem of the black box system”. We know that there is a whole spectrum of patients who are very concerned about the use of AI, so it’s important as doctors that we follow how AI develops over time and don’t allow legitimate concerns to grow into big problems.’

Nils Hoppe (Figure 2), who is also a partner at the solicitors Lawford Davies & Co, London, UK, answers questions about the legal implications of AI.

Figure 2: Nils Hoppe.

How does the law define AI?

Initially, it’s rather boring, I would say, but the law would approach it with the tools it already has, in the same way that it defines other items of software. It’s a product that carries a high risk in relation to an individual’s privacy and health as well as clinical decision-making, but it’s not initially normatively different to the sort of complex software which might be designed for running a nuclear power station, for example. What makes AI different has to do with questions of opacity: what does it actually do, and how does it reach decisions? It’s difficult because you can’t look into a black box, and because AI systems have so much data available to them, it can be difficult to understand how they reach particular conclusions and hard for a human user to second-guess those conclusions.

What does this mean in practice?

Machine learning technologies have been used in medical imaging for a long time and can, for example, be much quicker at identifying different kinds of tumours than humans. At the same time, you might have an oncologist or a pathologist who has been doing this for 30 or 40 years and has the benefit of direct human contact with patients, unlike the algorithm. The risk is that this vital part of medicine is lost because the human uncritically accepts the advice of the algorithm, because a lot of data has gone into the machine and it’s usually right. The doctor may also accept it because the organization they are working in has provided the AI tool and requires them to use it. What we might see in a future scenario is the physician giving away responsibility to an entity that doesn’t exist as a moral agent, and this is likely where the law will struggle with questions of liability, responsibility, and accountability.

Where are we now with the law?

There is often this assumption that technology develops so fast, and innovation is so dynamic, that the law lags behind or there is no law there.
That’s a misunderstanding: there’s law everywhere, and there is nothing we can invent that isn’t encompassed by some sort of law. The question is: does the law work well within that context? In terms of AI, the law might simply say that you have produced an algorithm like any other kind of software, and you are liable for what it does. The fact that you don’t know how it does it is your problem, because you have created it, brought it onto the market, and exposed patients to it, so you are liable for it. The question that people often have at the back of their minds is: do we need a specialized AI law? The EU has done a lot of work on developing frameworks for governing AI, partly because there has been a realization that we have an issue interpreting the existing law in relation to AI applications.

Who is responsible for AI?

An AI-powered app that processes available data and comes out with a recommendation for a surgeon, a cardiologist, or for me personally as a patient likely amounts to a medical device, and the responsibility lies with those who bring it to the consumer. What makes that interesting in legal terms is how the difficulty in explaining decision-making processes in AI applications may limit evidential aspects of liability. In other words, you can only be fully accountable if you know how decisions are being taken. For example, if I am being treated by a cardiologist and there’s no AI in the picture, their decisions and course of action would be documented and permit scrutiny, which means that we can establish accountability for any harm caused. With a black box AI application, you don’t know what data are used to come to its conclusion. This makes it harder to work out what, if anything, went wrong and who is responsible. The law would likely say that if there is a harmful output and the manufacturer is unable to explain it, the liability lies with the manufacturer: an inability to explain why a product harmed someone isn’t a good enough shield against liability. On the contrary, it may be an aggravating factor to have marketed a product whose effects you are unable to control or explain.

What are the special considerations for using AI in a clinical setting?

Clinicians may welcome an additional technology that gives them easy answers to complex questions, and they may accept the recommendation of an AI-powered device. Will this process actually contribute to the elimination of their own decision-making, and does that diminish clinical quality? If you want to frame it critically: AI should be a servant in the clinical decision-making process, but never the master. The risk is that the more prevalent it becomes in clinical decision-making, the more likely it is that doctors become the executors of AI decision-making in relation to patients. This is something that changes the physician–patient relationship in a way which is more impactful than we may have previously thought.

What’s the best way to avoid risks with AI?

Hospitals and clinical settings are, statistically speaking, dangerous places where more people are harmed than in any other setting. Almost all technologies in a clinical setting are risky, and AI is simply another such technology. To understand the risks better, we need to subject AI decision-making in a clinical setting to robust risk-benefit analyses, in much the same way as, for example, innovative robotic surgery. The patient would have to understand the inherent risks of the new technology and the advantages of the procedure.
In addition, organizations that want to deploy this type of technology also need to know its limitations and understand how AI devices currently use existing human knowledge, which is flawed. Unless systems are alive to these flaws and eliminate them, they will simply become part of the problem rather than the solution, and they will perpetuate racist, sexist, and other biases. We need more conversations about how we can tackle those flaws in the original data.

Is current scrutiny enough?

There are all sorts of authorities providing a framework of data protection, data security, and scrutiny, and they won’t be blind to the nature of AI. What often happens is that there is a scandal or public outcry which forces the regulator’s hand. If we start rolling out AI technologies in a clinical setting, we should expect some things to go well and improve the quality of care, but things will sometimes go wrong, and we will have to think about how to prevent them happening in the future. The law works to enable you to do as many things as possible, unfettered, and only regulates if the risk of the activity becomes unpalatable. The alternative would be to close the door on AI completely and only open it slightly when we are sure of the social effects. This is the constant dilemma of regulating innovation: we can either be very restrictive at the start, or we can respond to a real harm after it has occurred.

Anything else you want to say?

Medicine is a deeply human enterprise, and all these technologies must serve the human within the system. We must never frame it in a way that replaces inalienable human activity with AI technology.

Author notes: All correspondence relating to this article should be sent to cardiopulse@unicatt.it

Conflict of interest: None declared.

© The Author(s) 2023. Published by Oxford University Press on behalf of the European Society of Cardiology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).

European Heart Journal, Volume 44 (12): 3 – Jan 2, 2023

Publisher
Oxford University Press
Copyright
© The Author(s) 2023. Published by Oxford University Press on behalf of the European Society of Cardiology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
ISSN
0195-668X
eISSN
1522-9645
DOI
10.1093/eurheartj/ehac790