New world order: artificial intelligence in healthcare

Author: David Locke, Partner, Hill Dickinson LLP

 

 

David Locke, Partner at Hill Dickinson LLP, points out in this article that members of the public do not fully appreciate the significance of the role that artificial intelligence (AI) is playing in the delivery of healthcare, a market which is expanding exponentially.

For example, as he writes, in the five years to March 2021, 240 relevant devices were granted regulatory approval in Europe. This suggests patients are likely to be increasingly exposed to AI, whether they realise it or not.

In his article, David prompts open discussion about how legal systems are likely to deal with claims by patients who are injured as a result of the use of AI in the provision of healthcare.

 

A self-declared whistle-blower, Blake Lemoine, has recently been suspended after leaking to the press his concern that an AI system developed by Google, called LaMDA, had become “sentient” (or sufficiently close to it as to warrant investigation). Apparently, it had declared itself a “person” with emotions and indicated that it wished to be considered an employee of the company, rather than its property. More directly in public view, it was not many years ago that an artificial intelligence (AI) system was interviewed on the 60 Minutes show in the US, where it announced that its goal was “to become more intelligent than humans”. The comedian Bill Burr does a particularly funny “bit” about listening to that interview in a hotel room when he was on tour and screaming impotently at the interviewer to pull the plug on the ambitious AI robot before it started us down the path towards a sci-fi-styled Armageddon.

No doubt because it has long featured in our imagination, discussions about AI tend to focus on its theoretical potential, for better or worse, and in doing so frequently overlook the contribution that it is already making. There seems little appreciation amongst the public of the role that AI is playing in the delivery of healthcare, and the market for these solutions is increasing exponentially. In the five years preceding March 2021, there were 240 relevant devices granted regulatory approval in Europe. Reported trials, which are a useful guide to the direction of travel, include the reporting of imaging and the remote monitoring of patients in intensive care.

There has been relatively little open discussion about how the legal system in this jurisdiction (or elsewhere) is likely to address liability in claims from patients who are injured as a result of the use of AI in the provision of healthcare. There is a surprisingly modest amount of medico-legal literature. In terms of caselaw, in the UK at least there do not appear to have been any relevant cases on issues of liability which have made their way through the courts. There is a single, relatively well-cited inquest from the North-East of England, which was concerned with robotic surgery, and to which this article returns below.

It is academically attractive to adopt a thematic approach and examine the “grand narratives” (breach of duty, causation and so forth), but analysing real-world hypotheticals from a practitioner’s perspective may ultimately be more informative. Accordingly, the remainder of this article poses three potential claims.

 

A failure by an AI system to diagnose a condition from a review of imaging

The hypothetical claim concerns a patient who underwent a routine mammogram at an NHS Trust. The imaging was then outsourced to a third-party supplier, who used AI technology. The images were marked as “NAD” (no abnormality detected) by an AI system but were subsequently determined to show signs of cancer.

Identifying the defendant

A key issue for the patient will be determining the appropriate defendant. It is entirely plausible that the AI system may be operated by a third-party supplier with whom the Trust contracts. [Indeed, such arrangements are already quite common in relation to outsourced imaging reviews.] There is an argument, and one to which the Trust may well be inclined, that any claim should be directed against the third-party supplier. The third party, for their part, may or may not be the developer of the AI product and, if not, may suggest that any claim should be addressed to the developers. The patient, quite possibly to this point unaware that there had been any AI involvement at all, will argue as their primary position that they should be able to pursue their claim against the Trust.

In the event, this point may be resolved on close analysis of the facts, rather than on complex legal argument. It will be necessary to consider any role played by Trust-employed health professionals. Some imaging out-sourcing arrangements involve a “double-check” when reports are sent back to the Trust and, even where not formally mandated, such a process may take place informally. Accordingly, if a “double-check” process fails to rectify the AI error, this would arguably amount to an actionable breach in its own right and, as such, for the purposes of the patient’s claim it may not be necessary even to address any earlier error by the AI system.

In the event that there was no “double-check” undertaken, and the Trust simply accepted the report provided to it by the third party, the patient might consider an attempt to claim against the Trust, arguing that it was unreasonable of the treating clinician to rely upon the external report without reviewing the images themselves. This will be difficult, because it ought to be reasonable for a physician to rely upon the central due diligence that was presumably undertaken when the supplier was engaged. However, that thought process may lead a patient to a further possible avenue of claim, asserting that the higher-level decision to contract with an AI provider was in itself negligent. This is an unattractive option in many respects, but claims based on “corporate oversight” have been attempted before in health litigation. It would be for the patient to demonstrate that the procurement/contracting process was negligent, which would obviously require production of the evidence provided by the supplier, along with oversight and audit data. This ought probably to be considered the claim of last resort.

A further mechanism for attempting to hold the Trust liable, in the absence of an internal “double-check” process, would be to argue that there was a non-delegable duty of care and that, accordingly, the Trust retained primary liability for the standard to which the mammogram was interpreted. A detailed discussion of the Woodland criteria, and the subsequent authorities, is beyond the scope of this article, but the prima facie argument is well substantiated.

Whichever approach is taken, it seems likely that, in this hypothetical at least, the patient will have a mechanism of bringing a claim directly against the Trust and is therefore unlikely to have to engage directly with the third-party supplier and the AI technology.

Duty of care

In any case where Trust professionals are directly involved in the interpretation of the imaging, for example through a “double-check” procedure, or indeed where it is said that it was negligent of a professional to accept the interpretation of images by a third party, then standard Bolam principles will apply. A case in which a patient relies upon a non-delegable duty of care is probably also relatively straightforward. The Trust, owing a duty of care to the patient to interpret images in accordance with a responsible body of medical practice, will be held to that standard no matter to whom, or to what, it delegates that duty. This is an interesting point to the extent that the standard of care to be applied to AI has been the source of some discussion (considered below). However, although the point does not appear to have been given specific judicial consideration, it must follow that when delegating the performance of some aspect of its duty of care, the same standard of care transfers (certainly no lower standard of care). If that is correct, the performance of the AI will need to be assessed against a responsible, competent body of radiologists, since that is who would have reviewed the images but for the delegation. Another way of looking at it is that one ignores the method of interpretation and simply asks whether the outcome was or was not negligent.

There is a theoretical argument posited that Bolam cannot be used to assess the performance of AI. However, not only has the literature failed to offer an alternative approach, but that argument overlooks the case for retaining the test as both appropriate and logical. The technology is being used to substitute, in this example, for a human radiologist; it is therefore reasonable that it should be assessed against the standard of care of the professional it replaces.

 

An injury caused during robotic-assisted surgery

For the most part it is more accurate to refer to robot-assisted surgery, because present usage appears to be limited to technology where an appropriately trained surgeon uses a remote-control system to manipulate robotic arms holding surgical tools. There are certainly trials ongoing with fully automated robotic surgery, and developers claim to have a system which can suture together the two ends of an intestine or blood vessel (in trials undertaken in pigs). It was claimed earlier in 2022 that a fully automated laparoscopic procedure had been successfully performed on an animal subject. It is difficult to find reliable statistics but, for example, in the year to March 2019, 8,000 robotic prostatectomies were performed. From 2000 to 2013 in the US there were 1,745,000 robotic-assisted procedures undertaken, with 1,391 injuries and 144 deaths (an injury rate of roughly 0.08 per cent and a mortality rate of roughly 0.008 per cent).

Consent

This article has already referenced an inquest which arose following the death of a patient who underwent robot-assisted mitral valve replacement at a hospital in the North-East of England. The patient’s name was Stephen Pettitt. Evidence provided to the coroner suggested that he had a 99 per cent chance of survival with a conventionally performed operation. Instead, he had a robotically assisted procedure, performed by a surgeon with no prior one-to-one training, who had only ever practised on a simulator. It was the first such procedure at the hospital. Perhaps all that can be said for present purposes is that it is difficult to understand why a patient, properly advised of a 99 per cent chance of success with conventional surgery, would instead opt for what was ostensibly an experimental surgical technique being performed by a surgeon, for the first time, following minimal training and without any experienced supervision.

The necessity of obtaining informed consent should not even require specific comment, but particular consideration may need to be given to the information that patients should receive, especially during a transitional period when robotic assistance is becoming more established, meaning by definition that surgeons have relatively little experience. The Newcastle case does seem to raise a fundamental issue as to the benefits of robotically assisted surgery, because in order to justify a move away from a 99 per cent effective conventional technique, the additional merits would need effectively to guarantee the outcome. It would not be appropriate to see a move to robotic assistance just because it is possible. A rigorous consent process should shake out such issues. The Royal College of Surgeons may well consider providing additional guidance to its members.

Duty of care

The position at first glance seems complicated by the involvement of both a human and a robotic system, and the potential need to determine which is responsible for the error. However, this is probably not a material distinction, at least in terms of any claim by the patient (a contribution claim would be different).

The Bolam test obviously applies to the surgeon. However, consideration needs to be given to what constitutes the responsible body of practice for the purposes of comparison, ie whether the comparative standard is that of a surgeon performing the operation in a conventional manner, or that of a surgeon performing the operation with robotic assistance. Certainly, a patient is entitled to expect that robotic assistance would not be utilised if it delivered a worse outcome than that of a surgeon working alone. Accordingly, it is attractive to say that it does not matter whether the comparator group is surgeons operating alone or surgeons with robotic assistance. However, where robotic assistance is used, it is reasonable to suggest that this is because it is expected to deliver some measurably better performance. [As discussed previously, this is not a higher standard of care; it is merely the selection of an appropriate comparator group.]

Therefore, it does seem likely that experts in robot-assisted surgery will be required, and one can easily see how it might, for some time, be difficult to identify medico-legal experts capable of giving an opinion. That said, these procedures are taking place and it may be that a few “experts” will be prepared to step forward in due course. [The promotion of the use of single joint experts on liability in such cases would assist.]

Circling back, then, to the issue of whether it is necessary to distinguish whether the cause of an injury was the human surgeon or the robotic assistance: for the purposes of the patient’s claim, it seems irrelevant.

 

The failure by a mental health application to identify that a patient was suicidal

Perhaps most prominently encountered in online banking “apps”, AI is used to engage with customers and respond to their queries. It seems unlikely that many would argue that “talking” to the AI interface is even close to realistic human engagement, but by and large it does seem to get the job done. The advantage is mostly to the banks, who save money because they do not have to employ staff, although the suggestion will be that there is a service-level improvement because there is no wait to engage with an AI “bot”. In the health sector the suggestion is that similar AI tools can be deployed in mental health apps, to “engage” with patients and take appropriate action in the event that there is an indication of an imminent mental health crisis. This would be the complete substitution of an AI interface in place of human interaction. Another suggested use is the monitoring of telephone calls by AI, listening for key trigger words/phrases that might be missed by the human participants, with appropriate escalation.
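By way of illustration only, the simplest form of the trigger-phrase screening and escalation described above might look something like the following sketch. The phrase list, the matching logic and the escalation routine are hypothetical assumptions for the purposes of discussion, not a description of any real clinical product:

# Illustrative sketch only: a naive trigger-phrase check with escalation.
# The phrase list and escalation routine are hypothetical assumptions,
# not a description of any real clinical product.

TRIGGER_PHRASES = {"want to die", "end it all", "no reason to live"}

def needs_escalation(transcript: str) -> bool:
    """Return True if any trigger phrase appears in the call transcript."""
    text = transcript.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

def alert_duty_clinician(transcript: str) -> None:
    # Hypothetical escalation hook: how quickly, and to whom, an alert is
    # passed is the step most likely to be scrutinised in any later claim.
    print("Escalation: flagged transcript passed to duty clinician.")

def monitor_call(transcript: str) -> None:
    if needs_escalation(transcript):
        alert_duty_clinician(transcript)

The point of the sketch is simply that the clinically significant step is the hand-off from the AI system to a human responder; as discussed below, it is the design and operation of that escalation step, rather than the matching logic itself, which seems most likely to give rise to claims.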

Duty of care

This hypothetical scenario (particularly in terms of AI interface apps) is different from the previous examples because of the more extensive removal of the AI from human oversight, and perhaps also because it seems to distance a group of vulnerable patients from that which it might be said they need most: human interaction. Accordingly, on an emotional level it seems an area that might require an alternative approach to determining liability. However, objectively it is perhaps not so complicated where the service is provided by the Trust directly: it is really the delegation (or perhaps substitution is a better word) of a function previously undertaken by a healthcare professional. As this article has stressed on several occasions, this could not result in a lower standard of care (which would presumably trigger corporate-level liability), so it remains sensible to apply a Bolam-based assessment: ie whether the outcome of the AI interaction was outside the reasonable range of outcomes that would be associated with a responsible body of human practitioners.

It may be that such applications will be developed and operated by third-party providers (in a similar way to out-of-hours phone lines). This is not the same as the entirely straightforward non-delegable duty of care situation discussed previously, where the patient attends a hospital for an investigation and has no control over, or indeed knowledge of, the fact that the imaging is reported by a third-party provider. Any final analysis would depend upon the precise details, but any such app would most likely be Trust-branded, and non-delegable duty arguments would most likely be engaged.

It can reasonably be anticipated that a significant risk area for claims will be the alert interaction/escalation between AI systems and their human counterparts. Considerable care will be required to ensure that, where risks of harm are identified, support is provided in a timely fashion, perhaps no differently from the situation now. However, systemic failings, and the consequent engagement of rights under article 2 of the European Convention on Human Rights, are a concern.

 

 
