What if a deep learning algorithm misses a diagnosis, the doctor accepts the judgment and the patient dies? What if a surgical robot injures a patient during a procedure? Who will be held liable in the future when robots and artificial intelligence (A.I.), acting autonomously, wrong humans? As the FDA has already approved the first A.I. diagnostic algorithms, lawmakers and medical malpractice lawyers should consider these scenarios, as they might become reality sooner than expected.

When a diagnostic algorithm goes wrong

In 2031, Andrea went to his GP in Milan for a check-up because he felt nauseated all the time and sensed a strange pressure on the left side of his head. The doctor suggested running a couple of tests and informed him that a diagnostic algorithm would be involved in the procedure. The machine learning algorithm had been trained to identify brain tumors with very high accuracy – one of the first studies in the area dates back to March 2018. In most cases, it diagnosed cancerous tissues far better than some trained histopathologists, but in Andrea’s case, something went awry.

The algorithm found something different than the diagnostician did, and as the use of A.I. was already common practice, the histopathologist did not question its judgment. As a result, Andrea was mistreated: an unnecessary operation, ineffective courses of medication, and long weeks passed until someone discovered the algorithmic error. By then, however, the patient’s brain had already suffered irreversible damage, and the family wanted to sue.

What if a blood-drawing robot caused injury?

In the 2030s, the popularity of blood-drawing robots soared, as they were fast and efficient and could usually find the appropriate vein in less time than nurses or phlebotomists. They also gained recognition because one-drop blood tests were still rare, their development having stalled after the infamous Theranos story.

One morning, Greg went to the local hospital because he needed a blood test to check on an infection. He already had some experience with blood-drawing robots, so he knew the procedure lasts less than a minute and is minimally painful. When Greg sat down and the nurse turned on the system, the robotic arm found the vein and drew the blood. Afterwards, however, it stopped responding to commands, leaving the needle in Greg’s arm for several long minutes. He was shocked. After the staff managed to remove it, the wound on his arm had to be bandaged. He decided to hire a medical malpractice lawyer. But whom to sue?

How to decide who is liable?

In these two hypothetical cases, medical technologies acted autonomously and ended up hurting or injuring the patient. Needless to say, this is highly theoretical reasoning: we do not know the particulars that refine every case, down to a particular patient with a particular condition on a particular day in a particular place.

Staying at the theoretical level, though, David Harlow, a US-based healthcare lawyer, consultant and blogger focusing on digital health, believes it is still worth breaking the cases down into key categories of concern: design flaws, implementation flaws, and user error. Thus, when we look at technology’s encounter with the doctor and the patient, there might be a design flaw, in which case the company might be liable; an implementation flaw, in which case the doctor or the nurse might be responsible; or user error, which might come down to the patient.

What if the robot has a design flaw?

Here, we assume there was no user error – the patient could not have done anything differently – so our cases come down to either design flaws or implementation flaws.

First of all, it is worth examining the differences between the technologies in the hypothetical cases. Analog technologies – the first layer, traditional technologies that provide data or let users access data without any algorithm (e.g., the stethoscope) – are the simplest. When a design flaw in such a device causes harm to patients, “the first step on the road to being able to hold the company liable is often a trip to the FDA, seeking a recall of the medical device for failure to comply with the FDA approval,” says Harlow.

Digital technologies constitute the second layer in terms of advancement: they do have algorithms programmed into them, but the code does not change by itself (e.g., medical records software). Here the situation might be similar. According to Harlow, such a system might be considered a “black box,” i.e., a system that takes in some inputs and yields an output without affording the clinician reading the output any insight into the algorithm conducting the analysis. As it is regulated as a medical device, the procedure might be similar to that of analog technology.
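To make the “black box” idea concrete, here is a minimal sketch, assuming a hypothetical rule-based risk scorer; the class name, thresholds, and inputs are invented for illustration. The point is that this second-layer logic is fixed and versioned: the clinician sees only the output, and any flaw is traceable to a specific release, which is what makes device-style regulation and recalls workable.

```python
# A hypothetical second-layer "digital technology": deterministic,
# pre-programmed logic that never changes on its own.

class RiskScorer:
    """Fixed, versioned decision rules shipped by the manufacturer."""

    VERSION = "1.0"  # frozen at release; only a vendor update changes it

    def score(self, systolic_bp: int, age: int) -> str:
        # The clinician sees only this output, not the internal thresholds.
        if systolic_bp > 140 or age > 65:
            return "elevated risk"
        return "normal risk"


scorer = RiskScorer()
print(scorer.score(systolic_bp=150, age=50))  # -> "elevated risk"
# Because the logic is frozen, a harmful output can be reproduced and
# attributed to version 1.0 of the device.
```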

So, in terms of liability for a misdiagnosis or for other harm caused by technology, such as a blood-drawing robot, it is instructive to consider the way in which contract and tort law have dealt with the possibility and the actuality of malpractice claims based partially or wholly on the use of electronic health record systems, explains Harlow.

Where’s the smart algorithm’s place?

The third category of technologies might be the most interesting and the most problematic to regulate and deal with: deep learning or machine learning algorithms used for diagnostics in radiology and pathology. These models might change over time as they learn from new data.
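To illustrate what “changing over time” means in practice, here is a minimal sketch, assuming a toy online-learning classifier built with scikit-learn; the features, labels, and numbers are all invented. Once the deployed model keeps updating on new cases, the very same input may be classified differently later on, so the “version” that was approved no longer exists in a fixed form.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # toy labels: 0 = benign, 1 = malignant

sample = np.array([[0.4, 0.6]])  # one fixed patient-feature vector

# Initial training on a first batch of (toy) cases.
X0 = np.array([[0.1, 0.2], [0.8, 0.9]])
model.partial_fit(X0, np.array([0, 1]), classes=classes)
print("before update:", model.predict(sample))

# The deployed model keeps learning from newly labeled cases...
X1 = np.array([[0.35, 0.55], [0.45, 0.65]])
model.partial_fit(X1, np.array([1, 1]))
print("after update: ", model.predict(sample))
# ...so the prediction for the very same sample may now differ,
# which is exactly the regulatory puzzle described below.
```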

Here, Harlow asks how we know whether the algorithm is progressing in the “right” direction. In these circumstances, he says, at least two intersecting bodies of thought should affect the physician’s decision-making when using A.I.: regulatory approval and standard of care. The first issue is whether the FDA (or its counterpart elsewhere) can approve a machine learning tool that will change over time; without regulatory approval of a device, it may not be used in clinical practice. The second is that the device needs to be permitted or required by the current professional consensus on the practice of medicine to be considered within the bounds of practice (i.e., using it is not malpractice).

Thus, legal practice might try to treat machine learning algorithms as a “black box” as well; however, this is more problematic here than with other digital technologies, so liability might ultimately fall on the company creating the algorithm.

Does the diagnostician or the phlebotomist have any responsibility?

When there is no proven design flaw, we have to examine whether the pathologist, the phlebotomist or any other physician used the device as it was supposed to be used. Harlow says that in both cases, the professional is open to liability if he or she used the tool in a situation outside the scope of its regulatory approval, misused it, applied it despite significant professional questioning of the validity of the evidence surrounding the tool, or used it knowing that the toolmaker had obfuscated negative facts. In all other cases, the ball falls back to the creators and the companies.

What if robots with A.I. get ‘personhood’?

However, what to do with Sophia-like creatures, one of which already holds citizenship in Saudi Arabia? What to do with fully autonomous machine learning algorithms making decisions based on their own judgment, as a result of considerations that might lie beyond human perception?

The European Union seems to be experimenting with a new legal status for the future. A European Parliament report from early 2017 suggests that self-learning robots could be granted “electronic personalities.” Such a status would not mean that they could adopt kids or marry humans or other robots. It would only allow robots to be insured individually and held liable for damages if they go rogue and start hurting people or damaging property. And how would the aggrieved parties receive any compensation? One idea is to set up a compulsory insurance scheme fed by the wealth a robot accumulates over the time of its “existence.”

Although A.I. experts and researchers criticize the report for allowing manufacturers to shake off their responsibilities, the idea might be a creative solution for a widening grey area in medical malpractice law. Other, similarly forward-looking legal notions and principles will be necessary in the near future: as Harlow estimates, the first cases involving narrow artificial intelligence or medical robots might arrive at medical malpractice law firms within a year. Healthcare regulators, agencies, and lawyers – it’s time to look ahead!
