The human touch is indispensable in medicine
In recent weeks, I decided to test Woebot, a little algorithmic assistant that aims to improve your mood. It promises to connect with you meaningfully, to show bits and pieces of empathy while giving you a chance to talk about your troubles and receive some counseling in return. Just as a human psychologist would.
At first, it felt strange to simulate a conversation with a chatbot styled as a robot, as I was very aware that I was chatting with a programmed answering machine. Sometimes I found it amusing; it even had some uplifting words of wisdom to offer. But there was also a day when I thought how sad it would have been if Woebot were the only one I had talked to that day. I could have concluded that I was so lonely and miserable that my sole companion was a chatbot. That day, it clearly failed. Utterly. It could not make me believe it was a viable alternative to a human. But the question is: should it?
The human touch is a key part of practicing medicine. It is integral to the patient-doctor relationship, in which patients feel that they are being taken care of by a fellow human being, that they are not alone in their time of need. There is someone who not only understands their problem cognitively and offers a solution, but can also "get into the other person's shoes" in the first place. Research shows that this ability significantly supports the healing process. For example, diabetes patients who had compassionate physicians had a lower rate of disease complications than their peers. People who caught a common cold perceived their condition as less severe when they encountered an empathic medical professional.
We are social beings; we need a caregiver to tell us everything is going to be fine. But then the question arises: why are we building chatbots like Woebot, virtual assistants like Nadia, or robots like Nao or Pepper?
Is there a place for mimicking empathy in healthcare?
At the dawn of modern healthcare, around the turn of the 18th century, medical professionals began to distance themselves from patients, viewing them not as persons but as symptom-carrying medical cases to be solved with the help of science. The French philosopher Michel Foucault even dedicated a book, The Birth of the Clinic, to explaining what happened around the birth of modern medicine. It has been a dehumanizing process for both patients and doctors. People with problems are treated as just (statistical) numbers and symptoms in crowded waiting rooms, while doctors have only a few minutes for each patient on average and have to move on with their packed schedule as soon as possible. Thus, it is not surprising that patients who encounter grumpy doctors look for empathy someplace else, and digital technology is trying to tap into that gap.
And why do medical professionals have so little time for patients? They suffer from the burden of administration, hideously monotonous tasks, and a lack of colleagues. Doctor shortages are a global phenomenon: the World Health Organization (WHO) estimates a worldwide shortage of around 4.3 million physicians, nurses, and allied health workers. At the same time, the need for healthcare services is rising: illnesses are becoming easier to catch, civilizational diseases such as diabetes and obesity are on the rise, and aging societies need more and more care. So medical virtual assistants, healthcare chatbots, and humanoid robots with a pinch of empathy are seizing the moment and claiming their places as new helpers of medical professionals.
How would technology with emotional intelligence change social relations?
Looking at the practical side and the harsh facts, it seems that digital technologies able to reach out to patients through empathy and compassion should have a place in healthcare. From the perspective of the human-technology relationship, however, as well as the interactions between humans themselves, the picture is more problematic. Would patients trust or accept AI-based robots or chatbots as their companions during hard times? As the brilliant movie Ex Machina asked: would you believe a robot could show authentic emotion?
And what is the psychological or individual reason why we want to program robots and algorithms to display human emotions? Is it a further step of estrangement in an already alienated world full of smartphones and televisions? Has it become so difficult to reach out to real people and engage in meaningful relationships that the solution seems to be to build an echo of the human emotional spectrum? Is it a coincidence that research and development into empathic and emotionally charged robots is most advanced in Japan, where over 60 percent of unmarried people aged between 18 and 34 have no relationship with a member of the opposite sex? Might future generations grow up with empathic robots and emotional algorithms? Do we want to create technology so emotionally intelligent that we will no longer be able to tell the difference between a human and a non-human apology?
Technology has not reached the stage for developing empathic algorithms yet
There are many ethical and moral questions, and many possible outcomes, around endowing technology with humanoid features. However, time is pressing for figuring out our possible responses and attitudes towards emotionally intelligent machines, as experiments in modeling human emotions with machines are ongoing, and there are already amazing results!
The Japanese company SoftBank has unveiled a robot that can tell a joke and converse in four languages. Another Japanese venture, AIST, developed the interactive robot PARO, which delivers the documented benefits of animal therapy. It comes in the shape of a baby harp seal covered with soft artificial fur, making people feel as if they are touching a real animal. This therapeutic robot has been found to reduce the stress experienced both by patients and by their caregivers.
Mark Sagar and his team at Soul Machines are working on the BabyX project, a virtual baby powered by AI and modeled on the known workings of the human organism. When BabyX smiles, it is because her simulated brain has responded to stimuli by releasing a cocktail of virtual dopamine, endorphins, and serotonin into her system. This is part of Sagar's larger quest to use AI to reverse-engineer how humans work. It is simply amazing! Their ultimate aim is to build virtual assistants able to mimic humanoid features, as in the case of the above-mentioned Nadia project. Yet, looking at the struggles and challenges around Nadia, or at my own experiences with Woebot, I have to recognize that it will take several more years before technology reaches the point where machines can convincingly mimic empathy.
Learning interactive patterns through virtual reality
Although digital technology is not yet able to show soft skills such as compassion to a credible extent, it can help real people practice their own. Embodied Labs created "We Are Alfred," using VR technology to show young medical students what aging means. Anyone can become the hypothetical Alfred for 7 minutes and experience what it feels like to live as a 74-year-old man with audio-visual impairments. The developers' ultimate goal is to bridge the disconnect between young doctors and elderly patients caused by their vast age difference.
Researchers at the University of Michigan and Medical Cyberworlds, Inc., used a virtual-human technology called MPathic-VR, a computer application that lets students talk with emotive, computer-based virtual humans who can see, hear, and react to them in real time. Students can thus practice and develop their empathic side, so that when they have to deliver bad news about a condition to a family or to the patient, they already know how to communicate it well.
Yet, this might just be the logical next step in the process. Here, virtual humans with a limited repertoire of interactions are teaching real people. What if, next time, those virtual humans deliver messages directly to patients? Would that be so different from hearing the same message from a human who learned their interactive patterns on a virtual reality platform? And looking at the problem from another angle: when a doctor forces himself or herself to show empathy, is that better than when a robot or algorithm does the same?
What happens when patients realize the empathy they receive is not real?
My opinion might be controversial here, but I think what really matters is what the patient feels. If a patient’s journey is facilitated either by a compassionate physician or an algorithm that makes them feel the same, healthcare does its job.
What is important for patients, medical professionals, and tech companies alike is to realize that artificial emotions cannot replace human interaction, empathy, and compassion. But a coded gesture coming from a machine might achieve its goal of offering temporary comfort, especially when its limitations are fully acknowledged and accepted. So, if you don't expect the machine to act like a real human with unique reactions, but rather as a programmed robot with predictable responses and gestures, you won't be disappointed.