Pressure is rising on healthcare systems around the world. Ageing populations, the growing cost of medical care, and an ever-expanding range of treatments have combined to create a situation that might reasonably be described as a ticking time-bomb. Almost every developed country is looking for ways to manage increased demand with no more than its existing resources.
Developing countries may face slightly different challenges, but there are certainly few easy ways to ensure universal access to healthcare at a reasonable cost. To make matters worse, healthcare professionals from developing countries are often lured away by the higher wages available in the developed world.
It is, perhaps, not surprising that many healthcare policy-makers are thinking seriously about how chatbots could help to deliver healthcare. There are, however, a number of questions to answer, including what chatbots might realistically be able to do, and whether practitioners and patients alike are ready for their introduction.
Helping out and reducing pressure on services
Algorithms and chatbots are already in use in a number of systems. The NHS’s 111 telephone helpline, for example, which provides non-emergency healthcare advice, uses an algorithm to determine its responses. The calls are answered by real people, not chatbots, but it is not hard to imagine an extension of the service that would use them. The initial evaluation of NHS 111, back in 2012, reported that two-thirds of callers found the advice they received helpful, that most complied with it, and that around three-quarters were very satisfied with their overall contact with the service.
Anecdotal evidence about the wider roll-out is more mixed, however. One survey in North West London found that around 80% of callers were satisfied, but there is also evidence that the service did not reduce pressure on emergency departments in its first year of operation. This, in turn, suggests that the algorithm might need improving; perhaps an artificially intelligent, machine-learning system could do better?
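To make the distinction concrete, a fixed triage algorithm of the kind call handlers work through can be thought of as a hard-coded decision tree, whereas a machine-learning system would refine its recommendations from outcome data. The sketch below is purely illustrative: the symptoms, red flags and dispositions are invented for the example and are not the rules NHS 111 actually uses.

```python
# Illustrative only: a toy rule-based triage in the spirit of a fixed
# telephone-triage algorithm. The symptoms, red flags and dispositions are
# invented for this sketch and are NOT the rules used by NHS 111.

def triage(symptoms: set[str]) -> str:
    """Return a disposition for a caller's reported symptoms."""
    red_flags = {"chest pain", "severe bleeding", "unconscious"}
    if symptoms & red_flags:
        return "call 999 / go to an emergency department"
    if "high fever" in symptoms or "persistent vomiting" in symptoms:
        return "contact a GP within 24 hours"
    return "self-care advice"

print(triage({"high fever", "headache"}))  # -> contact a GP within 24 hours
```

A machine-learning alternative would replace those hand-written rules with a model trained on past cases and their outcomes, which is precisely what makes its dispositions harder to explain and audit.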
Chatbots such as Your.MD, available through Messenger, have been introduced to provide basic healthcare advice and diagnosis. Like NHS 111, they are not intended as a substitute for medical care but as a signposting service. Other bots, like Florence, also available through Messenger, are designed to remind people to take their medication at the right times, providing a useful service and potentially relieving clinicians of some unnecessary pressure.
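A medication-reminder bot of this kind is, at its simplest, a scheduling loop attached to a messaging channel. The sketch below is purely illustrative: the patient, drug and times are invented, and send_message is a hypothetical stand-in for a real messaging API; it is not how Florence itself is implemented.

```python
# Minimal sketch of a medication-reminder loop. send_message is a
# hypothetical placeholder for a real messaging platform's API; the
# schedule data is invented. This is not Florence's implementation.
import datetime
import time

REMINDERS = [
    {"patient": "A. Patient", "medication": "metformin", "time": "08:00"},
    {"patient": "A. Patient", "medication": "metformin", "time": "20:00"},
]

def send_message(patient: str, text: str) -> None:
    # Placeholder: a real bot would call the messaging platform's API here.
    print(f"To {patient}: {text}")

def check_schedule() -> None:
    """Send any reminders due at the current minute."""
    now = datetime.datetime.now().strftime("%H:%M")
    for reminder in REMINDERS:
        if reminder["time"] == now:
            send_message(reminder["patient"],
                         f"Time to take your {reminder['medication']}.")

if __name__ == "__main__":
    while True:          # poll once a minute
        check_schedule()
        time.sleep(60)
```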
It seems likely that these services will become more sophisticated over time. They may, perhaps, become more like digital health assistants. It is not hard to envisage them being able to make appointments with healthcare services, and even alert someone to a change in their vocal tone that might indicate health problems.
Changing culture and professional habits
The evidence suggests that patients are prepared to accept chatbots and similar systems as the price of faster access to health advice. But are healthcare practitioners ready to embrace them? Their acceptance of NHS 111 suggests that they are probably prepared to accept anything that reduces pressure on services. Doctors and other health professionals have also shown that they are happy to use technology to consult colleagues and to contact patients: a number of articles suggest that WhatsApp is widely used for this purpose, partly because of its end-to-end encryption, although concerns about patient confidentiality remain.
This, however, is slightly different from the proposal that professionals might use chatbots or AI systems to get advice and treatment options. Professionals tend to place a high level of value on their professional opinion and experience, built up over years. They are prepared to consult colleagues, but an app? It seems unlikely. It would certainly require a considerable culture change.
Partnerships between clinicians and chatbots
Doctors have, however, shown that they are willing to embrace robots in surgery, typically working in partnership with human surgeons. This, in fact, seems to be the crucial feature: artificial systems working alongside doctors rather than replacing them. A chatbot that provided access to the most up-to-date, evidence-based guidance, for example, would help doctors make informed treatment recommendations to patients. And chatbots that gave patients accurate advice about when (and when not) to consult a doctor would hugely improve the efficiency of healthcare systems.
The evidence suggests, in fact, that the question is not whether the healthcare community is ready for chatbots, but rather whether chatbots are ready for use in healthcare.