I have always been a big fan of the Turing Test. When I first attempted "Computer Studies" O Level, in a school with no computers, the history of computing was actually quite brief. The terminology was very basic, as were the languages. However, above all the acronyms and bits soared the principle of the Turing Test.
A Test so beautiful in its conception that you can explain it to a small child: "Are you talking to a person or a machine?"
It now seems likely that we will actually achieve machine intelligence in my lifetime, that the Test will be passed, that we won't be able to tell if our casual conversation is with a human or a machine.
But is the Test good enough for a Medical Artificial Intelligence?
The current Test only requires that the tester remains unable to distinguish between a person and a machine. A clinical Test will also require not only that the patient cannot tell whether they are consulting a human or a machine medic, but also that the intelligence's response is most likely correct and appropriate for the clinical interaction.
Our first issue is that sometimes medical problems have no answers, since they are often not problems to be solved but statements of fact. These quasi-philosophical challenges ("Why am I so unhappy?") do not have right or wrong answers, but they can be judged as appropriate in the eyes of the questioner.
Principle One- The patient should feel understood, and that their question has been answered, by the entity.
Our second issue is that with enough questions any fixed answer from a large data set can be reached; play the "Akinator the Genie" game to appreciate that. But imagine being the patient on the other end of that algorithm, forced to identify a single topic in advance and stick to it, facing endless questions. Humans will always be human, and extraneous data will always occur. I remember a case from medical school of a man who was certain that his lymphoma had been triggered by eating UHT cream. Would the AI discard that information as quickly as the human? Would it go on to ask about other food allergies? So the questioning must be person-centric, flexible and time-bound; this is a judgement made by the patient, and something a reasonable body of clinicians would consider adequate.
Principle Two- The patient should feel humanely questioned, not interrogated, by the entity.
Finally, the Clinical Turing Test needs to be compassionate. One of the joys of the brilliance of Sherlock on the BBC is the dismay shown by clients when their cases are solved by blindingly obvious deductions and they are summarily dismissed. The medical equivalent of "Yes, yes, you have cancer and will die, there's nothing I can do, goodbye" is not acceptable. The human questioner will pose the toughest question of all: "Does this machine/doctor care about me?" before deciding whether that machine/doctor can care for them.
Principle Three- The patient should feel cared for by the entity.
Hopefully we're now prepared for the onslaught of Medical AI, but if you're struggling to recall the principles, just look at the motto of the Royal College of General Practitioners:
Cum Scientia Caritas
Compassion (empowered) with Knowledge