Artificial intelligence intended for psychiatric use already exists today. But could these supposedly neutral and tireless algorithms one day replace the human psychiatrist?
More than twenty artificial intelligences have already been validated by studies for psychiatric use. So far, however, these machines have been used to collect data or to inform nursing staff in decision making. Despite their undeniable potential, these technologies still have several limits. The question of whether they could ever surpass the skills of a real psychiatrist is therefore a fair one. To answer it, we must first return to the role of the psychiatrist.
What is a psychiatrist?
Like a cardiologist or a surgeon, the psychiatrist is a consultant. He is able to diagnose patients and prescribe treatments after thorough training: a general medical education followed by specialized training in mental illness, lasting about six years in total.
To make his diagnosis, the psychiatrist may perform a physical and mental examination, order laboratory tests and medical imaging, and study a detailed psychosocial history. He can then draw on different, combinable approaches to treat patients. These include psychotherapies, a variety of medications, neurostimulation techniques, and social interventions.
More and more robot therapists after Covid-19
The idea of integrating artificial intelligence into this panoply of tools available to the psychiatrist appeared in 1966. That was the year Joseph Weizenbaum finished coding his chatbot (dialogue program), called ELIZA. Applied to psychiatry, the AI simulated a Rogerian psychotherapist (person-centered approach), reformulating most of the "patient's" statements into questions and asking them back.
The concept has resurfaced in recent decades, especially in the context of the health crisis. More and more "robot therapists" and voice-based health detection systems have emerged. Some researchers believe that patients may feel more affinity with robots than with doctors, and are therefore likely to develop real therapeutic relationships with these machines.
The illusion of an objective virtual psychiatrist
In theory, besides being neutral and devoid of any judgment, digital psychiatrists would have various other advantages. Having no emotions would allow them to make objective, replicable and neutral decisions. Also, unlike their human counterparts, they are not prone to fatigue-related errors or unavailability.
However, we must not forget that machines are coded by engineers. To collect data and train an artificial intelligence, engineers rely on their own internal models. In many cases, neural networks have been accused of discrimination or bias because of the information they were fed. Their processing and decisions also depend on the subjectivity of the coder himself and his technical choices. In fact, his experience, the quality of his training, his salary or the time he spends writing his code can all influence the machine.
No more than the human psychiatrist?
At the moment, artificial intelligence is not completely free of subjectivity… in any case, no more than its human counterpart. In their way of operating, the psychiatrist and the algorithm are not so different.
Data collection and processing
The psychiatrist and the artificial intelligence function more or less in the same way during a psychiatric interview. They first collect data exhaustively and relatively indiscriminately, relying on the medical record, the patient's gestures and reactions, and so on. Then they select and process these data according to what they consider relevant or insignificant. By organizing them in this way, the psychiatrist, like the machine, matches them to pre-existing profiles with common traits and makes the diagnosis.
The construction of the model
Each psychiatrist develops what is called an "internal model": a set of mental processes, explicit or implicit, that allow him to make his diagnosis. It is from this internal model that he formulates the diagnostic result. He builds it throughout his training and his career, in particular through clinical experience, reading case reports, and so on. This model sharpens and strengthens with practice. Similarly, the algorithm develops its own internal model during its initial training and learning. Note, however, that the machine has the advantage here of being able to process many more cases than a psychiatrist ever will.
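As an illustration of this parallel (and not of any actual system mentioned in this article), the way an algorithm builds an "internal model" from labeled cases can be sketched with a minimal nearest-neighbour classifier. The symptom features, labels and example patients below are entirely invented for the demonstration.

```python
# Minimal sketch of an "internal model" learned from labeled cases:
# new patients are matched to the closest stored profile.
# All features and labels here are toy values, invented for illustration.

# Each case: (sleep trouble, low mood, weight change, anhedonia) -> diagnosis
TRAINING_CASES = [
    ((1, 1, 0, 1), "depression"),
    ((1, 1, 1, 1), "depression"),
    ((0, 1, 1, 1), "depression"),
    ((0, 0, 0, 0), "no depression"),
    ((1, 0, 0, 0), "no depression"),
    ((0, 0, 1, 0), "no depression"),
]

def distance(a, b):
    """Number of symptom features on which two cases differ."""
    return sum(x != y for x, y in zip(a, b))

def diagnose(patient):
    """Match a new patient against the closest pre-existing profile."""
    _, label = min(TRAINING_CASES, key=lambda case: distance(case[0], patient))
    return label

print(diagnose((1, 1, 0, 0)))  # labeled like its nearest profile: depression
```

Here the "training" is simply memorizing labeled cases, and the "internal model" is the distance rule plus the stored profiles; a real clinical system would of course learn statistical weights from thousands of cases rather than compare a handful of hand-written vectors.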
The use of the model
The internal model matters, because it is on the basis of this model that the psychiatrist or the machine reaches a diagnosis and makes decisions. The doctor or the neural network will refer to it when treating new patients. But other factors will certainly come into play for the human being, such as his salary or workload. Likewise, for the artificial intelligence, the cost of the hardware and the time required for its training or use will be taken into account.
So is the human psychiatrist more trustworthy than the machine?
Endowed with empathy, the psychiatrist can better detect the cues that betray what patients leave unsaid. This ability is all the more important in sensitive cases, such as suicidal patients or victims of domestic violence. He would also be better able to identify the patient's actual problems and to draw on information from very different time frames. He is, in fact, more flexible than an AI, in the sense that he can redirect the exchange in real time according to the answers obtained.
The lack of physical presence is also a major limiting factor for AI in psychiatry, yet this element is crucial in the management and treatment of mental illness. The clinical interview is characterized in particular by doctor-patient dialogue, in which speech and non-verbal communication are essential.
For now, the psychiatrist clearly remains more reliable for the care and treatment of mental illness. However, artificial intelligences serve as valuable tools that help speed up data collection and improve the quality of diagnosis and treatment.
Should we forget the concept of virtual psychiatrist?
As with most AI projects in other sectors, the prowess of recent systems does not erase their limiting factors. For now, artificial intelligence remains a tool rather than a substitute for a role held by humans. But no one knows what the future holds, given that the AI we know today is still so-called "weak" AI.
Julia: the virtual psychiatrist capable of diagnosing severe depressive disorders
Earlier this month, the team at the Sanpsy laboratory announced that it had designed a revolutionary virtual psychiatrist. According to their description, it is the first virtual human capable of conducting a psychiatric interview to diagnose depression. Their work has been published in the journal Scientific Reports.
The AI is called Julia. She has a fairly conventional face and voice. For added realism, she was trained to make gestures and facial expressions matched to the conversation. To do this, the engineers used motion-capture techniques.
"The interview between this virtual human and the patient is constructed from a validated medical reference (editor's note: based on the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders created by the American Psychiatric Association), enriched with turns of phrase and gestural and facial interactions that reinforce the patient's commitment to the exchange."
Pierre Philip, hospital doctor at the University Hospital of Bordeaux and director of the Sanpsy unit (sleep – addiction – neuropsychiatry) of the CNRS
Pierre Philip and his team found that the AI's performance increases with the severity of depressive symptoms. The results published so far are therefore promising, but they still do not match those of a real doctor, who remains more reliable for diagnosing mild symptoms.
"The challenge is not to replace the doctor, but to help the doctor diagnose unidentified depressive patients more quickly and, possibly, in the future, ensure quality medical follow-up at the patient's home."
According to its designers, Julia was designed to guarantee quality health care at the patient's home. So far, the algorithm has recorded "an acceptability score of 25.4/30" among patients.
An AI that does it better than several human radiologists
In 2020, a study of 25,000 patients in the US and UK was conducted to test an AI for breast cancer detection. Scott McKinney of Google Health in Palo Alto, California, and his colleagues evaluated a deep-learning algorithm on its ability to recognize breast cancer on mammograms.
In this study, the machine had no preconceived idea of what breast cancer is. It simply learned from the images over millions of iterations. Ultimately, it far exceeded the performance of several human radiologists.
“In this study, AI was simply better at doing this task than human comparators, and I think that really shows how powerful this technology already is. And given this ongoing process of self-improvement that these algorithms go through, you can project 10 years from now… where we might think these breast cancer algorithms will be. So I think that really sets the stage to start looking more specifically at cases that might have more relevance to psychiatry.”
Richard Cockerill, MD, assistant professor of psychiatry at Northwestern University Feinberg School of Medicine in Chicago