The study, from Stanford University, shows that software on your phone that analyzes your face and speech can detect depression based on your facial expressions and speech patterns:
The researchers fed video footage of depressed and non-depressed people into a machine-learning model that was trained to learn from a combination of signals: facial expressions, voice tone, and spoken words. The data was collected from interviews in which a patient spoke to an avatar controlled by a physician.
In testing, the model detected whether someone was depressed more than 80% of the time. The research was led by Fei-Fei Li, a prominent AI expert who recently returned to Stanford from Google.
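To make the idea concrete, here is a minimal sketch of the kind of multimodal classification the study describes: features from each signal (facial expressions, voice tone, spoken words) are combined and fed to a single classifier. Everything here is a toy illustration built on made-up data – the feature names, the fusion-by-concatenation design, and the logistic-regression classifier are my assumptions, not the actual Stanford pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_samples(n, depressed):
    # Toy per-modality features; the shift between groups is illustrative only.
    shift = 1.0 if depressed else -1.0
    facial = rng.normal(shift, 1.0, size=(n, 3))  # e.g. gaze, brow, smile scores
    vocal  = rng.normal(shift, 1.0, size=(n, 2))  # e.g. pitch, pause length
    text   = rng.normal(shift, 1.0, size=(n, 2))  # e.g. word-sentiment scores
    # "Fusion" here is simply concatenating the three feature vectors.
    return np.hstack([facial, vocal, text])

X = np.vstack([make_toy_samples(100, True), make_toy_samples(100, False)])
y = np.array([1] * 100 + [0] * 100)  # 1 = depressed, 0 = not

# Train a logistic-regression classifier on the fused features
# with plain gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)      # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))         # predicted probability of "depressed"
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient step on weights
    b -= 0.5 * np.mean(p - y)            # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

On this deliberately easy synthetic data the classifier separates the two groups almost perfectly; the real study's roughly 80% figure came from far messier clinical interview data, which is exactly why the caveats below matter.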
The article did caution that, due to the way the study was conducted, the therapeutic applications aren’t clear. According to David Sontag, an assistant professor at MIT:
…the training data was gathered during an interview with a real clinician, albeit one behind an avatar, so it isn’t clear if the diagnosis could be entirely automated. “The line of work is interesting,” he says, “but it’s not yet clear to me how it’ll be used clinically.”
I have two thoughts about this type of treatment. First, this is fascinating – and this type of technology could be helpful in closing the gap between those who have access to treatment and those who don’t. In addition to studies like the one above, Stanford has also developed apps which can be used to treat depression – and which apparently work. I’ve touched on this topic in previous entries as well: Apps which treat depression can work.
In other words, apps and automated programs can help to treat depression. That’s fascinating to me – I never would have believed that depression could be treated without a live human being involved, but apparently it can work.
On the other hand, there are some rather frightening potential applications of this sort of treatment. First is privacy: I am sure that any app operating right now uses the strictest of privacy measures and data safeguards, but as we have seen repeatedly, data hacks and breaches occur with relative consistency. This has some very serious implications for something like text therapy or therapy which occurs over a device, because it raises the question: What data is recorded, and how could it potentially be accessed? I mean, I’m pretty open about the fact that I receive treatment, but even I wouldn’t want the items I discuss with my therapist broadcast to the whole world. Is that possible with these apps? I don’t know, and it may not be. But there are real privacy and technological concerns which must be addressed when it comes to therapy delivered electronically.
Second: Can someone be diagnosed against their will? I don’t think so. Not yet, anyway. Further down the line, though, programs like the one discussed above may also raise issues of consent. It seems to me that the Stanford program is not yet ready to be used in a public or even therapeutic setting. But, when it is, will people be able to use it on others without their consent? That…that’s kind of a scary thought.
These are questions which are only somewhat hypothetical. Technology is clearly advancing, and I can only hope privacy and ethical safeguards can advance with it.
Let us know your thoughts in the comments below!