From telemedicine to ‘therapy bots,’ professor studies ‘virtual care’ spectrum

Submitted on Monday, December 18, 2017

Imagine a therapist who exists only on Facebook. Or a doctor who is just an avatar, though that avatar may occasionally be the embodiment of an actual doctor. In recent years, a variety of apps and programs have been designed to push artificial intelligence into the mental health space, says Cosmin Munteanu, assistant professor in the Faculty of Information.

Munteanu, who studies ethics in human-computer interaction, looks at issues related to “virtual therapy” or “virtual care,” including the effectiveness of so-called “therapy bots.” His research covers everything from “classic telemedicine” to chat avatars powered by humans to AI chat bots like Woebot, which operates on Facebook and has garnered considerable media attention of late.

Woebot uses cognitive behaviour therapy, which its creators say is a scientifically validated approach to mental health care. Munteanu is researching where “AI agents” like Woebot fit into the mental health care ecosystem.

Woebot’s creators describe it – or, depending on your perspective, him or her – as “your charming robot friend who is ready to listen, 24/7, track your mood, give you insight, teach you stuff and help you feel better.”

As they study people’s relationships with “AI agents” like Woebot, Munteanu and Faculty of Information master’s student Jaisie Sin are posing questions like: “How do people trust them? Do they trust them too much? What are the factors that lead to adoption in the short term and long term? What are the dangers and the implications?”

While the effects of “virtual” psychological treatment are only beginning to be thoroughly studied, earlier research on humans’ relationships with machines shows we are prone to form emotional attachments even with the likes of a Roomba robot vacuum cleaner.

“People cry when it breaks down because we anthropomorphize these things,” Munteanu recently told CBC Radio’s The Current. “We attach human feelings even for something that’s just a mechanical disc with wires and wheels that just vacuum cleans. We are not attaching the same way to our regular vacuum cleaner. We’re attached to someone that appears to be intelligent.”

Munteanu’s team focuses its virtual therapy research on issues of trust, usability and adoption, primarily among seniors. Along with his cross-appointment at the Faculty of Information, he is a co-director of the Technologies for Ageing Gracefully lab and an assistant professor at the Institute of Communication, Culture, Information and Technology at the Mississauga campus.

“We are becoming more friendly to these things,” he says referencing chat bot therapists. “They are becoming more friendly to us, and that establishes trust, so the question is, how much do we trust them?”

Woebot operates on Facebook Messenger

Munteanu is also interested in how this trust in an AI agent compares to trust in a real-life therapist, who may be more intimidating and not as “friendly” as therapy bots.

Woebot’s creators say it is not designed to replace real-life therapy or counselling but to complement it by offering cognitive behaviour therapy. People can benefit from it in the way they would from a self-help book. It is designed to fill a gap in the mental health system.

In his lab, Munteanu, who describes himself as a techno-optimist, is exploring where therapy bots could fit into “the ecosystem of care. The way we see this is we need to be careful where we place the technology and where in the chain of therapy the technology comes in.” While tech-savvy older people often like AI bots, Munteanu says there are “a lot of red flags” when it comes to aging and dementia.

On top of all this, there is the issue of regulation: whether it is needed and, if so, how it will be done and by whom.

“When you see a therapist, the therapist is responsible. When you talk to AI, who’s responsible? Is it the programmer? Is it the company that packages it? Is it the people who collect the data because AI doesn’t work without large amounts of data? Is that where the legal responsibility is? Is it shared? We’re not sure yet.”

Munteanu notes that some AI therapy products currently carry a disclaimer saying they are for entertainment purposes, which he calls “ironic” under the circumstances.

Another issue is the data collected by this kind of bot. When doctors record therapy conversations, they almost always must comply with policies and regulations, but no such rules govern therapy bots, which Munteanu says are far ahead of the regulatory system.

Woebot is collecting data based on what users or “bot patients” tell it during their sessions, which may amount to little more than a Facebook message or two at a time.

And then there’s the issue of how well the technology works. As even casual users of speech technologies like Siri know, there’s still plenty of room for miscommunication. And with text processing, says Munteanu, there are none of the visual cues we get from in-person discussions. How would a chat bot determine whether a patient is telling the truth when they text, “it’s all good now and I feel better”?

“There needs to be a human at the other end,” he explains, comparing the situation to a technologically advanced jet plane that still has at least two pilots on the flight deck. The studies underway in his lab and elsewhere are designed, says Munteanu, “to make room for these technologies to develop in the proper way.”

Cosmin Munteanu will teach INF2169H User-Centred Information Systems Development during the 2018 winter term. You can hear the full interview with him and Woebot’s creators at CBC’s The Current.
