A JAMA article about AI-based conversational agents, published in August, compelled Founder Alison Darcy and CCO Athena Robinson, along with Scientific Advisor Paul Wicks, to write an opinion piece about how such agents should be understood, designed and regulated. In this Q&A, Alison shares the background behind the recently published Viewpoint.

Your Viewpoint (as JAMA calls op-eds) is in response to an article that explored the clinical, legal and ethical implications of AI-based conversational agents (CAs). What prompted you to write a response?

We’re thrilled to see dialogue about this topic because it’s an important one. But we also thought it was important to clarify that not all conversational agents are built to replace humans, as the article implied. We responded to this idea because it can be a loaded and sometimes emotive assertion. We think of Woebot, and CAs more broadly, as one tool in the toolbox for building a robust and comprehensive ecosystem of mental health services. Other CAs might be built to replace humans, and we agree that those should be carefully considered, but we feel that existing regulatory frameworks are adequate to assess a technology’s intended use and risk-benefit ratio.

This idea about agents replacing humans has been around for a while. Why does it persist, and why is it so emotive?

Because these agents use conversation, just as humans do. So it’s very easy for people to assume that the intended use is to replace human connection. But a conversation with a bot is usually about fulfilling some kind of task. It is a very sophisticated and natural interface, not a replacement for anything that a human can do uniquely well. In fact, one of the benefits of CAs is that they are explicitly not human. Studies have shown that people are more willing to disclose to an AI than to a human, and we know that chatbots are preferred for talking about highly stigmatized subjects.

Of course it depends on the bot…

Absolutely. We learn the boundaries of Siri or Google Assistant by talking to them. You find the edge of what the agent can do, and then you tend to pull back, calibrate your expectations, and use it to fulfill the task effectively. This is why transparency is so important. CA designers need to go out of their way to signal appropriate expectations of the role of the particular CA.

How does Woebot Health set user expectations?

We’re transparent about what Woebot can and cannot do. For example, during onboarding, all users must acknowledge that Woebot is not meant to be a crisis service and cannot intervene. Our Safety Net Protocol is designed to call out the limitations of the service. If Woebot detects language suggestive of crisis, the user is told the reason why, something like “Because you said [detected language], it sounds like you might be dealing with something that I’m not equipped to help you with…” and Woebot directs the user to a curated list of emergency resources. That’s a really good example of how you can design something that clearly calls out its role and doesn’t cross the boundary of competency. Transparency is crucial. It’s the foundation of any relationship with Woebot, because that relationship is built on trust.
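To make the shape of that handoff concrete, here is a minimal sketch of a crisis-detection flow in Python. It is purely illustrative, not Woebot’s actual Safety Net Protocol: the phrase list, function names, and resource entries are hypothetical placeholders, and a real system would use far more sophisticated language detection and clinically vetted resources.

```python
# Illustrative sketch only -- NOT Woebot's actual Safety Net Protocol.
# The phrase list, function names, and resources below are hypothetical
# placeholders; a real system would use far more robust detection.

CRISIS_PHRASES = {"hurt myself", "end my life"}  # hypothetical examples

EMERGENCY_RESOURCES = [
    "988 Suicide & Crisis Lifeline (call or text 988, US)",
    "Crisis Text Line (text HOME to 741741, US)",
]

def detect_crisis_language(message: str) -> str | None:
    """Return the matched phrase if the message suggests crisis, else None."""
    lowered = message.lower()
    for phrase in CRISIS_PHRASES:
        if phrase in lowered:
            return phrase
    return None

def safety_net_reply(message: str) -> str | None:
    """When crisis language is detected, explain why the agent is stepping
    back and hand off to emergency resources; otherwise return None so the
    normal conversation flow continues."""
    detected = detect_crisis_language(message)
    if detected is None:
        return None
    resources = "\n".join(f"- {r}" for r in EMERGENCY_RESOURCES)
    return (
        f'Because you said "{detected}", it sounds like you might be '
        "dealing with something that I'm not equipped to help you with. "
        f"Here are resources that can help right now:\n{resources}"
    )
```

In a flow like this, the agent would check `safety_net_reply` on every incoming message and fall through to the normal conversation only when it returns None, so the boundary of competency is enforced before any other response is generated.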

What is Woebot Health’s approach to regulation?

We embrace it, because we’ve always believed that anyone claiming clinical benefit should demonstrate clinical outcomes. The great thing about regulation is that it’s a universally understood bar of quality. There are existing regulatory frameworks that work just fine, and because they are mature, those frameworks are the best placed to judge a conversational agent, or any other technology, that claims clinical benefit.