Koko, an AI startup focused on mental health, received backlash for replacing counselors with an AI chatbot for mental health support without informing its users. Koko's application tested AI-generated responses, produced with GPT-3, on roughly 4,000 recipients without their knowledge or consent.
Koko is a mental health service that offers conversational support through a mobile application. It was founded by Rob Morris, who described how the application employed a “co-pilot” strategy, with professionals supervising the AI as it generated responses.
Rob wrote, “We used a ‘co-pilot’ approach, with humans supervising the AI as needed. We did this on about 30,000 messages.” He added that the AI-composed responses were rated significantly higher than those written by humans. Moreover, as the company noted, response times fell by over 50%, to well under a minute.
According to Rob, the co-pilot strategy was only an experiment to gauge what the future of conversational AI support bots might look like; however, it angered many users, who found it unethical. One user commented, “I can’t believe you did this to people who needed mental health care. I’m shocked.”
While the experiment was successful based on the metrics, the criticism is not unwarranted: running such an experiment on people seeking mental health support could have gone horribly wrong. AI is known to produce irrational and unethical responses, which would pose a serious risk to those who are already vulnerable.