In fact, we pondered this very question back in our April podcast.
But hey, relax. Because it looks like chatbots have started worrying about us.
Meet Woebot: an artificially intelligent personal therapist running on Facebook Messenger. Designed to help users address mental health challenges, Woebot delivers mood-management counselling and emotional support via human-like conversations.
His personality is modelled on an inspired combination of Spock – for logic – and Kermit the Frog – for compassion and emotional vulnerability. It’s not easy being green, Captain.
Creds like that count for a lot in the wild west that is today’s AI development frontier.
Indeed, last month Andrew Ng, co-founder of Google Brain (Google’s deep-learning research project), became Woebot’s new chairman. Heralding AI as “the new electricity”, Ng declared a special interest in mental health care as the sector ripest for transformation through AI.
For many people suffering from depression and anxiety, the big barriers to seeking professional help are cost, logistics and stigma. Woebot leapfrogs all three.
Its creator, psychologist Alison Darcy, says it was never designed to replace real therapy but might be a lifeline for people who are not quite ready to bare their soul to another human. “There’s nothing like venting to an anonymous algorithm to lift the fear of judgement,” says Darcy.
Since its launch in June, Woebot’s already had conversations with more users than a typical therapist will in an entire career. Most users interact with it every day.
So here’s the thing.
For all our efforts to make chatbots seem as human as possible, sometimes we might just prefer talking to a machine – whether it’s a brisk, chit-chat-free interaction with your bank or an ephemeral confessional with a non-judgy therapy bot.
Mankind has been suspicious of robots since the word was first coined by Karel Čapek in his 1920 sci-fi play R.U.R.
But if love for Woebot is anything to go by, general resistance is melting, particularly among millennials. The increasing use of chatbots for humanitarian and wellbeing purposes is helping talkative tech win fans.
Suicide-prevention buddybots and griefbots are cockle-warming new ways in which AI is taking care of us.
Or at least helping us take care of us.
By recording conversations with his dying father, James Vlahos created Dadbot, a chatbot he uses to interact with his father’s stories and memories after his death.
“I’m celebrating his life, his legacy, his stories, his jokes, his language,” says Vlahos. “While not bringing him back, it makes me feel closer to him, and a sense that death is not a complete dissipation – that this person is not utterly, utterly gone.”
Therapy and counselling are labour-intensive practices, and chatbot economics stack up well, particularly in hard-pressed healthcare systems.
As investment in AI ramps up across all industries and the intelligence gets smarter, it will become easy to trick people into thinking they’re talking to a human. But there’s every suggestion we won’t need to.
Maybe bots won’t bring people closer together.
But that’s OK.
Because in ways we hadn’t imagined, they might just help us keep ourselves together.