Chatbots have come a long way from Clippy, the much-maligned paperclip-shaped Microsoft Office assistant of the late 90s and early 00s. Chatbots are now regularly used by airlines, banks, and e-commerce sites, and we’ve come to rely on them to solve common problems. And while we talk to Siri and Alexa every day, we don’t talk to them about personal problems the way we would a friend. We don’t expect chatbots to provide advice about how to deal with anxiety or manage depression. But what if they could? Researchers at the Centre for Artificial Intelligence Research (CAIR) at the University of Agder are developing a chatbot that will help young people struggling with social health problems such as social isolation, anxiety, eating disorders, depression, and self-harm.
Raheleh Jafari, a postdoctoral researcher from Iran, is one of the CAIR researchers working on the chatbot. She is an expert in fuzzy logic, a mathematical approach that lets a computer reason in degrees rather than absolutes, helping the chatbot think more like a person would. At the University of Agder, she uses her expertise to generate the data that is used to train the chatbot to answer questions accurately.
Since users are going to be asking the chatbot questions about serious topics like anxiety, depression, and suicide, it’s incredibly important that the bot is able to understand their questions and respond appropriately. To teach the bot how to respond, Raheleh and her coworkers started by generating thousands of potential questions and answers. “A chatbot is only as good as the data it is given,” she explains. “If you don’t give your chatbot enough data and then you ask it a question it hasn’t seen before (i.e. that’s not in its memory), it won’t reply correctly.”
This is where fuzzy logic comes in. Rather than come up with every single potential question and answer that the chatbot might need to carry out a conversation, Raheleh uses fuzzy logic to get the chatbot to think more like a real person. Computers are traditionally programmed to perceive things as either true or false. According to such a computer, a person is either “tall” or “not tall”. But humans recognize that someone who is 5’7’’ (170 cm) is “not tall” in a different way than someone who is 5’1” (155 cm). We can also see that someone who is 6’6” (198 cm) and someone who is 6’2” (188 cm) are both “tall”, but not equally so. When using fuzzy logic, the computer thinks not in terms of “tall” and “not tall”, but rather in terms of different degrees of height. In the case of the chatbot, this means that if you give it a data set of 2,000 possible answers, the bot can apply fuzzy logic to expand the data to over 10,000 possible answers. The chatbot is then programmed to learn from this data with algorithms and machine learning techniques.
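The “tall” example above can be sketched as a fuzzy membership function, the basic building block of fuzzy logic. This is only an illustration of the general idea, not the researchers’ actual model; the 160 cm and 190 cm thresholds below are assumed values chosen for the sketch:

```python
def tall_membership(height_cm):
    """Degree of membership in the fuzzy set "tall", from 0.0 to 1.0.

    Heights at or below 160 cm are fully "not tall" (0.0); heights at
    or above 190 cm are fully "tall" (1.0); heights in between belong
    to the set only partially. The thresholds are illustrative.
    """
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the thresholds

# The heights from the article get graded labels, not a binary yes/no:
for cm in (155, 170, 188, 198):
    print(cm, round(tall_membership(cm), 2))
```

Where a classical program would answer only “tall” or “not tall”, this function answers with a degree: 155 cm scores 0.0, 170 cm scores about 0.33, 188 cm about 0.93, and 198 cm scores 1.0. Reasoning in degrees like this is what lets a fuzzy system generalize beyond the exact cases it was given.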
Raheleh and her colleagues have used this approach to develop a high-quality chatbot model that can talk to and counsel young people aged 16 to 25 with divorced parents. Thanks to the success of the divorce chatbot model, Raheleh was able to hire six undergraduate students to work on the chatbot project. She and her colleagues are now working on expanding their model to tackle other social problems young adults face.
The collaborative environment is what Raheleh enjoys most about working at CAIR. “No one here is jealous or status obsessed. Everyone supports each other to lift up the group,” she says. “When someone improves, everyone else is happy because the group is improving. There’s no I here, it’s we.” Raheleh has already seen herself learn and improve so much since starting her postdoc at the University of Agder last year, and it has helped her improve the chatbot too.