Inspiration
Following the closure of Nightline at the University of Bath, and the resulting gap in non-directive listening services, we built this chatbot to similar guidelines so that University of Bath students can continue to access a Nightline-style service while it remains inoperative.
What it does
BathLine, our AI chatbot, follows non-directive principles and asks open-ended questions to facilitate conversations about mental health. It focuses on empathising with the user, building trust and giving them a space to vent their problems. These principles were chosen based on validated psychological interventions and on meta-analyses of mental health chatbots, which often highlight user dissatisfaction with bots that cannot simply listen without offering advice (Denecke, Abd-Alrazaq and Househ, 2021; Chandra Mohan Dharmapuri et al., 2022). BathLine operates only between 8pm and 8am, to mitigate the over-reliance problem common to other mental health chatbots. It deletes all user information once each response is produced, to maintain confidentiality.
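The 8pm-8am availability window described above could be checked with a few lines of Python. This is a minimal sketch of the idea only; the function and constant names are our own illustration, not BathLine's actual code.

```python
from datetime import datetime, time

OPEN_FROM = time(20, 0)   # 8pm
OPEN_UNTIL = time(8, 0)   # 8am

def is_within_hours(now: datetime) -> bool:
    """Return True when the chatbot should accept conversations.

    The window spans midnight, so a time counts as "open" if it is
    at or after 8pm OR strictly before 8am.
    """
    t = now.time()
    return t >= OPEN_FROM or t < OPEN_UNTIL
```

Because the window crosses midnight, the two conditions are joined with `or` rather than `and`; a same-day range (say 9am-5pm) would use `and` instead.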
How we built it
Building BathLine was particularly difficult as our team included no computer science students, and none of us had much coding experience beyond basic modelling programs. We split into two pairs, one building the frontend and one the backend. The backend pair worked on building an API and training the AI for our chatbot in Python. We were intent on training our AI ethically, so we used Nightline training example transcripts and wrote additional transcripts based on them to train our model. The frontend pair developed the webpage, which required learning TailwindCSS and JSX (JavaScript XML). We designed the page to be used in the dark by distressed individuals, with a colour theme of soothing deep blues.
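A non-directive chatbot built on the OpenAI API boils down to pairing a listening-focused system prompt with the conversation history. The sketch below shows that shape only; the prompt wording and the `build_messages` helper are our own hypothetical illustration, not BathLine's exact implementation.

```python
# Hypothetical non-directive system prompt: the model is told to listen and
# reflect, never to advise -- the core principle described above.
SYSTEM_PROMPT = (
    "You are a non-directive, empathetic listener. Ask open-ended questions, "
    "reflect the user's feelings back to them, and never give advice, "
    "solutions, or judgements."
)

def build_messages(history: list[dict], user_text: str) -> list[dict]:
    """Assemble the message list sent to a chat completion endpoint."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_text}]
    )

# The actual request would need an API key, e.g.:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages([], "I've had a rough week."),
# )
```

Keeping prompt assembly in one small function also makes it easy to drop the whole exchange after each response, matching the delete-after-output confidentiality approach.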
Challenges we ran into
Training the AI was incredibly difficult, with many roadblocks thrown our way. We repeatedly had issues with the API keys, and training the model was an ever-increasing challenge. As this was our first attempt at training an AI model, we did not have the most efficient approach, and we found we did not have enough time to train it to produce consistent results. In addition, due to confidentiality issues, no dataset was available online for us to ethically train our model, so we had to create our own example transcripts. The frontend had many issues of its own: our unfamiliarity with the languages meant debugging took a long time, and what might have been a quick part of the project ended up taking far longer.
Accomplishments that we're proud of
Everything. We are not computer scientists. This was an incredibly difficult endeavour and we are very proud of ourselves.
What we learned
We were forced to learn on the go, introducing ourselves to new coding languages and AI training concepts we had never encountered before. By the end of the Hackathon, we were able to somewhat confidently code and debug our project, which felt like a huge achievement.
What's next for BathLine
In its current state, BathLine is not suited to high-risk cases involving suicide or safeguarding, as the protocols for those cases are highly specific and our AI could not be trained on them in so short a time. We aim to add a sound element, with an accompanying brightness feature, to signal to inactive users that the AI is "still listening" and to help detect potential safeguarding risks. This would address a weakness of current chatbots around interaction and cues that signal "being heard", a major aspect of successful interventions. Further integration with university safeguarding services and emergency services, alongside university signposting, is the next step to making BathLine a useful resource for Bath students.
What is NOT next:
- Edits to the website design to include further links: the site is built to let users jump straight into conversation and build trust with the bot, rather than be overwhelmed with other information while in a troubled state (Mbawa, 2021).
- Edits to the website design to make it brighter: a light mode will be added for accessibility, but first-time users will be directed to the current version, as the colours were chosen to work when viewed in the dark (given the hours it operates).
- Training the AI on external therapeutic datasets: this avoids the bias, the focus on WEIRD (Western, Educated, Industrialized, Rich, and Democratic) communities, and the other ethical issues that plague current mental health chatbots. As an extension of this point, the AI will not be trained on confidential therapeutic transcripts.
Built With
- javascriptxml
- openai
- python
- tailwindcss