A safer digital world, one chat at a time
Sustainable Development Goals: 5, 9
- SDG 5 - Gender Equality
- SDG 9 - Industry, Innovation and Infrastructure
The Safer Chatbots project improves young people's experiences with automated text services when they seek help.
Nandi’s story is representative of one of the ways in which young people around the world are using automated services like chatbots to disclose traumatic experiences, including Gender-Based Violence, in an attempt to get help. But as highlighted in a UNICEF Learning Brief, too often they fail to get the support they need. They may instead receive error messages, be nudged towards irrelevant information, or, worse, receive an automated response that exacerbates their feelings of isolation or guilt, doing further harm in the process.
The Safer Chatbots project, led by the UNICEF East Asia Pacific Gender Section, aims to change this: first by raising awareness of this cross-sectoral problem, and then by offering a solution in the form of tried-and-tested, open-access safeguarding mechanisms. Our approach was developed in collaboration with experts in technology, Child Protection and Gender-Based Violence, and piloted in multiple countries and languages to ensure its replicability. By implementing the Safer Chatbots guidelines and technical templates, you can ensure that girls like Nandi:
- receive an instant, automated acknowledgement that what they typed may indicate they’re in distress;
- are offered the choice to correct the chatbot’s assumption OR seek further help from a trained (human) professional;
- are provided with warm, empathic messages of encouragement if they confirm they’re in distress;
- are given clear referral details for appropriate services, as well as a safe word to discreetly trigger the information again in the future.
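The four steps above can be sketched as a simple message-handling flow. This is an illustrative sketch only, not the actual Safer Chatbots templates: the keywords, safe word, messages and referral text below are all hypothetical placeholders that a real deployment would replace with locally validated content.

```python
# Illustrative sketch of the keyword-based safeguarding flow described above.
# All keywords, messages and referral details are hypothetical placeholders.

DISTRESS_KEYWORDS = {"hurt", "scared", "abuse", "help me"}  # hypothetical examples
SAFE_WORD = "sunflower"  # hypothetical safe word for discreet re-triggering

REFERRAL_INFO = "You can reach a trained counsellor via the local helpline."


def handle_message(text: str) -> list[str]:
    """Return the chatbot's replies for one incoming message."""
    lowered = text.lower()

    # The safe word discreetly re-triggers the referral details at any time.
    if SAFE_WORD in lowered:
        return [REFERRAL_INFO]

    # Step 1: instant acknowledgement when the message may indicate distress,
    # plus the choice to correct the assumption or reach a human professional.
    if any(keyword in lowered for keyword in DISTRESS_KEYWORDS):
        return [
            "It sounds like something difficult may be happening. Did I get that right?",
            "Reply YES to talk to a trained professional, or NO to continue.",
        ]
    return []


def handle_confirmation(confirmed: bool) -> list[str]:
    """If the user confirms distress: empathic encouragement, referral and safe word."""
    if not confirmed:
        return ["Thanks for letting me know. Let's carry on."]
    return [
        "I'm sorry you're going through this. You are not alone, and it is not your fault.",
        REFERRAL_INFO,
        f"Type '{SAFE_WORD}' at any time to see this information again.",
    ]
```

In practice this logic lives in the flows of a chatbot-building platform rather than in standalone code, but the structure is the same: detect, acknowledge, confirm, encourage, refer.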
The Safer Chatbots implementation guidance offers options for all levels of chatbot (with or without AI) and is designed for popular chatbot-building platforms such as Turn, TextIt, RapidPro and Bothub - but our approach is platform-agnostic. In addition to the simpler, keyword-based mechanisms covered by our DIY implementation guidelines, we teamed up with Girl Effect and Weni to develop an open-access AI model that responds to users in distress with a higher degree of sophistication and accuracy. This ‘plug-in’ solution is available for reuse and adaptation, free of charge. We are actively seeking partners with access to anonymised disclosure examples to help us improve the model’s global accuracy and relevance.