OpenAI is expanding its artificial intelligence safety measures with the introduction of a new 'Trusted Contact' feature for its conversational AI, ChatGPT. This initiative aims to provide an additional layer of human support during potentially serious mental health situations, particularly concerning self-harm.
How 'Trusted Contact' Works
The 'Trusted Contact' feature lets ChatGPT users designate a trusted adult, such as a friend or family member, as their emergency contact. Should ChatGPT's automated systems detect conversations indicating a serious risk of self-harm or suicide, a trained human reviewer assesses the interaction. If a significant safety concern is confirmed, the system can then send a notification to the designated trusted contact.
The Process for Setting Up a Trusted Contact:
- Selection: Users choose one trusted adult as their emergency contact within ChatGPT.
- Invitation: An invitation is sent to the chosen individual via email, SMS, WhatsApp, or in-app notification.
- Acceptance: The selected person must accept the invitation within one week. If they decline or do not respond, the user will need to choose another contact.
- Detection & Review: If ChatGPT's AI detects conversations around suicide or self-harm, trained human reviewers may assess the conversation for potential danger.
- Notification: If reviewers identify a serious safety concern, ChatGPT can send a notification to the trusted contact, encouraging them to check in with the user.
OpenAI emphasizes that neither chat details nor conversation transcripts are shared with the trusted contact. The notification simply encourages the contact to reach out to the user, respecting user privacy while facilitating real-world connection during difficult moments.
“Sometimes, when you are having a hard time, it can feel difficult to reach out or ask for help directly. The trusted contact feature is designed to support real-world connections in those moments,” OpenAI stated.
Users are advised to choose someone with whom they feel comfortable being honest and whom they trust to respond with care and empathy. The invitation sent to the trusted contact clearly explains ChatGPT's role and the nature of safety-concern notifications.
This expansion of AI safety measures underscores OpenAI's ongoing commitment to user well-being and responsible AI development, integrating human support into automated systems for critical situations.