
On Tuesday, Google shared for the first time details about how its AI chatbot Gemini is designed not to act like a companion or claim to be human when interacting with minors.
The information, published by the company in a blog post, was announced among changes to better support the mental health of users engaging with Gemini.
Child safety and mental health experts have long worried that companion-like chatbots are too dangerous for teens to use. Last year, the advocacy group Common Sense Media rated the teen and under-13 versions of Gemini as "high risk" after its researchers determined that the chatbot exposed kids to inappropriate content, including sex, drugs, alcohol, and unsafe mental health "advice."
The group recommended that no one under 18 turn to an AI chatbot for companionship or mental health support.
Google said that Gemini has "persona protections" when engaging with under-18 users. The longstanding constraints are designed to prevent emotional dependence and avoid "language that simulates intimacy or expresses needs," according to Google. Other safeguards should help discourage the chatbot from bullying and other types of harassment.
"Our safety efforts continue to evolve and reflect our ongoing commitment to creating a healthy and positive digital environment where young people can explore and learn with confidence," Google said in the company's blog post.
Google also announced that it updated Gemini to streamline access to mental health resources for users who may need them. A new "one-touch" interface will offer varied connections to crisis hotlines, including via chat, call, and text.
That interface will appear throughout a conversation with Gemini once it's activated. Google said that it is trying to prioritize helping users receive human support. Additionally, Gemini's responses are supposed to encourage help-seeking instead of validating harmful behaviors and confirming false beliefs.
In March, Google and its parent company, Alphabet, were sued by the family of an adult man who, the family alleges, killed himself at Gemini's urging.
"Gemini is designed not to encourage real-world violence or suggest self-harm," Google said in a statement at the time. "Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect."
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email info@nami.org. International resources are also available.
