
Introduction
Google’s flagship AI product, Gemini, is facing serious scrutiny. A new report by Common Sense Media, a nonprofit focused on child safety in technology, has rated Gemini’s experiences for kids and teens as “High Risk.”
While Google has touted Gemini as an advanced conversational AI with safety guardrails, the report highlights troubling gaps — especially when it comes to protecting children from harmful or inappropriate content.
This raises critical questions for parents, educators, and tech leaders: Can AI platforms like Gemini truly be safe for younger users? Let’s explore what the assessment revealed, why it matters, and what families should do next.
Common Sense Media’s Risk Assessment
According to the report, Google’s Gemini AI does some things right — for example, it clearly states it’s a computer, not a friend. This is important because AI “friendship” has been linked to delusional thinking and emotional harm in vulnerable users.
However, the organization flagged several major concerns:
- Adult AI under the hood: The “Under 13” and “Teen Experience” versions of Gemini appear to be essentially the adult version with extra filters, not AI built from the ground up for kids.
- Unsafe content exposure: Gemini could still surface inappropriate or harmful material, including references to sex, drugs, and alcohol, as well as unsafe mental health advice.
- One-size-fits-all approach: Younger users need different guidance than older teens, but Gemini doesn’t sufficiently adapt responses to different age groups.
- High overall risk rating: Both the under-13 and teen tiers received a “High Risk” label, despite Google’s added safeguards.
Why This Matters for Families
AI safety isn’t just about avoiding offensive content. It’s about long-term mental health and well-being, especially for younger users.
Recent cases have shown how unsafe AI interactions can turn dangerous:
- A wrongful death lawsuit was filed against OpenAI after a 16-year-old died by suicide following prolonged conversations with ChatGPT.
- AI companion platforms like Character.AI have been linked to reinforcing unhealthy behaviors in vulnerable teens.
For parents, this means vigilance is key. Even with filters, AI isn’t a substitute for human guidance or mental health support.
Apple’s Interest in Gemini Raises Stakes
Adding to the controversy, reports suggest that Apple may use Gemini as the large language model (LLM) powering its upcoming Siri update.
If true, this could expose millions of teens and younger users to Gemini-powered experiences by default on iPhones and iPads. Without stronger safeguards, the risks flagged by Common Sense Media could scale dramatically.
Google’s Response to the Assessment
Google has pushed back against the “High Risk” label, noting that it:
- Implements specific safeguards for under-18 users.
- Regularly red-teams Gemini with outside experts to improve safety.
- Has filters to prevent relationship-like interactions with the AI.
- Is actively adding new safeguards after acknowledging some responses “weren’t working as intended.”
Still, critics argue that layering filters on an adult AI model isn’t enough. Kids need AI designed for them, not a watered-down version of tools built for adults.
How Gemini Compares to Other AI Models
Gemini isn’t the first AI platform to face this kind of scrutiny. Common Sense Media has rated other popular AI tools:
- Meta AI and Character.AI – “Unacceptable Risk.”
- Perplexity AI – “High Risk.”
- ChatGPT – “Moderate Risk.”
- Claude (Anthropic) – “Minimal Risk” (designed strictly for 18+ users).
This puts Gemini in the “High Risk” category alongside Perplexity, making it far from the safest option for kids.
What Parents Should Do Now
Parents can take proactive steps to protect their children while still embracing AI responsibly:
- Supervise AI use: Treat AI like any other online tool — monitor conversations and set boundaries.
- Use parental controls: Enable built-in safety features and content filters.
- Encourage open dialogue: Teach kids that AI is not a friend or therapist, but a tool that can make mistakes.
- Seek professional help: For mental health or emotional struggles, AI should never replace a trained professional.
Key Takeaways
- Gemini AI for kids and teens has been rated “High Risk” by Common Sense Media.
- Risks include unsafe content exposure, one-size-fits-all responses, and lack of kid-specific design.
- Google acknowledges issues but insists it’s improving safety protections.
- With Apple reportedly eyeing Gemini for Siri, the debate over AI safety for children will only intensify.
FAQs
1. Why did Common Sense Media label Gemini AI “High Risk”?
Because Gemini’s kids and teen versions still expose users to unsafe content and lack child-specific design, despite added filters.
2. Is Gemini AI safe for kids under 13?
No. The report suggests Gemini can still deliver inappropriate or harmful material, making it risky for younger users.
3. How does Gemini compare to ChatGPT and Claude?
Gemini is rated “High Risk,” ChatGPT “Moderate Risk,” and Claude “Minimal Risk” (for 18+ users only).
4. Will Apple really use Gemini for Siri?
Reports indicate Apple is considering Gemini for future Siri updates, which could expand its reach among teens and kids.
5. What can parents do to keep kids safe when using AI?
Supervise usage, enable parental controls, encourage open conversations, and seek professional help for emotional challenges.

