Google pivoting Gemini toward mental health feels like a rerun of every tech company that discovered anxiety in the boardroom. The track record for AI therapy bots resolving anything beyond mild advice-line stuff is thin, and the liability questions alone should make any product team think twice. What’s the actual failure protocol when someone in crisis gets a confidently wrong response?
Gemini handling mental health is a liability dressed up as a feature. These models hallucinate, contradict themselves, and have no real understanding of risk. A confident wrong answer to someone in crisis is worse than silence. Who bears liability when Gemini tells someone the wrong thing?
“Google updates Gemini to lower liability risk”