AI tool aims to catch abuse in group homes after shocking video

A whistleblower’s video exposing the abuse of an autistic teenager at a well-known autism center in New York has led to criminal charges—and inspired a father to create an AI-based solution to protect vulnerable individuals from future harm.

What we know:

Late last year, a whistleblower released a shocking video of a caregiver at the Anderson Center for Autism near Poughkeepsie assaulting an autistic teenage boy.

The caregiver, Garnet Colins, was arrested and pleaded guilty to endangering the welfare of an incompetent or physically disabled person.

Anil, the father of the child in the video, is now leading an effort to prevent similar incidents through technology. He has launched Guardian Watch AI, a real-time computer vision system that uses artificial intelligence to monitor video feeds, detect violent or abnormal behavior, and preserve evidence.

Why you should care:

According to Anil, an estimated 80 to 85% of abuse in the disabled community goes unreported. Cameras are not widely used in group homes, largely due to concerns about violating HIPAA privacy laws.

Supporters of Guardian Watch AI argue that AI-enhanced cameras can actually serve to protect both residents and staff, providing context and clarity in moments that may otherwise be misinterpreted or go unnoticed entirely.

How it works:

In a simulated training scenario, the Guardian Watch AI platform detected a test participant pretending to strike Anil, immediately flagged the interaction as abnormal, and generated a report.

The software notifies mandated reporters, who then determine whether an incident qualifies as abuse or another form of outburst, such as sensory overload or stimming behavior.

Anil said the platform will soon offer explainable AI features that describe what the software detected—for example, noting two people in a room, what they were wearing, who was seated or standing, and what action occurred.

Some organizations have resisted the use of cameras in residential care settings, citing privacy risks. Anil says, however, that Guardian Watch AI is hardware-agnostic and can work with any video system, including cellphone footage.

He believes the platform will not only hold abusive caregivers accountable but also support good staff who may be wrongfully accused.

"The goal here is also to have cameras so that it protects not only the individuals, but also the staff," Anil said.

What's next:

Guardian Watch AI is still in its early stages but already shows about 70% accuracy in detecting violent incidents. Anil hopes to improve that to 90–95% as the system is trained on more data.
