Google’s artificial intelligence (AI) has allegedly flagged some parents’ accounts for possible abuse over naked pictures of their sick kids.
According to a father, the tech giant flagged the images as child sexual abuse material (CSAM) after he used his Android smartphone to take photos of an infection on his toddler's groin. Google closed his accounts and reported him to the National Center for Missing and Exploited Children (NCMEC), which spurred a police investigation.
The case highlights how hard it can be to distinguish an innocent photo from potential abuse once it becomes part of a user's digital library, whether in cloud storage or on a personal device. It occurred in 2021, when some hospitals were still closed because of the pandemic.
As per the report, the father (whose name was not revealed) noticed swelling in his child's groin and, at a nurse's request, sent photos of the affected area ahead of a video consultation. The doctor prescribed antibiotics, which cleared the infection.
Two days after taking the photos, the father received a notification from Google stating that his accounts had been locked because of harmful content that was a severe violation of Google's policies and might even be illegal.
Like several other internet companies, including Twitter, Reddit, and Facebook, Google uses hash matching with Microsoft's PhotoDNA to scan uploaded images for matches against known CSAM. The technology led to the 2012 arrest of a man, a registered sex offender, who had used Gmail to send images of a minor girl.
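To give a rough sense of how this kind of hash matching works, here is a minimal sketch. PhotoDNA itself is proprietary and not publicly available, so the sketch substitutes a simple "average hash" built with Pillow to stand in for the fingerprinting step; the function names, the distance threshold, and the empty hash database are all hypothetical, not Google's actual implementation.

```python
# Illustrative sketch only: fingerprint an uploaded image, then compare the
# fingerprint against a database of fingerprints of known abusive images.
# The perceptual hash used in production (PhotoDNA) is proprietary; this
# stand-in uses a basic 64-bit "average hash" computed with Pillow.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink and grayscale the image, then build a 64-bit fingerprint where
    each bit records whether a pixel is brighter than the image's mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known CSAM (in practice such
# hash lists are supplied by clearinghouses like NCMEC).
KNOWN_HASHES: set[int] = set()

def matches_known_material(path: str, threshold: int = 5) -> bool:
    """Flag an upload if its fingerprint is within a few bits of a known
    hash, which keeps the match robust to resizing and re-compression."""
    fingerprint = average_hash(path)
    return any(hamming_distance(fingerprint, known) <= threshold
               for known in KNOWN_HASHES)
```

The key property of hash matching is that it can only identify images already present in a known database, rather than judging the content of newly taken photos.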