Apple has announced plans to launch a new artificial intelligence (AI)-powered scanning feature in the UK to detect abusive content and protect child safety.
The feature automatically recognizes and blurs sexually explicit photographs sent to young users via Apple’s Messages app.
Apple said the feature uses on-device artificial intelligence to preserve privacy: all interventions are processed locally, with nothing sent to Apple or anyone else. The company introduced the same feature in the United States last year.
The “communication safety in Messages” feature allows parents to enable alerts on their children’s iPhones. With the setting switched on, if a child receives a photo containing nudity, the photo will be blurred, and the child will be warned that it may contain sensitive content and directed to resources from child safety organizations.
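To illustrate the blurring step, here is a minimal Swift sketch using Core Image’s Gaussian blur. The `blurSensitiveImage` function and its radius are hypothetical choices for illustration only; Apple’s actual Messages implementation is not public.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Illustrative only: applies the kind of full-image blur Messages
// shows over a flagged photo. The function name and radius are
// hypothetical; Apple's real implementation is private.
func blurSensitiveImage(_ image: CIImage, radius: Float = 40) -> CIImage {
    let filter = CIFilter.gaussianBlur()
    filter.inputImage = image.clampedToExtent() // clamp to avoid dark edges
    filter.radius = radius
    // Crop back to the original extent after clamping.
    return filter.outputImage!.cropped(to: image.extent)
}
```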
Conversely, when a child tries to send such content, the feature discourages them from sending the images and offers the option of messaging an adult instead. Apple has said, however, that it will not alert parents without the child’s consent. Devices running iOS, iPadOS, watchOS, and macOS will support the feature.
Apple, in a statement, said, “Messages analyzes image attachments and determines if a photo contains nudity while maintaining the end-to-end encryption of the messages.” The statement further mentioned that the technology is designed to ensure that no evidence of nudity detection ever leaves the device and that Apple does not have access to the messages.
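As a rough sketch of what such on-device detection looks like, the snippet below uses Apple’s SensitiveContentAnalysis framework, a public API Apple later shipped (in iOS 17) for exactly this kind of local check. It is an assumption-laden illustration of the idea that analysis runs entirely on the device, not a description of how Messages itself is implemented.

```swift
import SensitiveContentAnalysis

// A minimal sketch of on-device nudity detection, assuming the
// SensitiveContentAnalysis framework. The Messages feature itself is
// closed source; this only illustrates that inference happens locally
// and no evidence of detection leaves the device.
func shouldBlur(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()
    // If no intervention policy is enabled on this device, skip analysis.
    guard analyzer.analysisPolicy != .disabled else { return false }
    do {
        // All inference runs on-device; no network call is made.
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        return false
    }
}
```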
The feature will soon be rolled out via a software update.