Meta has released the Segment Anything Model (SAM), an AI model that can recognise specific objects in an image. Alongside the model, the company has published a dataset of image annotations that it claims is the largest of its kind to date.
In a recent blog post, Meta’s research division explained that it has built SAM as an advanced object recognition model, designed to recognise items in pictures and videos even if it has not seen them before during training. The model lets users select objects by clicking on them or by typing text prompts such as “cat”. In a test, SAM responded to these prompts by precisely drawing boxes around several cats in a photo.
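For readers curious about what click-based prompting looks like in practice, the sketch below shows how a single foreground click could be passed to SAM using the publicly released segment_anything Python package. The checkpoint filename, image path, and click coordinates are illustrative placeholders rather than anything specified by Meta, and the released package accepts point and box prompts; text prompts like “cat” are not part of the open-source code.

```python
# Minimal sketch of point-prompted segmentation with the open-source
# segment_anything package (pip install segment-anything).
# Checkpoint path, image file, and coordinates are placeholders.
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM checkpoint (downloaded separately from Meta's repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read an image and hand it to the predictor (expects an RGB array).
image = cv2.cvtColor(cv2.imread("cats.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click at pixel (500, 375) acts as the prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),   # 1 marks a foreground point
    multimask_output=True,        # return several candidate masks
)

# Keep the highest-scoring mask for the clicked object.
best_mask = masks[np.argmax(scores)]
```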
Internally, Meta has been using SAM-like technologies to identify pictures, filter out objectionable content, and recommend posts to Facebook and Instagram users. According to the company, releasing SAM will widen access to this kind of cutting-edge technology beyond its own internal operations.
The company has made the SAM model and dataset available for download under a non-commercial license. However, those who upload their own images to the accompanying prototype must agree to use the tool only for research purposes.