
Meta has rolled out the second iteration of its Segment Anything Model (SAM). Unlike its predecessor, which was limited to still images, SAM 2 can label and follow objects in videos.
This ability to isolate an object and follow it as it enters and exits the frame is a breakthrough for editing software, achieved through a process called ‘segmentation.’
Segmentation is the process by which an AI (artificial intelligence) model associates each pixel in an image with a particular object. Training a model to master this task is a herculean effort: Meta, for its part, released a dataset of around 50,000 videos that was used in SAM 2's training.
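To make the idea concrete, a segmentation model's output can be pictured as a mask that assigns every pixel an object label. The toy array below is invented purely for illustration; a model like SAM 2 predicts such masks from the raw pixels:

```python
import numpy as np

# A tiny 4x6 "image" where each pixel carries an object ID:
# 0 = background, 1 = a dog, 2 = a ball. These values are made up
# for illustration; a real model infers them from the image itself.
mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 2],
    [0, 1, 1, 0, 2, 2],
    [0, 0, 0, 0, 0, 2],
])

# "Segmenting" the dog means selecting exactly the pixels labeled 1.
dog_pixels = (mask == 1)
print(f"The dog covers {dog_pixels.sum()} of {mask.size} pixels")
```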
Running SAM 2 consumes an enormous amount of computing power, so while it is free to use for now, that shouldn't be expected to remain the case.
Access to video segmentation stands to enhance video editing software by allowing individual objects in footage to be selected, manipulated, and repositioned, something current tools only approximate through painstaking manual rotoscoping.
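As a rough sketch of what this looks like in practice, Meta's open-source sam2 package exposes a video predictor that takes a click on one frame and propagates the resulting mask through the rest of the video. The checkpoint path, config name, frame directory, and click coordinates below are placeholders, and exact method names may vary between releases:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Placeholder paths: download an official checkpoint and config
# from Meta's sam2 repository before running this.
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml", "./checkpoints/sam2.1_hiera_large.pt"
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Point at the video (a directory of extracted frames in early releases).
    state = predictor.init_state(video_path="./my_clip_frames")

    # Prompt with a single foreground click on frame 0 to tell the model
    # which object to track; (210, 350) is an arbitrary example coordinate.
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = foreground click
    )

    # Propagate the prompt through the clip: one mask per tracked object
    # on every frame, which an editor could use to cut out, blur, or
    # recolor that object.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # boolean per-pixel masks
```

Each per-frame mask is exactly the pixel-to-object association described earlier, just computed automatically across the whole video.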
On a grander scale, Meta hopes SAM 2 will help refine computer vision systems for autonomous cars, where its ability to identify moving objects and changing environments could improve both visual data processing and model training.
Much of the current hype around AI stems from text-to-video generation by industry players like OpenAI and Google. Video segmentation is less flashy, so the general public may overlook it for now, but it is a significant leap for those who work with video, and its value will become clear to everyone else in time.