Trained on a large-scale dataset, like other vision foundation models, the Segment Anything Model (SAM) can generate fine-grained masks from manually defined visual prompts. Despite its remarkable success, however, it does not readily generalize to the segmentation of flexible objects such as garments: unlike most objects, garments exhibit complex topological and geometric configurations, in particular strong self-occlusions and deformations.
In this internship, we aim to leverage the promptable segmentation capability of SAM for the challenging problem of garment image segmentation. Our focus will be on developing a dedicated prompt tuning and learning strategy that generates optimal prompts for SAM, enabling accurate and efficient segmentation of garment images.
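As a point of departure, a naive baseline for prompt generation could sample point prompts from a coarse garment mask (e.g., produced by a cheap detector) before any tuning or learning is applied. The sketch below is illustrative only and is not part of the internship description: the function `generate_point_prompts`, its parameters, and the heuristic itself are hypothetical. It follows SAM's point-prompt convention of `(x, y)` coordinates with labels 1 (foreground) and 0 (background).

```python
import numpy as np

def generate_point_prompts(coarse_mask, n_pos=3, n_neg=2, seed=0):
    """Sample positive points inside a coarse garment mask and negative
    points outside it. Returns (points, labels) where points is an
    (N, 2) array of (x, y) coordinates and labels uses SAM's convention:
    1 = foreground, 0 = background. Hypothetical baseline, not SAM API."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(coarse_mask)          # pixels inside the mask
    bg_ys, bg_xs = np.nonzero(~coarse_mask)   # pixels outside the mask
    pos_idx = rng.choice(len(xs), size=min(n_pos, len(xs)), replace=False)
    neg_idx = rng.choice(len(bg_xs), size=min(n_neg, len(bg_xs)), replace=False)
    points = np.concatenate([
        np.stack([xs[pos_idx], ys[pos_idx]], axis=1),
        np.stack([bg_xs[neg_idx], bg_ys[neg_idx]], axis=1),
    ])
    labels = np.concatenate([np.ones(len(pos_idx), dtype=int),
                             np.zeros(len(neg_idx), dtype=int)])
    return points, labels

# Toy example: a square "garment" region in an 8x8 image.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
pts, lbls = generate_point_prompts(mask)
```

Such `(points, labels)` pairs could then be fed to SAM's predictor as `point_coords` and `point_labels`; the internship's prompt-learning strategy would replace this heuristic with learned, garment-aware prompts.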
We will proceed with the tasks detailed in the full internship offer:
https://mlms.icube.unistra.fr/img_auth_namespace.php/4/4c/Stage-Garment-SAM-2025_En.pdf
(c) GdR IASIS - CNRS - 2024.