Visual Learning with Limited Labels

CVPR 2020 Workshop, Seattle, Washington

Deep learning has shown remarkable success in many computer vision tasks, but current methods typically rely on very large amounts of labeled training data and sufficient sample coverage of every training category (different viewing angles, lighting conditions, etc.) to achieve high performance. Collecting and annotating such large training datasets is costly, time-consuming, and in many cases impractical, as for certain tasks only a few examples, or none at all, may be available. This scarcity of labeled data becomes even more severe for visual classes whose annotation requires expert knowledge (e.g., medical imaging), classes that rarely occur, or tasks such as object detection and instance segmentation, where labeling requires more effort. The goal of this workshop is to bring together researchers from computer vision and machine learning to discuss emerging technologies for visual learning with limited labeled data, including methods for zero-shot and few-shot learning, active learning, unsupervised pre-training, semi-supervised learning, weakly-supervised learning, and others.

Check the arXiv paper related to our cross-domain few-shot learning challenge.

See also our ICCV 2019 Tutorial on Learning with Limited Labels.