Divide and Contrast Explained!

This video covers Divide and Contrast (DnC), an interesting strategy for using clustering inside the contrastive self-supervised learning pipeline. The three-stage pipeline (base contrastive pretraining, clustering, and expert distillation) trains local expert models that get a stronger representation-learning signal from the cluster assignments! A rough sketch of the pipeline shape follows the chapter list below.

Paper Links:
Divide and Contrast:
BYOL:
SwAV:
SCAN:
Yannic Kilcher’s explanation of SCAN:
Keras Code Examples of SCAN:
Self-Damaging Contrastive Learning:

Chapters
0:00 Paper Title
0:04 Heavy-tailed unlabeled data
1:05 DnC Algorithm
3:26 MoCLR Design
5:55 Expert Distillation
7:49 Results
12:28 Algorithm Ablations
13:20 Clustering in Self-Supervised Learning
15:35 Class Imbalance in Self-Supervised Learning

Thanks for watching!
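The three-stage structure described above can be sketched in a few lines. This is only a toy illustration under stated assumptions, not the authors' code: the `train_encoder` helper here stands in for MoCLR-style contrastive training with a random projection, k-means plays the role of the clustering step, and a least-squares fit is a crude stand-in for the distillation objective. Only the overall shape (base training, divide via clustering, per-cluster experts, distill back into one student) follows the video.

```python
# Toy sketch of the Divide-and-Contrast pipeline shape (not the paper's implementation).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def train_encoder(data, dim=16):
    # Placeholder for contrastive (MoCLR-style) training: just a random projection.
    W = rng.normal(size=(data.shape[1], dim))
    return lambda x: x @ W

# Unlabeled "heavy-tailed" data stand-in: 1000 samples, 32 features.
data = rng.normal(size=(1000, 32))

# Stage 1: train a base model on all of the data.
base = train_encoder(data)

# Stage 2: cluster the base embeddings and divide the dataset into subsets.
assignments = KMeans(n_clusters=4, n_init=10).fit_predict(base(data))
subsets = [data[assignments == k] for k in range(4)]

# Stage 3: train one local expert per cluster ("contrast" within each subset)...
experts = [train_encoder(subset) for subset in subsets]

# ...then distill base + experts into a single student; here a least-squares fit
# to the concatenated teacher embeddings stands in for the distillation loss.
targets = np.concatenate([base(data)] + [e(data) for e in experts], axis=1)
student_W, *_ = np.linalg.lstsq(data, targets, rcond=None)
student = lambda x: x @ student_W

print("student embedding shape:", student(data).shape)
```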