Last active a year ago

7 replies

5 views

  • YU

    I'm leaving links to two relevant papers here that I find interesting. I'll also need to update the awesome metric learning repo soon:
    https://arxiv.org/abs/2204.00570
    https://arxiv.org/abs/2204.02683

  • AN

    do you have an ELI5 summary?

  • YU

    Well, let me try. They are both about unsupervised domain adaptation, where we have labeled data from the source domain but only unlabeled data from the target domain. In that setting, contrastive pre-training, which is a form of unsupervised learning, helps the model learn features that generalize across the source and target domains. These features can be used for zero-shot tasks, as in metric learning, or for training linear classifiers without any labeled data in the target domain. I thought it might be relevant for easily fine-tuning usable models in data-scarce scenarios. It's like the brain relying on resemblance to associations it learned earlier in a different context.
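
    If it helps, here's a rough toy sketch of that recipe in code (my own illustration, not the exact setup from the papers; the random tensors and the tiny linear encoder just stand in for real images and a real pre-trained backbone):

    ```python
    # 1) contrastively pre-train an encoder on unlabeled images from BOTH domains,
    # 2) fit a linear classifier on labeled SOURCE features only,
    # 3) apply that classifier to TARGET images, for which we have no labels.
    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression

    # Placeholder data: random tensors stand in for real images.
    source_images = torch.randn(256, 3, 32, 32)   # labeled source domain (e.g. photos)
    source_labels = torch.randint(0, 10, (256,))
    target_images = torch.randn(256, 3, 32, 32)   # unlabeled target domain (e.g. sketches)

    # Stand-in for an encoder that was already pre-trained with a contrastive
    # objective on unlabeled images (see the sketch later in the thread).
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

    with torch.no_grad():
        source_feats = encoder(source_images).numpy()
        target_feats = encoder(target_images).numpy()

    # Linear probe: trained on source labels only, then used on the target domain.
    clf = LogisticRegression(max_iter=1000).fit(source_feats, source_labels.numpy())
    target_preds = clf.predict(target_feats)  # no target labels used anywhere
    ```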

  • AN

    "contrastive pre-training, which is an example of unsupervised learning"
    Doesn't it use the labelled data from the source domain? If so, why is it unsupervised?

    Also, how do you fine-tune without labels?

  • YU

    Contrastive learning is unsupervised because the training setup relies on an augmentation pipeline that generates pairs from unlabeled data on the fly. An augmentation function is applied to each image to produce a distorted counterpart, and the model learns useful features by being trained to recognize such pairs. A well-known example is Momentum Contrast (MoCo), which gives state-of-the-art results. Labeled data from the source domain are only used in the fine-tuning phase. Since the model has already learned good associations between visual features, even if we fine-tune it only on the labeled source-domain data, it can transfer that knowledge when classifying images in the target domain. For example, when we fine-tune a model on photos of dogs, it can also correctly classify sketches of dogs, because its features connect photos and sketches in a broader sense.
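
    To make the pre-training part concrete, here's a toy single training step (a SimCLR-style in-batch contrastive loss rather than MoCo's momentum queue, and the "augmentation" is just additive noise; again purely illustrative, not code from the papers):

    ```python
    # Two augmented views of the same image should get similar embeddings,
    # views of different images should not. No labels are used at all.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def augment(x):
        # Stand-in augmentation: just adds noise. Real pipelines use random
        # crops, flips, color jitter, blur, etc.
        return x + 0.1 * torch.randn_like(x)

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    images = torch.randn(64, 3, 32, 32)                 # one unlabeled batch
    z1 = F.normalize(encoder(augment(images)), dim=1)   # embeddings of view 1
    z2 = F.normalize(encoder(augment(images)), dim=1)   # embeddings of view 2

    # Similarity of every view-1 embedding to every view-2 embedding.
    logits = z1 @ z2.t() / 0.1                          # 0.1 = temperature
    # The positive for image i is its own second view, i.e. the diagonal.
    labels = torch.arange(images.size(0))
    loss = F.cross_entropy(logits, labels)              # InfoNCE-style loss

    opt.zero_grad()
    loss.backward()
    opt.step()
    ```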

  • AN

    oh, I thought contrastive learning was somewhat related to the contrastive loss 😆

  • YU

    The terminology is really full of mind traps 😁
