This is the official PyTorch implementation of our CVPR 2023 paper:
Contrastive Mean Teacher for Domain Adaptive Object Detectors
Shengcao Cao, Dhiraj Joshi, Liang-Yan Gui, Yu-Xiong Wang
In this repository, we include the implementation of Contrastive Mean Teacher (CMT), integrated with two base methods: Adaptive Teacher (AT, [code] [paper]) and Probabilistic Teacher (PT, [code] [paper]). Our code is based on the publicly available implementations of these two methods.
We follow the original AT and PT instructions to set up the environments and datasets. The details are included in their README files.
Here is an example script for reproducing our results of AT + CMT on Cityscapes -> Foggy Cityscapes (all splits):
```sh
# enter the code directory for AT + CMT
cd CMT_AT
# activate AT environment
conda activate at
# add the last two lines to enable CMT
python train_net.py \
      --num-gpus 4 \
      --config configs/faster_rcnn_VGG_cross_city.yaml \
      OUTPUT_DIR save/city_atcmt \
      SEMISUPNET.CONTRASTIVE True \
      SEMISUPNET.CONTRASTIVE_LOSS_WEIGHT 0.05
```
Similarly, for PT + CMT on Cityscapes -> Foggy Cityscapes (all splits), run the following steps:
```sh
# enter the code directory for PT + CMT
cd CMT_PT
# activate PT environment
conda activate pt
# add the last two lines to enable CMT
python train_net.py \
      --num-gpus 4 \
      --config configs/pt/final_c2f.yaml \
      MODEL.ANCHOR_GENERATOR.NAME "DifferentiableAnchorGenerator" \
      UNSUPNET.EFL True \
      UNSUPNET.EFL_LAMBDA [0.5,0.5] \
      UNSUPNET.TAU [0.5,0.5] \
      OUTPUT_DIR save/city_ptcmt \
      UNSUPNET.CONTRASTIVE True \
      UNSUPNET.CONTRASTIVE_LOSS_WEIGHT 0.05
```
- Other configuration options can be found in `configs`.
- To resume the training, simply add `--resume` to the command.
- To evaluate an existing model checkpoint, add `--eval-only` and specify `MODEL.WEIGHTS path/to/your/weights.pth` in the command, as shown below.
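For example, reusing the AT + CMT command above (the checkpoint path is a placeholder; substitute your own file):

```sh
# resume an interrupted AT + CMT run: same command as above, plus --resume
python train_net.py \
      --resume \
      --num-gpus 4 \
      --config configs/faster_rcnn_VGG_cross_city.yaml \
      OUTPUT_DIR save/city_atcmt \
      SEMISUPNET.CONTRASTIVE True \
      SEMISUPNET.CONTRASTIVE_LOSS_WEIGHT 0.05

# evaluate an existing checkpoint: add --eval-only and specify MODEL.WEIGHTS
python train_net.py \
      --eval-only \
      --num-gpus 4 \
      --config configs/faster_rcnn_VGG_cross_city.yaml \
      MODEL.WEIGHTS path/to/your/weights.pth \
      SEMISUPNET.CONTRASTIVE True \
      SEMISUPNET.CONTRASTIVE_LOSS_WEIGHT 0.05
```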
Here we list the model weights for the results included in our paper:
| Dataset | Method | mAP (AP50) | Weights |
| --- | --- | --- | --- |
| Cityscapes -> Foggy Cityscapes (0.02 split) | PT + CMT | 43.8 | link |
| Cityscapes -> Foggy Cityscapes (0.02 split) | AT + CMT | 50.3 | link |
| Cityscapes -> Foggy Cityscapes (all splits) | PT + CMT | 49.3 | link |
| Cityscapes -> Foggy Cityscapes (all splits) | AT + CMT | 51.9 | link |
| KITTI -> Cityscapes | PT + CMT | 64.3 | link |
| Pascal VOC -> Clipart1k | AT + CMT | 47.0 | link |
In addition to integrating CMT with AT and PT, we have also made some necessary changes to their code:
- Adaptive Teacher (AT)
  - For the VGG backbone, our code loads weights pre-trained on ImageNet. The weights are converted from Torchvision and can be downloaded here. Please put this file at `checkpoints/vgg16_bn-6c64b313_converted.pth` (see the example after this list).
  - For the Cityscapes and Foggy Cityscapes datasets, our code creates a cache file of the converted annotations when processing each dataset for the first time. Later experiments directly load that cache file, which greatly accelerates dataset building. You may also directly download the cache file here. Also, we disable segmentation mask loading by default to further speed up preprocessing. Check `adapteacher/data/datasets/cityscapes.py` and `adapteacher/data/datasets/cityscapes_foggy.py` for more details.
  - We fix a bug regarding checkpoint loading: facebookresearch/adaptive_teacher#50
- Probabilistic Teacher (PT)
  - We include datasets for Foggy Cityscapes with the 0.02 split: `VOC2007_foggytrain_0.02` and `VOC2007_foggyval_0.02`.
  - We change the `resume_or_load()` function in `trainer.py` so that it can correctly resume an interrupted training run.
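For instance, placing the converted VGG-16 weights could look like the following sketch. It assumes the downloaded file keeps its original name and that `checkpoints/` is relative to the `CMT_AT` directory; adjust both to your setup.

```sh
# place the converted VGG-16 weights where AT + CMT expects them
# (the source path is a placeholder for wherever you downloaded the file)
mkdir -p CMT_AT/checkpoints
mv path/to/downloaded/vgg16_bn-6c64b313_converted.pth \
   CMT_AT/checkpoints/vgg16_bn-6c64b313_converted.pth
```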
If you use our method, code, or results in your research, please consider citing our paper:
```bibtex
@inproceedings{cao2023contrastive,
  title={Contrastive Mean Teacher for Domain Adaptive Object Detectors},
  author={Cao, Shengcao and Joshi, Dhiraj and Gui, Liang-Yan and Wang, Yu-Xiong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={23839--23848},
  year={2023}
}
```
This project is released under the Apache 2.0 license. Code adapted from other open-source repositories follows its original license.