Thanks for your excellent work again! I noticed that you fine-tune PWC-Net on the Sintel and KITTI training datasets, and I found a large performance gap between your fine-tuned PWC-Net and the original one on the "demo-kitti" dataset. Could you please share the fine-tuned parameter file? This would be a huge help for me, thanks!
As far as I understand, the fine-tuning of PWC-Net is described in the original PWC-Net paper itself.
Did you manage to resolve your issue? What was the cause?
From the original PWC-Net paper, https://arxiv.org/pdf/1709.02371.pdf
"We first train the models using the FlyingChairs dataset
in Caffe [28] using the Slong learning rate schedule introduced in [24], i.e., starting from 0.0001 and reducing the
learning rate by half at 0.4M, 0.6M, 0.8M, and 1M iterations. The data augmentation scheme is the same as that
in [24]. We crop 448 × 384 patches during data augmentation and use a batch size of 8. We then fine-tune the models on the FlyingThings3D dataset using the Sf ine schedule [24] while excluding image pairs with extreme motion
(magnitude larger than 1000 pixels). The cropped image
size is 768 × 384 and the batch size is 4. Finally, we finetune the models using the Sintel and KITTI training set and
will explain the details below"
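For anyone trying to reproduce that setup, here is a minimal sketch of the S_long learning-rate rule as I read it from the quoted paragraph: start at 1e-4 and halve at 0.4M, 0.6M, 0.8M, and 1M iterations. The function name and the way the milestones are encoded are my own illustration, not code from this repository or from the paper's Caffe configuration:

```python
# Hypothetical sketch of the S_long schedule described in the PWC-Net paper:
# base LR 1e-4, halved at 0.4M, 0.6M, 0.8M, and 1M iterations.
def s_long_lr(iteration, base_lr=1e-4,
              milestones=(400_000, 600_000, 800_000, 1_000_000)):
    """Return the learning rate for a given training iteration."""
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr *= 0.5  # halve the rate at each milestone that has been passed
    return lr

# Example: at 700k iterations two milestones have passed -> 1e-4 * 0.5 * 0.5 = 2.5e-5
print(s_long_lr(700_000))
```

The S_fine schedule used for FlyingThings3D fine-tuning follows the same pattern with a lower starting rate and different milestones (see [24] for the exact values), so the same kind of step function applies there too.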