
Training from scratch getting worse results #7

Open
JunjieLiuSWU opened this issue Aug 18, 2022 · 2 comments
@JunjieLiuSWU

Hello, thank you for sharing your work. I trained from scratch on the Cityscapes data you provided for 20 epochs, but got worse results than Manydepth. Must DynamicDepth be trained from pre-trained models? And how many epochs did you train for?

@fengziyue
Member

Hi @JunjieLiuSWU :

It does not have to be trained from the Manydepth pre-trained models.

However, training from scratch does need more hyperparameter tuning. Our model can be considered as Manydepth plus several improvements (DOMD, occlusion-aware cost volume/loss). We found that first training Manydepth and enabling our improvements later makes the training more stable. In fact, Manydepth itself is hard to reproduce from scratch (see here), so we recommend using their pre-trained model.

The configuration in option.py is for training from the Manydepth pre-trained model. If you want to train from scratch, you can first disable our DOMD module, train for 20~40 epochs, then enable it and train for several more epochs.
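The staged schedule described above could be sketched as follows. Note this is an illustration, not the repository's actual code: the flag names (`use_domd`, etc.) and the `train_one_epoch` helper are hypothetical, and the real switches live in option.py.

```python
# Sketch of the two-stage schedule: plain Manydepth-style training first,
# then switch on the DynamicDepth additions for the remaining epochs.
# Flag names below are illustrative, not the actual options in option.py.

def stage_flags(epoch, warmup_epochs=30):
    """Return which DynamicDepth additions to enable at a given epoch.

    Stage 1 (epoch < warmup_epochs): DOMD and the occlusion-aware
    cost volume/loss are disabled, i.e. base Manydepth training.
    Stage 2 (epoch >= warmup_epochs): the improvements are enabled.
    """
    enabled = epoch >= warmup_epochs
    return {
        "use_domd": enabled,
        "use_occlusion_aware_cost_volume": enabled,
        "use_occlusion_aware_loss": enabled,
    }

# Hypothetical training-loop skeleton using the schedule:
# for epoch in range(total_epochs):
#     flags = stage_flags(epoch, warmup_epochs=30)
#     train_one_epoch(model, loader, **flags)
```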

@WangXuCh

> Hi @JunjieLiuSWU :
>
> It does not have to be trained from the Manydepth pre-trained models.
>
> However, training from scratch does need more hyperparameter tuning. Our model can be considered as Manydepth plus several improvements (DOMD, occlusion-aware cost volume/loss). We found that first training Manydepth and enabling our improvements later makes the training more stable. In fact, Manydepth itself is hard to reproduce from scratch (see here), so we recommend using their pre-trained model.
>
> The configuration in option.py is for training from the Manydepth pre-trained model. If you want to train from scratch, you can first disable our DOMD module, train for 20~40 epochs, then enable it and train for several more epochs.

So, how many epochs did you train on the pre-trained model provided by Manydepth, and did you freeze the teacher network (PoseNet and mono_depth) during this period?
