Performance on cityscapes #5

Open
EchoAmor opened this issue Jan 21, 2019 · 8 comments

Comments

@EchoAmor

Thanks for your work! I am a student.
I have trained your model on Cityscapes. I only modified train.py and config.py in /deeplabv3plus-pytorch/experiment/deeplabv3+voc/ and the dataset path in cityscapes.py under /datasets/, and I used 2 GPUs. But the results are very bad: the test result is only 19.89%, and the images I see during training with tensorboardX seem to be missing some classes.

Did I do something wrong? Is there anything else I should have modified? Hoping for your response, thanks very much!
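For context, a minimal sketch of the kind of config.py edits being described here; the field names below are illustrative assumptions, not necessarily the repository's exact keys.

```python
# Hypothetical sketch of switching the experiment config from VOC to Cityscapes.
# Field names (DATA_NAME, DATA_PATH, MODEL_NUM_CLASSES, TRAIN_GPUS) are assumed.
class Configuration:
    def __init__(self):
        self.DATA_NAME = 'cityscapes'            # dataset selector
        self.DATA_PATH = '/path/to/cityscapes'   # root containing leftImg8bit/ and gtFine/
        self.MODEL_NUM_CLASSES = 19              # Cityscapes has 19 training classes
        self.TRAIN_GPUS = 2                      # GPUs used in this report

cfg = Configuration()
```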


@YudeWang
Owner

I see the training step count in tensorboardX is only 8500, which is not enough. I set 46 epochs for PASCAL VOC 2012, which equals about 30k iterations. For the Cityscapes dataset, my experiment used epochs=160 with batch size 8. You can change the settings according to your devices, but please keep enough training iterations.
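As a rough sanity check on iteration counts under these settings (the split sizes and the VOC batch size of 16 are assumptions inferred from the numbers above, not values read from the repository):

```python
# Back-of-the-envelope count of optimizer steps for a given epoch/batch schedule.
def total_iterations(num_images, epochs, batch_size):
    return num_images * epochs // batch_size

# Assumed standard split sizes: 10582 augmented VOC train images,
# 2975 Cityscapes fine-annotation train images.
print(total_iterations(10582, 46, 16))   # ~30k iterations for PASCAL VOC 2012
print(total_iterations(2975, 160, 8))    # ~59k iterations for Cityscapes
```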

The original DeepLabv3+ paper says that the global average pooling branch of ASPP hurts performance on Cityscapes, so please remove it.

The repository has been updated recently, please update to the new version. Thanks for your comment!

@YudeWang changed the title from "very low mIOU on cityscapes" to "performance on cityscapes" on Jan 22, 2019
@YudeWang changed the title from "performance on cityscapes" to "Performance on cityscapes" on Jan 22, 2019
@EchoAmor
Author

Thanks! I'll try it again today.

@EchoAmor
Author

Hello! I have changed to 200 epochs and updated to your new version, still with 2 GPUs. But I didn't find the global average pooling branch of ASPP, and the results are still very low. Did you mean I need to remove ASPP itself? I don't know how to solve this problem. Have you trained on Cityscapes? What results did you get?

@Arenops

Arenops commented Jan 25, 2019

Trained on Cityscapes with 4 GPUs, batch size 20, and lr 0.0006 in config.py under deeplabv3+voc, and got 24.867% mIoU after 32 epochs. The loss stays around 0.3-0.6 and no longer decreases.
@YudeWang any advice? Thanks. Still training and will update once I get better results.

@YudeWang
Owner

@EchoAmor The global average pooling branch is branch5 in ASPP.py. My experiment was done three months ago and achieved 75% mIoU, and I believe the new model can perform even better. I will retest it during the winter vacation.
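For illustration, here is a minimal sketch of an ASPP head without the global-average-pooling branch (what the comment above calls branch5); the layer structure and names are assumptions, not the repository's exact ASPP.py.

```python
import torch
import torch.nn as nn

class ASPPNoGAP(nn.Module):
    """Illustrative ASPP variant with the image-pooling (global average pooling)
    branch removed, as suggested for Cityscapes. Not the repo's exact code."""
    def __init__(self, in_ch, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # 1x1 conv branch
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # atrous 3x3 conv branches
        self.atrous = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        # Note: no branch5 (AdaptiveAvgPool2d + 1x1 conv) here.
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (1 + len(rates)), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        feats = [self.branch1(x)] + [branch(x) for branch in self.atrous]
        return self.project(torch.cat(feats, dim=1))
```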

@Arenops Your initial learning rate is too small (I use 0.007) and the number of iterations is not enough (it should be >30k). Please check the recommendations above.
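A sketch of the "poly" learning-rate schedule commonly used with DeepLab models, starting from the 0.007 base rate mentioned above; the power of 0.9 and the 60k max_iter are assumptions, not values stated in this thread.

```python
# Poly learning-rate decay: lr = base_lr * (1 - cur_iter / max_iter) ** power
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    return base_lr * (1 - cur_iter / max_iter) ** power

base_lr = 0.007
max_iter = 60000   # e.g. ~160 Cityscapes epochs at batch size 8
for it in (0, 15000, 30000, 45000):
    print(it, round(poly_lr(base_lr, it, max_iter), 5))
```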

@EchoAmor
Author

Thanks very much for your patience! I'll keep trying and will update my results. @YudeWang

@EchoAmor
Author

@Arenops Hello! Have you got results now? Can you share them? Thanks a lot!

@Flames60

@Arenops Hello! Have you got results now? Can you share them? Thanks a lot!

Hello, how about your performance? What learning rate, crop size, and pretrained model did you use?
