inability to achieve results #8
Hi @tanlingp, could you provide more details about what you meant by "The Inception model I reproduced couldn't do what you did"? Typically, Inception models expect 299×299 input. However, Stable Diffusion cannot handle a resolution like 299 (its spatial dimensions must be divisible by 8), so we conducted our experiments at 224×224. This resolution change may introduce minor fluctuations in the results, but it should not significantly affect the overall conclusions. If you require 299×299 output, you might consider an alternative approach: first generate an image at a resolution compatible with Stable Diffusion (e.g., 304), then resize it to 299×299. Hope this helps.
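A minimal sketch of that workaround, assuming a diffusers text-to-image pipeline (the model ID and prompt here are illustrative, not taken from this repository):

```python
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

# Stable Diffusion requires spatial dimensions divisible by 8, so generate
# at 304x304, the nearest valid size above 299.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a dog", height=304, width=304).images[0]

# Then downscale to the 299x299 resolution that Inception models expect.
image_299 = image.resize((299, 299), Image.Resampling.BICUBIC)
image_299.save("output_299.png")
```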
With this Inception model, the attack success rate I reproduce is about 6% lower than your paper's results for ResNet50 and VGG19.
Could you provide more details, such as the input resolution, the specifics of the Inception model (e.g., whether it's the PyTorch default), and any other relevant hyperparameters?
The experimental setup follows the code you provided exactly. Normally the Inception model's input is 299×299, but here it is 224×224. What setup do you use, please?
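For context, a rough sketch of evaluating the PyTorch default Inception v3 at 224×224 (torchvision's implementation uses adaptive pooling, so it accepts 224×224 inputs even though its pretrained weights were trained at 299×299; the image path is illustrative):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# The PyTorch default Inception v3; adaptive average pooling lets it run on
# 224x224 inputs despite being trained at 299x299.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # the resolution used in this repository
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("adv_example.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    pred = model(img).argmax(dim=1)
print(pred.item())
```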
This seems unusual. 🤔 We'll retest the code in this repository to check whether any bug was introduced during the code cleanup phase. Stay tuned for updates.
Hi @tanlingp, I've re-run the code in this repository, and it appears to be functioning correctly. To expedite the process, I divided the 1000 images into 8 parts and executed them on a server with 8 RTX 4090 GPUs. The only modifications I made were to the parameters "images_root", "label_path", and "pretrained_diffusion_path" in main.py, pointing them to my local dataset and pretrained-weight paths. The results are as follows:
Due to differences in the environment, the results may not perfectly align with those in the paper, but they are generally similar.
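For anyone reproducing this, a hypothetical sketch of that 8-way split; the command-line flag and file layout are assumptions, not the repository's actual interface (the repository may instead expect these values to be edited directly in main.py):

```python
import os
import subprocess
from pathlib import Path

# Hypothetical helper: partition the 1000-image folder into 8 interleaved
# chunks and launch one main.py process per GPU. Flag names and paths are
# illustrative only.
images = sorted(Path("data/images").iterdir())
chunks = [images[i::8] for i in range(8)]

procs = []
for gpu, chunk in enumerate(chunks):
    list_file = Path(f"chunk_{gpu}.txt")
    list_file.write_text("\n".join(str(p) for p in chunk))
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))  # pin one GPU per process
    procs.append(subprocess.Popen(
        ["python", "main.py", "--image_list", str(list_file)], env=env))

for p in procs:
    p.wait()  # block until all 8 runs finish
```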
Okay, thanks for the reply. It could be a matter of environment differences.
🤔 I'm not entirely convinced that the environment difference alone would account for such a significant 6% variation in your reproduction (based on the above results, it seems to cause only minor changes). Please keep me posted if you uncover any other factors. |
I will continue to look into the issue and will contact you if I find anything.
The Inception model I reproduced couldn't achieve your results. We usually use 299×299 input for that model, but here it is 224×224. Does this have any effect? Looking forward to your reply.