WindVChen
Sometimes it may be because the given input image itself is a difficult sample for the classifier. In this case, both the original clean image and the attacked image will...
Glad to see the problem solved 👍 .
Hi @junyizeng , It seems that there is a problem with the Internet connection. Maybe you can check some possible solutions [here](https://github.com/huggingface/transformers/issues/10067).
Since I'm unfamiliar with the AutoDL platform, I'm not sure whether the way I run it with local files will be of any help. You can first download the files by directly running...
Hi @yuuma002 , Thanks for your attention. If the weights still fail to download automatically, you can try downloading them manually. Try to download all the files [here](https://huggingface.co/stabilityai/stable-diffusion-2-base/tree/main)...
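For reference, a minimal sketch of how a manually downloaded checkpoint could be wired in. The helper `resolve_model_path` is hypothetical (not part of this repo); it simply prefers a local diffusers checkpoint (identified by its `model_index.json`) and falls back to the Hub id otherwise:

```python
from pathlib import Path


def resolve_model_path(local_dir: str,
                       hub_id: str = "stabilityai/stable-diffusion-2-base") -> str:
    """Return a local checkpoint directory if it looks valid, else the Hub id.

    A manually downloaded diffusers checkpoint contains a model_index.json
    at its top level; if that file is missing, fall back to letting the
    library resolve the Hub id (which requires network access).
    """
    p = Path(local_dir)
    if (p / "model_index.json").is_file():
        return str(p)
    return hub_id


# Usage sketch (assumes diffusers is installed; not run here):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained(
#     resolve_model_path("./stable-diffusion-2-base"))
```

Pointing `from_pretrained` at a local directory this way avoids the network entirely once the files are in place.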
Hi @tanlingp, Could you provide more details regarding what you meant by "The inception model I reproduced couldn't do what you did"? Typically, Inception models expect **299x299** input resolution. However,...
Could you provide more details, like input resolution, specifics about the Inception model (e.g., whether it's the PyTorch default), and any other relevant hyperparameters?
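To illustrate the resolution point: a quick sketch of the 299x299 expectation, with the standard torchvision Inception v3 setup left as hedged comments (the exact transform in your pipeline may differ). The `check_input_resolution` helper is hypothetical, just to make the mismatch concrete:

```python
# Assumed setup for torchvision's pretrained Inception v3 (comments only,
# since the exact model/weights in question are unconfirmed):
# import torchvision.models as models
# import torchvision.transforms as T
# model = models.inception_v3(weights="IMAGENET1K_V1").eval()
# preprocess = T.Compose([
#     T.Resize(342), T.CenterCrop(299), T.ToTensor(),
#     T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
# ])


def check_input_resolution(height: int, width: int,
                           model_name: str = "inception_v3") -> bool:
    """True if the input resolution matches what the model family expects.

    Inception variants expect 299x299 inputs, unlike the 224x224 used by
    ResNet-50 / VGG-19; feeding 224x224 crops to Inception can noticeably
    shift clean accuracy and attack success rates.
    """
    expected = 299 if "inception" in model_name.lower() else 224
    return height == expected and width == expected
```

Checking this first rules out the most common source of Inception result discrepancies.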
This seems unusual. 🤔 We'll retest the code in this repository to check if there is any potential bug caused by the code cleanup phase. Stay tuned for updates.
Hi @tanlingp, I've re-run the code in this repository, and it appears to be functioning correctly. To expedite the process, I divided the 1000 images into 8 parts and executed...
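The splitting step above can be sketched as follows; `split_indices` is a hypothetical helper (the repo may shard differently), shown only to illustrate dividing the 1000 images into near-equal contiguous chunks for parallel runs:

```python
def split_indices(n_items: int, n_parts: int) -> list[list[int]]:
    """Split range(n_items) into n_parts near-equal contiguous chunks.

    The first (n_items % n_parts) chunks get one extra element, so every
    image index is covered exactly once across the parallel runs.
    """
    base, rem = divmod(n_items, n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + base + (1 if i < rem else 0)
        chunks.append(list(range(start, end)))
        start = end
    return chunks


# e.g. split_indices(1000, 8) yields 8 chunks of 125 indices each,
# one chunk per process/GPU.
```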
> This inception model is 6% less attackable than your paper for resnet50 and vgg19

🤔 I'm not entirely convinced that the environment difference alone would account for such a...