fvd-comparison
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.dropbox.com', port=443)
Hi, when I use your code, I run into this problem: requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.dropbox.com', port=443): Max retries exceeded with url: /s/ge9e5ujwgetktms/i3d_torchscript.pt?dl=1 (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response'))). What should I do? I also have another question: must the video resolution be 224x224? If my videos are 128x128, what should I do? Thanks!
Hi, sorry for replying so late. Could it be the case that you had some connection or VPN issues? Which script exactly do you run?
When I run the function compute_our_fvd and it tries to fetch the detector, I hit the following problem: requests.exceptions.SSLError: HTTPSConnectionPool(host='www.dropbox.com', port=443): Max retries exceeded with url: /s/ge9e5ujwgetktms/i3d_torchscript.pt?dl=1 (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)')))
Looking forward to your suggestions!
A workaround is to download the detector manually from https://www.dropbox.com/s/ge9e5ujwgetktms/i3d_torchscript.pt?dl=0
save it to /some/directory/location/i3d_torchscript.pt,
and then replace the link with the file path here:
- https://github.com/universome/fvd-comparison/blob/master/compare_metrics.py#L33
- https://github.com/universome/fvd-comparison/blob/master/compare_models.py#L34
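In case it helps, here is a minimal sketch of what that replacement could look like. The helper `resolve_detector_source` is hypothetical (not part of the repo), and the local path is just the example location from above; the returned path can then be handed to `torch.jit.load` as the scripts already do:

```python
import os

# Original download link used by compare_metrics.py / compare_models.py
DETECTOR_URL = 'https://www.dropbox.com/s/ge9e5ujwgetktms/i3d_torchscript.pt?dl=1'

def resolve_detector_source(local_path: str) -> str:
    """Return the manually-downloaded checkpoint if it exists,
    otherwise fall back to the Dropbox URL."""
    return local_path if os.path.isfile(local_path) else DETECTOR_URL

# e.g. detector = torch.jit.load(resolve_detector_source(
#          '/some/directory/location/i3d_torchscript.pt'))
```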
Thanks! BTW, I also have a question about the data resolution. Does the I3D model support resolutions other than [16, 224, 224]?
Hi! It can do resizing under the hood. Here is how we use it in the main repo:
https://github.com/universome/stylegan-v/blob/master/src/metrics/frechet_video_distance.py#L23
If you are asking whether it's possible to apply it to a different resolution without resizing, that only works if you alter the source code of the original model, since its final prediction layers are adapted to a 224x224 input resolution. You can check its source code here:
- https://github.com/hassony2/kinetics_i3d_pytorch/blob/master/src/i3dpt.py#L261
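If you would rather do the resizing yourself before calling the detector, a sketch like the following (my own assumption about the tensor layout, `[batch, frames, channels, height, width]`, not the repo's exact code) upsamples 128x128 clips to 224x224 with bilinear interpolation:

```python
import torch
import torch.nn.functional as F

def resize_videos(videos: torch.Tensor, size: int = 224) -> torch.Tensor:
    """Resize a [B, T, C, H, W] video batch to size x size per frame."""
    b, t, c, h, w = videos.shape
    # F.interpolate expects [N, C, H, W], so fold time into the batch dim
    flat = videos.reshape(b * t, c, h, w)
    flat = F.interpolate(flat, size=(size, size),
                         mode='bilinear', align_corners=False)
    return flat.reshape(b, t, c, size, size)

videos = torch.rand(2, 16, 3, 128, 128)   # e.g. two 128x128 clips
resized = resize_videos(videos)           # now 224x224 per frame
```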
Thanks!!!!
Can I evaluate videos with fewer than 16 frames, like 10 frames?
Hi @martinriven, there is a lower limit on the number of frames, since otherwise the sequence length becomes smaller than the kernel size of the Conv3D pyramid, and I do not remember what that limit is. What I would do in your case is just try different sequence lengths and see when the model fails :)
Also, you can linearly interpolate the sequences to 16 frames, or pad them with mirror/zero padding.
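Both options can be sketched roughly as below. This is my own illustration, not code from the repo; it assumes a `[batch, frames, channels, height, width]` layout, and the mirror padding assumes you need to add at most as many frames as the clip already has:

```python
import torch
import torch.nn.functional as F

def interpolate_frames(videos: torch.Tensor, target: int = 16) -> torch.Tensor:
    """Linearly resample a [B, T, C, H, W] clip to `target` frames."""
    b, t, c, h, w = videos.shape
    # treat time as the length dim of a 1-D signal: [B, C*H*W, T]
    flat = videos.permute(0, 2, 3, 4, 1).reshape(b, c * h * w, t)
    flat = F.interpolate(flat, size=target, mode='linear', align_corners=True)
    return flat.reshape(b, c, h, w, target).permute(0, 4, 1, 2, 3)

def mirror_pad_frames(videos: torch.Tensor, target: int = 16) -> torch.Tensor:
    """Extend a [B, T, C, H, W] clip to `target` frames by reflecting
    its tail (requires target - T <= T)."""
    t = videos.shape[1]
    extra = videos.flip(1)[:, :target - t]
    return torch.cat([videos, extra], dim=1)

clip = torch.rand(1, 10, 3, 64, 64)       # a 10-frame clip
a = interpolate_frames(clip)              # resampled to 16 frames
b = mirror_pad_frames(clip)               # mirror-padded to 16 frames
```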