vit-pytorch
PyTorch 2.0 support
Great work! But when I use torch==2.0.0, compilation of ViT fails. I get this warning:

```
[2023-03-27 12:49:31,505] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64)
   function: 'forward' (/opt/conda/lib/python3.8/site-packages/vit_pytorch/vit.py:19)
   reasons: ___check_obj_id(self, 140301176861456)
to diagnose recompilation issues, see https://pytorch.org/docs/master/dynamo/troubleshooting.html
```

This indicates that PyTorch gives up on compiling the model. Why does it happen, and is there a solution? Thanks!
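For reference, a minimal sketch of the kind of setup that hits this (hyperparameters follow the README example; the exact values likely don't matter):

```python
import torch
from vit_pytorch import ViT

model = ViT(
    image_size=256, patch_size=32, num_classes=1000,
    dim=1024, depth=6, heads=16, mlp_dim=2048,
)
compiled = torch.compile(model)  # requires torch >= 2.0

img = torch.randn(1, 3, 256, 256)
out = compiled(img)  # the warning above is emitted while this first call compiles
```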
+1
Facing the same problem.
This may help: https://mmdetection.readthedocs.io/zh_CN/latest/notes/faq.html
The gist, from the FAQ:

1. Make sure input images to the network are fixed-shape, not multi-scale.
2. Set the torch._dynamo.config.cache_size_limit parameter. TorchDynamo converts and caches the Python bytecode, and compiled functions are stored in the cache. When a check finds that a function needs to be recompiled, the function is recompiled and cached again. However, once the number of recompilations exceeds the configured maximum (64), the function is no longer cached or recompiled. As mentioned above, the loss calculation and post-processing parts of object detection algorithms are computed dynamically, so those functions have to be recompiled every time; setting torch._dynamo.config.cache_size_limit to a smaller value can therefore effectively reduce compilation time.

In MMDetection, you can set torch._dynamo.config.cache_size_limit through the environment variable DYNAMO_CACHE_SIZE_LIMIT (see the FAQ for the exact command).
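Outside of MMDetection, you can set the same knob directly in Python before compiling. A minimal sketch for this repo's ViT (the limit value and hyperparameters are illustrative, not a recommendation):

```python
import torch
import torch._dynamo
from vit_pytorch import ViT

# Illustrative value; the torch 2.0 default is 64. A larger limit tolerates
# more recompilations before Dynamo gives up on a frame, a smaller one makes
# it fall back to eager sooner.
torch._dynamo.config.cache_size_limit = 16

model = ViT(
    image_size=256, patch_size=32, num_classes=1000,
    dim=1024, depth=6, heads=16, mlp_dim=2048,
)
compiled = torch.compile(model)

# Fixed-shape input avoids shape-driven recompiles.
out = compiled(torch.randn(1, 3, 256, 256))
```

Whether raising or lowering the limit is the right move depends on why the recompiles happen in the first place; the troubleshooting page linked in the warning is the place to diagnose that.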