Markson-Young
> Hi, also sorry for my late reply.
>
> printing `target_layers` shows as: `[Conv2d(59, 59, kernel_size=(5, 5), stride=(1, 1))]`
>
> Using `torchsummary` to get the expected output for...
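For context, this is roughly how such a `target_layers` list can be printed and the model inspected with `torchsummary`. The model below is a placeholder, not the actual model from this thread, so only the pattern (not the shapes) should be taken literally:

```python
import torch.nn as nn
from torchsummary import summary

# Placeholder model; the real model in this issue is not shown in full.
model = nn.Sequential(
    nn.Conv2d(3, 59, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(59, 59, kernel_size=5),  # comparable to the layer printed above
)

# Pick the last Conv2d as the Grad-CAM target layer and print it.
target_layers = [m for m in model.modules() if isinstance(m, nn.Conv2d)][-1:]
print(target_layers)  # e.g. [Conv2d(59, 59, kernel_size=(5, 5), stride=(1, 1))]

# torchsummary prints per-layer output shapes for a given input size.
summary(model, input_size=(3, 224, 224), device="cpu")
```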
@ross-Hr Hello! This question has been bothering me for a few days, and I tried a lot of things but I couldn't solve it. I carefully ran through and compared...
I found that when visualizing with grad-cam, `register_forward_hook` was registered on my own model, but the hook was never executed when `return self.model(x)` ran inside the `ActivationsAndGradients` class, which means `save_activation`...
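One way this symptom can appear (not necessarily what is happening in your model, just a common cause): if the forward pass never calls the exact module instance the hook was registered on, for example because `forward` uses the functional API with the layer's weights, the forward hook never fires and nothing like `save_activation` ever runs. A minimal reproduction of that symptom:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HookedConv(nn.Module):
    """Toy model whose forward bypasses the registered submodule."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        # Using the functional API with the module's weights means
        # self.conv.__call__ is never invoked, so forward hooks
        # registered on self.conv do not fire.
        return F.conv2d(x, self.conv.weight, self.conv.bias, padding=1)

model = HookedConv()
fired = []
model.conv.register_forward_hook(lambda m, inp, out: fired.append(out.shape))

model(torch.randn(1, 3, 16, 16))
print(fired)  # [] -> the hook never ran, the same symptom as save_activation not being called
```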
Thank you for your reply. Grad-cam caught my eye in some papers, and I wanted to implement reliable feature visualizations in my own models. So, I tried to use grad-cam...
Before I used grad-cam, I tried using `register_forward_hook` to extract features from the middle layer of my model myself, and it worked.

```
hooks = [
    model.backbone_r.layer3.register_forward_hook(
        lambda self, input,...
```
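For completeness, here is a self-contained version of that hook pattern. The `backbone_r.layer3` path above belongs to my own model; the snippet below substitutes torchvision's `resnet18` so it runs on its own:

```python
import torch
import torchvision

# Stand-in backbone; in my code this is model.backbone_r.
model = torchvision.models.resnet18()

features = []
hooks = [
    model.layer3.register_forward_hook(
        # Forward hooks receive (module, input, output); keep the output.
        lambda module, input, output: features.append(output.detach())
    )
]

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

print(features[0].shape)  # feature map from layer3, e.g. torch.Size([1, 256, 14, 14])

# Remove the hooks so they don't keep firing on later forward passes.
for h in hooks:
    h.remove()
```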
I got the same error. The error message is as follows:

/HOME/scw6580/.conda/envs/MMEngine/lib/python3.9/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by...
Single-GPU training works fine; the problem only occurs when training with two or more GPUs. Here is my training command: `sbatch -p gpu_4090 --gpus=2 tools/dist_train.sh configs/dino/dino_4scale_r50_8xb2_12e_coco.py 2`
> Hi, the error message provided is not enough to find out the failure reason. BTW, you can install it with the pre-built packages by the following commands.
>
> ...