Logging SimCLR losses
Hi,
I couldn't find any existing issues about this: how can I make MMSelfSup log the SimCLR (distributed) loss values?
Thanks in advance
Do you mean the log file? The log file in the work_dir records the loss values.
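For reference, the .json log is written one JSON object per line: the first line stores the environment/config info, and each later training record normally carries a loss entry. Below is a minimal sketch for pulling those values out, assuming the default TextLoggerHook output format; the timestamped file name is a placeholder, not taken from this thread.

```python
import json

# Path to the JSON log inside the work_dir; the timestamped file name here is
# a placeholder, substitute the actual file produced by your run.
log_path = 'work_dirs/selfsup/histoSimCLR_bs4/20220101_000000.log.json'

with open(log_path) as f:
    for line in f:
        record = json.loads(line)
        # Training records normally carry the loss under the "loss" key;
        # the first line (env/config info) and eval records are skipped.
        if record.get('mode') == 'train' and 'loss' in record:
            print(record['iter'], record['loss'])
```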
@fangyixiao18 Yes, the log file. It seems that neither the .log nor the .json file contains any loss information.
The .log file contents:
Name of parameter - Initialization information
backbone.encoder.0.0.convs.0.conv.weight - torch.Size([64, 3, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.encoder.0.0.convs.0.bn.weight - torch.Size([64]):
The value is the same before and after calling init_weights of SimCLR
...
neck.bn0.weight - torch.Size([128]):
The value is the same before and after calling init_weights of SimCLR
neck.bn0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of SimCLR
The .json file contents:
{"env_info": "sys.platform: linux\nPython: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:18) [GCC 10.3.0]\nCUDA available: True\nGPU 0,1,2,3: NVIDIA A10G\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 11.0, V11.0.221\nGCC: gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-13)\nPyTorch: 1.11.0\nPyTorch compiling details: PyTorch built with: ... log_level = 'CRITICAL'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\npersistent_workers = True\nopencv_num_threads = 0\nmp_start_method = 'fork'\nwork_dir = './work_dirs/selfsup/histoSimCLR_bs4/'\nauto_resume = False\ngpu_ids = range(0, 4)\n", "seed": 0, "exp_name": "config.py"}
Is your training process still working, or is it stuck after the log you provided?
Closing due to inactivity, please reopen if there are any further problems.