atomtony

Results: 10 comments by atomtony

Modify the file `experiments/mpii/hrnet/w32_256x256_adam_lr1e-3.yaml` and set `BATCH_SIZE_PER_GPU: 16` under the `TRAIN:` section.
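
Equivalently, here is a minimal sketch of the same override done programmatically with yacs, the config library the HRNet codebase uses for its experiment files; the config node below is a stand-in, not the repo's full default config:

```python
# Minimal sketch, assuming a yacs-style config like the HRNet codebase uses.
# The cfg built here is a stand-in for the repo's default config object.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.TRAIN = CN()
cfg.TRAIN.BATCH_SIZE_PER_GPU = 32  # placeholder default

# Apply the same change the YAML edit makes:
cfg.merge_from_list(['TRAIN.BATCH_SIZE_PER_GPU', 16])
print(cfg.TRAIN.BATCH_SIZE_PER_GPU)  # -> 16
```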

INFO:mmcv.runner.runner:workflow: [('train', 5), ('val', 1)], max: 65 epochs
INFO:mmcv.runner.runner:Epoch [1][100/125] lr: 0.10000, eta: 0:16:44, time: 0.125, data_time: 0.015, memory: 858, loss: 2.3419
INFO:mmcv.runner.runner:Epoch [2][100/125] lr: 0.10000, eta: 0:10:46, time: 0.059,...

> I found that the core training phase is done in the mmcv module (on my machine it is at /xxxxxxx/miniconda3/lib/python3.7/site-packages/mmcv-0.4.3-py3.7-linux-x86_64.egg/mmcv/runner/runner.py):
> `def train(self, data_loader, **kwargs): self.model.train() self.mode = 'train'...`
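
For reference, the quoted method continues roughly like this in 0.4.x-era mmcv; treat this as a paraphrased sketch rather than the exact library source, since the body varies between versions:

```python
# Paraphrased sketch of Runner.train from 0.4.x-era mmcv -- not the exact
# library source; check the runner.py under your site-packages for the
# authoritative version.
def train(self, data_loader, **kwargs):
    self.model.train()
    self.mode = 'train'
    self.data_loader = data_loader
    self.call_hook('before_train_epoch')
    for i, data_batch in enumerate(data_loader):
        self._inner_iter = i
        self.call_hook('before_train_iter')
        # batch_processor runs the forward pass and computes the loss
        outputs = self.batch_processor(
            self.model, data_batch, train_mode=True, **kwargs)
        if 'log_vars' in outputs:
            self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
        self.outputs = outputs
        # the optimizer step is performed by a hook (e.g. OptimizerHook) here
        self.call_hook('after_train_iter')
        self._iter += 1
    self.call_hook('after_train_epoch')
    self._epoch += 1
```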

Finally, I modified the training_hooks configuration of the train.yaml file, and the changes are as follows:

```yaml
training_hooks:
  lr_config:
    policy: 'step'
    step: [20, 30, 40, 50]
  log_config:
    interval: 100
    hooks: ...
```
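
With `policy: 'step'`, mmcv's step LR updater multiplies the learning rate by a decay factor (0.1 by default) each time one of the `step` milestones is passed. A small illustrative sketch of that rule follows; the helper is hypothetical, not the library code:

```python
# Illustrative sketch of the step decay rule configured above; mmcv's
# StepLrUpdaterHook uses gamma=0.1 by default. This helper is hypothetical,
# not part of mmcv or mmskeleton.
def step_lr(base_lr, epoch, steps=(20, 30, 40, 50), gamma=0.1):
    passed = sum(epoch >= s for s in steps)  # milestones already reached
    return base_lr * gamma ** passed

# With base_lr=0.1 as in the log above: 0.1 until epoch 20, then 0.01,
# then 0.001 from epoch 30, and so on.
print(step_lr(0.1, 25))  # ~0.01
```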

> Hey,
> Did you use the same training configuration file as example_dataset?

### train.yaml

```yaml
argparse_cfg:
  gpus:
    bind_to: processor_cfg.gpus
    help: number of gpus
  work_dir:
    bind_to: processor_cfg.work_dir
    help: the dir...
```

What version of Python are you using?

> python json2prototxt.py --mx-json ./mnet.25/mnet.25-symbol.json --cf-prototxt ./mnet.25.prototxt
> python mxnet2caffe.py --mx-model ./mnet.25/mnet.25 --mx-epoch 0 --cf-prototxt ./mnet.25.prototxt --cf-model ./mnet.25.caffemodel
>
> base_conv_layer.cpp:170] Check failed: channels_ % group_ == 0 (64 vs....
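
That assertion comes from Caffe's base_conv_layer.cpp: a grouped convolution's input channel count must be divisible by its `group`, which typically breaks when a depthwise convolution (MXNet `num_group`) is not carried over correctly into the generated prototxt. A hypothetical illustration of the invariant, not code from either converter script:

```python
# Hypothetical helper mirroring the invariant Caffe asserts in
# base_conv_layer.cpp; not part of json2prototxt.py or mxnet2caffe.py.
def check_grouped_conv(in_channels, num_output, group):
    # Caffe requires both channel counts to be divisible by group.
    if in_channels % group != 0:
        raise ValueError(f'channels_ % group_ == 0 failed '
                         f'({in_channels % group} vs. 0)')
    if num_output % group != 0:
        raise ValueError(f'num_output_ % group_ == 0 failed '
                         f'({num_output % group} vs. 0)')

check_grouped_conv(64, 64, 64)  # depthwise conv with group == channels: OK
check_grouped_conv(64, 64, 48)  # raises, like the failure quoted above
```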